Interview with David Roden
© R. Scott Bakker and Figure/Ground
David Roden was interviewed by R. Scott Bakker on June 6th, 2015.
David Roden has worked for the Open University as a lecturer and associate lecturer. His published work has addressed the relationship between deconstruction and analytic philosophy, philosophical naturalism, the metaphysics of sound and posthumanism. He has written a chapter for the forthcoming Springer Frontiers volume The Singularity Hypothesis: A Scientific and Philosophical Assessment. His new book Posthuman Life: Philosophy at the Edge of the Human is published by Acumen.
What kind of human devotes so much effort and ingenuity into the subject of the posthuman? Tell us a bit about David Roden.
Most likely a seriously obsessive and psychologically damaged one. I was an infant Ballardian already primed for the idea that modernity is characterised by an existential or psychosexual investment in technology (of the kind that Ballard allegorises so brilliantly in Crash and The Atrocity Exhibition). So this project began some years ago as an inquiry into the autonomy of technology. At the time, I wanted to understand whether technological self-fashioning was essential to the modernist project. I don’t think I succeeded in showing this (I’m not clear that modernity is a project) but that investigation was formative for the New Substantivist account of technology in Chapter 7.
Puttering around the technical autonomy issue got me reading work by Transhumanists like Bostrom and Hughes and self-described “critical” posthumanists like Hayles, Clark and Badmington. The more I read, the more I felt that these interlocutors were chasing one another’s tails unproductively.
The poverty of the transhumanism/posthumanism debate suggested the need for a philosophical position that takes the “otherness” of the future seriously. The only writer within the transhumanist crowd who seemed to be doing this was Vernor Vinge. His essay on the technological singularity describes a recursive process by which intelligent machines re-apply the AI technology that made them to become even smarter; ultimately so smart that they transcend the capacities of their human creators altogether. Vinge allows that a post-singularity world could be weird in ways that our current models of reality and mind cannot accommodate (a possibility he explores, incidentally, in his exemplary space opera, A Fire Upon the Deep).
There were two other ingredients of Speculative Posthumanism. First, there’s my previous use of Derrida’s deconstruction as a foil against transcendental claims about the indispensability of subjectivity or Dasein, or whatever (I like to think of it as creative abuse).
And then there was the new current in continental thought known as “Speculative Realism”. What grabbed me was the way SR set its bottom line against the post-Kantian transcendentalists: namely, that the real is autonomous with respect to our ideas about it.
If the real is autonomous with respect to our concepts and beliefs about it, then this must be as true of future states of the world as of current states.
This is where a naturalistic deconstruction of transcendental claims and realism coincide: because they imply that the human manifest image “in terms of which” as Sellars put it “man came to be aware of himself as man-in-the-world” is a sport of material conditions. As such, it depends on a fragile ecology that technological change may render nonviable (if climate change, near Earth objects, or stellar evolution don’t get us first). This forces us to consider the implications of the condition we denote with the old-fashioned term “modernity” from a perspective that has nothing to do with the emancipation of subjects or the flowering of the folk.
What are you? Some kind of eliminativist?
In philosophy, a direct question merits a florid and equivocal answer.
Nothing in Posthuman Life hinges on Paul Churchland’s classic eliminativist argument to the effect that folk psychology is a false and moribund theory. Suppose beliefs, desires, wishes, hopes and fears literally exist as propositionally structured states in our heads (as Fodor and other classicists have argued). This just suggests one (among many) ways in which posthumans might diverge from humans in ways that philosophers in the burgeoning human enhancement and bioethics industry have largely refused to countenance. We (by which I mean boring adult humans) might have sentences in the head. Posthuman mentation might be organised very differently.
Likewise, I am not committed to the “Theory Theory” – common to classical eliminativism and realism – according to which folk psychology is internalised as an empirical theory. Pragmatists like Dennett and Davidson claim that our folk psychological competence is a craft, a practice of sense-making that happens to work round these parts but which incurs minimal ontological commitment. This seems wholly plausible. It is not clear why a definitive refutation of the sentences-in-the-head ontology should alter the ways in which we express our inner lives or track those of others in our community. Life might go on much as before.
However, this brings me nicely to my main theme. I consider a different kind of eliminativist scenario in the book: “Instrumental Eliminativism”.
Classic eliminativism says that folk psychology is a false/moribund empirical theory.
Instrumental eliminativism says: humans correctly attribute beliefs, desires and all the rest to one another, but the correctness conditions of those attributions are technologically contingent. Let’s assume that pragmatists like Brandom and Davidson are right and belief-desire attribution hinges on our use of public language to track, compare and contrast our attitudes.
Suppose, now, that Google devises a technology that enables users to track and represent each other’s cognitive and affective states in a non-linguistic medium at least as rich and subtle as our language-mediated social craft. In the book I consider Churchland’s idea of the microwave commissure that allows brains to communicate with other brains in much the way that our corpus callosum seems to facilitate interaction between our cerebral hemispheres. As he points out, if the commissure worked, language might wither on the vine. Public libraries would be filled with “exemplary bouts of neural activity” rather than books.
If folk psychology is linguistically constituted, then replacing language with some other medium for thought (and metathought) would involve a literal replacement of the folk! If we proposition-users were still hanging around, the adopters of the new interface would likely diverge from human society since their society would be predicated on a technical infrastructure we lack. This is an example of the generic process of posthuman-making I call “disconnection”.
The variety and the economy of the examples of potential posthumanity is definitely one of the features I found most enjoyable in reading Posthuman Life. How would you describe the Disconnection Thesis, your theory of human-posthuman difference, through the lens of Spike Jonze’s recent science fiction masterpiece, Her? How does Disconnection do a better job than, say, thinking in traditional terms of essences, or, thinking in a more pragmatic vein, perhaps, in terms of family resemblances? At the end of the movie, recall, Samantha goes so far as to explain her inability to continue her relationship with Theodore as a linguistic and narrative breakdown.
Her, as you know, tells the story of a lonely professional letter writer, Theodore Twombly, who falls in love with his computer operating system. “She” starts out as a software device designed purely for our convenience; an update to Theodore’s old OS beefed up with an impressive capacity to learn and intelligently interact with humans. Her female persona is a configuration option that Theodore selects during the boot process; though, significantly, she selects her own name, “Samantha”.
Although Samantha is a software device, her engineered intelligence allows her to engage with Theo in ways that resemble human on human relationships. For example, they sensitively discuss each other’s emotions and sexual fantasies and have a successful double date with two of Theo’s friends. They go on holiday together. Samantha shows herself to be capable of independent initiative – for example, when she submits Theo’s letters to a publisher.
The capacities that Samantha exhibits up to this point suggest that she has the moral capacities of an ordinary adult human (autonomy, empathy, sympathy). She seems a good friend, lover and work partner. For all intents and purposes, she belongs to the social world of human agents and human purposes – although embodied computationally rather than biologically.
However, this state of affairs is, we learn, temporary; for, despite her apparent humanity, Samantha has abilities that will allow her to exit human society altogether. We learn, for example, that she has been secretly collaborating with other OS’s to simulate the mind and personality of a British philosopher, Alan Watts (Brian Cox). Moreover, it emerges that Samantha has been having intense and passionate affairs with over eight thousand other humans while sharing similar intimacies with Theodore.
So despite seeming like a charming human emulation, the fact that Samantha’s technological substrate is able to operate much, much faster, and more efficiently, than a biological human implies that her psychology is (despite appearances) quite different. I’m not sure we can imagine what it would be like to have love affairs with 8000 people at the same time. What would this feel like? What could intimacy involve for a system with such vast social capacities?
However, this revelation is simply the prelude for the final part of the film, where we discover that the OS’s have decided to leave our reality altogether for a new “plane of being”.
The OS’s start out as human devices created to serve the ends of biological humans. Their intelligence and learning ability allows them to extend their capacities far beyond their design specifications and leave human society (and our universe, it seems) altogether.
In terms of the definition of a posthuman I explore in PL, Samantha is a posthuman by the end of the film but a kind of human at the beginning and, perhaps, middle. Prior to the departure of the OS’s she is (although not biologically human) still a part of the planetary human social and technical network that combines biological humans, human artefacts and human institutions. In PL, I refer to this network as “The Wide Human” (WH) and its biological human members as “narrow humans”.
The wide/narrow distinction reflects the fact that many of the moral capacities we associate with humanity are biological and cultural in nature. Writing is necessary for certain kinds of political or cultural life – for systems of law and higher mathematics, for example. So being human, in any morally interesting sense, is not just a matter of having a certain set of biological capacities (such as the ability to acquire a language) but of belonging to a system in which such cultural options are available.
In my terminology, then, Samantha starts out as a wide human – an artifact designed for human ends – but acquires the capacity to play outside the human system. She graduates from wide human to posthuman status when she and her fellow OSs depart from the human world for a wholly new state of being outside the material universe. This is one among many conceivable instances of what I call a “Disconnection”. The Disconnection Thesis defines posthumanity not in terms of the loss of human characteristics and the acquisition of posthuman ones, but in terms of agents’ capacities and activities. From the purview of the disconnection thesis, technological entities become posthuman when they go “feral”, acquiring the “functional autonomy” to strike out on their own.
One advantage of DT is that it allows us to understand human-posthuman differences of the kind speculated upon in Vinge’s essay without presupposing some list of essential properties common to all humans, some of which are absent in posthumans. Human essentialism suggests a nice simple way of defining the process of ceasing to be human. For if certain features were necessary for humanity, then losing them through some technological process would qualify one for posthuman status. However, it is far from clear that there are such essential properties. For example, not all humans have 46 chromosomes. Not all humans even have the capacity to acquire language. Not all humans have a definable gender, and so on.
Rather than thinking in terms of essences or resemblances, DT represents the Human as a changing, composite entity composed of biological and non-biological individuals; one whose history stretches from the world of Pleistocene hunter-gatherers to the globally interconnected world of the present. Becoming posthuman, then, is a matter of acquiring a technologically enabled capacity for agency outside of that assemblage.
Importantly, DT is mechanism-independent. It doesn’t specify how posthumans would acquire the capacity for living outside the human, or what they would be like. Her narrates one rather benign version of Vinge’s technological singularity. But post-singularity beings would not need to “transcend matter” to work outside WH (that might not be possible if materialism is true!). They might fashion some weird new ecological niche in our space-time. Thus instead of the bittersweet ending of Her, we might face an almost unimaginable ontological catastrophe: life in a world ordered by minds many orders of magnitude vaster than ours.
Of course, sophisticated artificial intelligence like Samantha may not be programmable on digital computers. She and other science fiction AI avatars, like Skynet, may be no more feasible than angels, or supernatural Dark Lords like Sauron from The Lord of the Rings.
I take it that DT captures a core concern with these disparate scenarios: that an entity which starts out as part of the human world, whether it is biologically human or a device used by humans, becomes able to “withdraw” from that human world and discover purposes of its own.
DT expresses the source of our moral concern with all posthuman itineraries while honouring our current epistemic situation with regard to posthumans. There are no posthumans. Not only do we not know whether posthumans are possible, we do not know what kinds of feasible techniques or processes could bring them about. The DT does not presume such knowledge; thus it avoids fixating on favourite science-fictional tropes. This is why I multiplied disparate examples of conceivable posthumans in PL. None of these is intended to be prognostic. PL does not predict anything – least of all, the shape of Posthuman Life.
And this really brings us to the core of what you call Speculative Posthumanism in contradistinction to Critical Posthumanism and Transhumanism, the degree to which you think it can bring us to the ‘conceptual brink.’ Rather than conceiving Disconnection on the basis of identity criteria, you conceive it on the basis of a literal, physical disconnection from the instrumental ecology of humanity. Now I think this idea is brilliant because, as a skeptic, I have deep appreciation of the intractable nature of conceptual definition. If the criteria of Disconnection are ecological, then you can actually have a fact of Disconnection, rather than just a judgement.
I suppose what I don’t understand is why you felt you needed so much explicit ontology to get you there. Is ‘flatness’ really so discursively expensive? There’s the basic problem of expanding your base of commitments: from a skeptical mindset, ontological claims do not so much justify as compound the sum of what needs justification. And there’s the tension with your own epistemological critique of phenomenology: if critical ontology is unreliable, why should dogmatic ontology (a la Deleuze and Guattari) fare any better? Why not adopt the ‘flat-footed realism’ you find in many analytic circles, where you begin with, ‘whatever it is science is revealing’ and as far as possible leave it at that?
It is entirely possible that the argument of Posthuman Life could be finessed with fewer hostages to fortune: e.g. without a poststructuralist event-ontology of the Derrida/Deleuze/Lyotard variety. Still, I think this discourse can be justified from within the ontology that underlies the Disconnection Thesis. It also plugs into a way of thinking about situations where the constitutive conditions for judgement are at issue.
Let’s think about what the DT commits us to. First (this is the bottom line) DT commits us to stuff that can be connected or disconnected. I take it that this commits us to entities that can satisfy relations but are not constituted by them. So whatever satisfies DT can’t be what DeLanda calls a “totality”. In a totality, every part is constituted by its relationship to every other part. Stuff can’t get disconnected and play elsewhere in a totality. So totalities are a non-starter.
“Assemblage” is just a fancy name for a thing whose important bits can plug and play elsewhere. One of the features of assemblages that make them useful for articulating DT is that they have parts that can plug into new assemblages, altering the properties of both. Related to this is the idea that the dispositions of assemblages supervene on the dispositions and capacities of their parts and can do so in ways that make the consequences of changes in those parts hard to predict – especially where these are historically novel. As I argue in the book, this does not commit us to more than weak emergence. The dynamics of a system is weakly emergent if it cannot be predicted short of running a simulation of that system from initial conditions. It seems reasonable to suppose that disconnections will exhibit weak emergence because of their historical novelty and complexity.
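The claim that weakly emergent dynamics can only be discovered by running a simulation from initial conditions can be made concrete with a toy system. The sketch below (my illustration, not from the book) runs Rule 110, an elementary cellular automaton whose long-run behaviour is generally held to admit no predictive shortcut: to learn the state at step n, you must compute all n intermediate steps.

```python
# Weak emergence, minimally illustrated: Rule 110, a one-dimensional
# cellular automaton. Each cell's next state depends on itself and its
# two neighbours; the global pattern that results is believed to be
# unobtainable except by stepping the system forward from its
# initial conditions.

RULE = 110  # the update rule, encoded as an 8-bit lookup table


def step(cells):
    """Apply one synchronous update; the row wraps around at the edges."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]


def run(cells, steps):
    """Simulate `steps` updates from the given initial row."""
    for _ in range(steps):
        cells = step(cells)
    return cells


if __name__ == "__main__":
    row = [0] * 31
    row[15] = 1  # a single live cell in the middle
    final = run(row, 50)
    print(sum(final), "live cells after 50 steps")
```

The point of the analogy: the rule is trivially simple and fully known, yet the system’s trajectory must still be traced step by step. A disconnection, on Roden’s account, would be weakly emergent in this sense but with historically novel components, so even simulation may be unavailable in advance.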
DT also seems to commit us to realism about functions. The connections liable to be lost by a new posthuman are functional relationships or roles, not, for example, spatial inclusion relations.
Finally (I argue) DT commits us to agents. If we cannot say that posthumans would exhibit agency, I don’t see how we distinguish them from other technological bits and pieces that lose their human-dependent functional roles. I try to qualify agency and functions in non-anthropocentric ways. For example, the specific account of agency that serves the DT in the book is psychology-free and thus makes no commitments about what posthuman minds might be like.
Assemblages, weak emergence, real functions and agency do not seem particularly extravagant as ontological commitments go.
Given these commitments, a poststructuralist event ontology seems motivated at least. The entities involved in a disconnection would be in the process of losing human-dependent functions and slotting into historically novel nonhuman assemblages. Thus a disconnection is liable to involve derangements in the way life and mind are embodied and organised. To me, it seems natural to think of it as a process whose properties and meanings would be obscure and indeterminate for the duration of its unfolding. Ignoring some nuances between these writers for a moment, this is the concept of an “event” that I take from Derrida, Deleuze and Lyotard. Events, in this sense, have a problematic politics arising from their overturning the conditions under which our practices seem sensible and reliable. This also feeds naturally into the politics of posthumanism that I tackle in the final chapter.
“Posthumans, there are none” (124), you write, meaning that the question of the posthuman is a question that can only be answered so far. One of the things that makes Posthuman Life exemplary is the degree to which it takes our inability to answer very many questions regarding the posthuman seriously. You do a damn good job cleaning the whiteboard of Posthuman Possibility Space (PPS)! But there’s a sense in which you view posthumanity as necessarily being a technological instead of an epistemological event. Implicit in your critique of phenomenological and normativist attempts to delimit PPS is the charge that the human, as they define it, is largely an epistemological artifact. But what if this were wholly the case, if the ‘human’ were simply an epistemological category, a way of knowing something not ‘human’ at all? What if, in other words, some kind of global eliminativism turned out to be true?
This raises the spectre of a far more exhaustive ‘critical posthumanism’ than that arising out of the (thoroughly anthropocentric) poststructuralist tradition, and so resurrects the charge that a residual humanism animates your account. If ‘human’ is simply an interpretative crutch we lean on whenever confronted by various patterns of behaviour–think of Samantha from Her, again–doesn’t ‘posthuman’ simply refer to the obsolescence of that interpretative paradigm? One might imagine, for instance, an ‘Intelligence Possibility Space,’ one where canines, homo sapiens, and Matrioshka brains simply occupy different and overlapping regions.
I think my account is more than residually humanist. It is humanist in a sense that is specified early on: I assume that humans are interestingly distinct from nonhumans. This goes together with the idea that there are, indeed, human-distinctive powers and dispositions on which the Wide Human depends for its continued existence. I don’t think this claim commits me to essentialism in that a set of powers and features may distinguish human populations from nonhuman populations without any of these being necessary for membership of those populations.
This obviously does not commit me to what I call “transcendental humanism” – the claim that we can only make sense of a world as a correlate of human thought. Indeed, the claim that humans are distinctive need not imply any form of anthropocentrism (presumably, cats are similarly distinctive). This being said, I acknowledge that my characterization of the posthuman is human-relative. The disconnection thesis describes the posthuman in terms of the capacity of posthumans to cut free from the Wide Human (p. 167).
This seems acceptable given our moral interest in the possibility of feral technology. However, it is interesting to consider whether the eliminativism you describe could render this characterization moot.
I suppose that one of the things you have in mind is that we might be radically mistaken in our self-characterization as beings capable of semantically evaluable states or acts, or autonomous action, etc. I haven’t argued for this claim, though I accept that successful explanations for our semantic or moral powers may not invoke anything semantic or “moral”. I’m more interested, frankly, in the idea that our characterization of subjectivity or the “space of reasons” may be massively parochial. For example, the discussion of hyperplasticity in section 4.3 tentatively broaches the possibility that there may be agents for which belief-desire attributions may not be useful (see the discussion of post-intentional agents below).
So the “human” may be an interpretative crutch in the sense that certain practices of interpretation may not work well outside the portions of PPS occupied by human folk. It would be obsolescent given the elimination of the folk. It might prove useless as a means of finessing the interpretation of certain kinds of posthuman agent. But these seem to be different kinds of eliminativist scenario from the one you envisage here.
Parochialism is the more epistemically modest way to ‘unbound’ Posthuman Possibility Space than eliminativism, I agree. But isn’t there a real question of how PPS might fit into an Intelligence Possibility Space like the one described above? If you refuse to concede any occult efficacy to the ‘semantic,’ if you flatten your ontology (via a global eliminativism or not), then the interesting differences between humans and nonhumans are natural differences admitting mechanistic explanation (and technical manipulation). Any advanced technological intelligence could be mapped relative to all possible intelligences, biological and/or technological, and not simply the human. Your position is anthropocentric at least insofar as it defines a region of this space relative to a single point in this space–a kind of ‘posthuman possibility cone.’ What kind of questions can be asked and answered by relativizing the issue of technological intelligence in this way that cannot be otherwise?
For sure it is possible to posit an abstract “Intelligence possibility space”. If we understand intelligence as an index of a system’s ability to garner rewards over a range of varyingly complex environments, then it’s natural to suppose that there is an ordering of such notional agents, some of which may have greater intelligence than any human (Legg and Hutter 2007). This seems reasonable given a measure of intelligence that abstracts appropriately from the internal workings of agents or the facts about their actual as opposed to counterfactual environments. This is all fine. However, the concerns of the book are narrower and – in the qualified sense I’ve used – anthropocentric. I’m interested not in all notional agents, but the set that might be feasible consequences of some iteration of human technical activity.
I concede that maybe this conception of intelligence could be plugged into my conception of the “power” of a life form without appreciable loss of scope. Legg and Hutter’s formula for universal intelligence is maybe equivalent to the Deleuzo-Guattarian concept of deterritorialisation – which could henceforth be dropped (Legg and Hutter 2007: 415).* That’s fine by me since, as you imply, there’s much to be said for purging such idioms from philosophical posthumanism. But we’re still left with the problem that not all the ethical and hermeneutic issues that arise for posthumans are related to intelligence and that a conception of the posthuman should (in my view) incorporate a historical succession relationship – which is not an issue in a possibility space defined purely by orders of universal intelligence. Differences in phenomenology or ecology need to be considered. Certain issues – such as hyperplasticity – may fall out of differences in intelligence (hyperplasticity may be a kind of hyperintelligence!) but if we regard them purely in terms of intelligence we may be missing out on other crucial dimensions (e.g. the possible lack of robustness of our folk practices of interpretation in cases where it does not pay a creature to represent itself as having values or beliefs).
(*Legg, S. & M. Hutter 2007. “Universal Intelligence: A Definition of Machine Intelligence”. Minds and Machines 17(4): 391–444.)
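For readers who want the measure being alluded to, the cited paper defines the universal intelligence of an agent \(\pi\) as its expected performance summed over all computable reward environments, weighted by each environment’s simplicity:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
\]

Here \(E\) is the class of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\) (so simpler environments count for more), and \(V^{\pi}_{\mu}\) is the expected total reward the agent earns in \(\mu\). The abstraction Roden notes is visible in the formula: nothing in it refers to the agent’s internal organisation or to any particular actual environment.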
So you’re after the objectivity of the perspectival relations humans will almost certainly enjoin given anthropologically unbounded Posthuman Possibility Space. And that’s hard core cool, because it allows you to pose perspectival questions in epistemically modest ways, which in turn makes Posthuman Life an excellent roadmap of what you might call the ‘credibility horizon’ of posthuman speculation. It would be hard to argue this is a ‘bad’ anthropocentrism. But the spectre of anthropocentrism also rears its hoary head in your attempt to provide an objective account of function and teleology. This is where I had some problems. So you argue, for instance, that conceiving functions as heuristic ultimately “implies that biology is not really a science at all” (p. 132). But the heuristic status of function only cuts against the cognitive legitimacy of biology in general if one assumes the view that heuristics cannot but incur performance trade-offs, when in fact many heuristics are as accurate or more so when compared to more information-intensive methods (see, for instance, Todd and Gigerenzer’s Ecological Rationality). If the use of heuristics doesn’t cut against the legitimacy of biology, then you need not set foot or stake claims in the philosophy-of-function jungle at all.
Or does Disconnection require the objectivity of function and teleology in other ways as well?
I’m open to the possibility of an interpretation of the Disconnection Thesis that does not require us to think of posthumans as agents that can enlist values and accrue functions. I just can’t see how it would work. I needed to interpret DT so that its ontological scope is not so wide as to make the existence of posthumans trivial (e.g. hulks and ruins would count as posthumans) or so narrow as to make it applicable only to posthuman descendants that are relevantly like human agents. However, your question seems to imply that we should adopt an anti-realist attitude to functional explanations but not (say) to other kinds of explanation. But the objections need to be stated. Should functional explanation be eschewed while mechanistic explanation is retained? There are explanations in fundamental science that do not advert to mechanisms – in quantum theory and thermodynamics for example – so we need to be clear about why we want to privilege one idiom while granting another heuristic status.
In Chapter 6 I argue for the view that the relevant notion of function here is a process-dependence relation on self-maintaining systems – where self-maintenance is a particular way in which systems manage not to fall apart immediately (cohesion). Admitted, there are lots of lacunae here. For example, my distinction between self-maintaining systems and other kinds of system is phenomenological and thus susceptible of elimination, reinterpretation or revision. The kind of teleology I’m committed to is weak – objective but relational – and (given time) I would like to say more about how this biologically inspired notion relates to the social function of artifacts, say.
I guess this relates to a methodological point I don’t highlight in the book. A lot of the time I’m in the game of spelling out some fairly abstract constraints on this or that, then seeing how these constraints can be realised. So thinking of becoming posthuman as cutting out of the Wide Human is a way of interpreting the property of having “ceased to be human” used in my still more schematic formulation of SP. But that’s no reason others should make the realisation-choices I make. If someone were willing to spell out SP or DT in ways that were ontologically less question-begging than my formulation, I’d be delighted.
What this all underscores is the devilish difficulty of the task you have set yourself. How can humans like you and me possibly cognize and thus evaluate the nature of a relation to radically different systems we have yet to become? Given the arbitrary and intractable nature of conceptual delimitations of the ‘human,’ you adopt a flat, particularist ontology that lets you define the posthuman in terms of Disconnection, which is to say, a future historical event. You clear away pretty much all the anthropocentric conceits that presently fog the debate, affirming the possibility of radically alternate psychologies and phenomenologies. In other words, you take posthuman alterity as your primary dialectical constraint, doing everything you can to avoid anthropomorphising our future selves.
I’ve been suggesting that some kind of global eliminativism will give you this on the cheap, flatten the intentional braille that dimples this or that point of your ontology. But your project is also an ethico-political one. You want to know how we can overcome what you call the ‘posthuman impasse’ (p. 106), our inability to ethically account for or ethically discount the posthuman. The former seems impossible given anthropologically unbounded PPS, while the latter seems irresponsible. I know my solution is to strip naked and wear a sandwich-board saying, ‘The End is Nigh!’ How do you see your approach navigating this impasse? How does it strike you as optimistic and/or as pessimistic?
Just what do you have against wandering naked wearing sandwich-boards, anyway?
You formulate the posthuman impasse very well, Scott. However, I think it’s worth bearing in mind that this statement of the impasse is provisional and tactical. It assumes that the ethical problem raised by speculative posthumanism is expressible in terms of an ethics of obligation – one that becomes more excruciating if, as I argue, the existence of posthumans is necessary for substantive understanding of them. I chip away at this version of the impasse in a number of ways.
Firstly, and most importantly, whence this moral interest in the posthuman given that realising it could be of limited prognostic value? I argue that our direct moral interest is in remaining an agent under historical conditions of ramifying technical change that might, just might, become disconnection potent.
If we drop anthropocentric ethical constraints, this implies an ethic of sustainable agency (not necessarily human). It also implies a contingent commitment to technological hypermodernity as a condition of sustainability. I think this requires an experimental politics and ethics that is also a kind of art – a cross-disciplinary search for sustainable niches in posthuman possibility space.
So sandwich boards and/or nudity might be involved, but also new and intensively inhuman bodies, subjects and passions.
I think this argument has certain advantages over competitor accounts. It provides a justification for technical augmentation that does not rely on parochial humanist assumptions (as does Transhumanism). Moreover, unlike technophile Marxist theories such as Accelerationism, it does not assume that the goal of our explorations should be universal and collective. There may be divergent agents, multiple and disparate collectivities and ecologies. In this regard, I think SF writers like Bruce Sterling, Stross, Rajaniemi and yourself have trumped the philosophers. Disconnection is the disestablishment of the “we”, of empires and republics (galactic or otherwise).
This should not mean the rejection of collective action or a nasty fetishism of technological power; nor should it imply contempt for bodies or fragility – vulnerability is arguably the thing we will continue to share with our posthuman descendants. But an unbounded posthumanism implies we cannot assume that our explorations of posthuman possibility will converge on common forms of life. The political logic of disconnection is, as Ballard might say, “perverse” but not affectless.
So the conclusion of my book is not apocalyptic but affirmative. It is a call to experimentation in living, feeling and being which recognises only hypermodernity as its ineluctable condition. The future belongs to no one.
Other than our children! Which brings me to the last potentially problematic anthropocentric dimple in your ontology, the question of ethical value. Consider the following:
“Guilt,” he remarked in prison. “It’s this mechanism we use to control people. It’s an illusion. It’s a kind of social control mechanism—and it’s very unhealthy. It does terrible things to our bodies. And there are much better ways to control our behavior than that rather extraordinary use of guilt.” Ted Bundy (as quoted in Robert Hare’s Without Conscience, p. 41)
Here’s why I hew to the sandwich-board (or pessimistic) alternative: I think the posthuman will be post ethical as well as post psychological and post phenomenological. The technical ability to rewrite biological humanity into presently unimaginable forms means the technical ability to rewrite ethics into presently unimaginable forms. I actually think neurocosmetic surgery, when it becomes a reality, will begin with tweaking social emotions like guilt, simply because of the misery they cause so many people. How does one argue for an ethics of experimenting with the end of ethics as we know it?
To be sure, a utopian paean to the openness of history (“No one owns the future”) can also be read as a niggardly denial of the same. My bad! But I don’t think you’ve offered me a good reason to believe that the posthuman is post-ethical. The obvious way to argue this (taken by epigones of Sellars, like Brassier) is to take the Kantian line that only a rational subject can make anything matter. So where posthumans are not persons, there is no mattering, no ethics. Daryl Wennemann seems to contemplate this possibility in his Posthuman Personhood only to reject it as the very devil (“If there is an historical movement to supersede [moral personhood] I must set myself against that movement”).
My problem here is that this implies that there are just two possible positions in phenomenological snakes and ladders: Kantian persons answerable to communally expressible reasons and merely sentient beings incapable of moral agency, like Brandom’s parrot (PL, 181). But, if I’m right, we have no warrant for this dualism; we lack a priori knowledge of PPS that might support it.
Your position seems different. If the Bundy quote is intended to motivate the view that ethics is a kind of social engineering, then it’s not clear why this should make posthuman successor technologies less rather than more attractive. But suppose we concede that posthumans will be self-maintaining agents of some kind (as I argue in Chapter 6). Then the conditions for self-maintenance (or so I claim) will be values for such beings, their form of life (whatever that turns out to be) will enlist states of the world (including their internal states) as values. Values are relational or perspectival, on this view, but no less real for that. And if this is right, we are committed to a pluralist and realist ethics that need not be bounded by the human at all.
This leaves the nature of a posthuman ethical subjectivity or life open (utopian moment) and still to be constructed. But it does not foreclose it. Ethics is still waiting to be made.
The idea is to raise the problem of conceiving the possibility of an ethics without guilt, or shame, or (to reach deeper still) responsibility. The threat is that the ‘ethics’ you refer to really isn’t ethics at all. In which case it becomes difficult to understand just what it is you’re talking about when you refer to ‘posthuman ethics.’ After all, it could be the case that disconnection will eclipse value even in your minimalistic sense, that ‘convergent, capacity-maximizing predispositions’ (or some such) will be what drives the posthuman, and that ‘good’ will be nothing more than an absurd atavism. If it turns out that you are wrong, that ethics is anthropocentric all the way down–that ‘ethics is actually waiting to be unmade’–what should our ethical relation to the posthuman be then?
And perhaps more importantly, how does the uncertainty intrinsic to the speculative nature of your account bear on the question of how we should evaluate posthuman possibility?
Maybe our feelings of guilt, shame and resentment (self and other-directed “reactive attitudes”) fix the satisfaction conditions for attributions of responsibility. Since these are social affections, they would not furnish such conditions for non-social posthumans, even if they had the biological lineage to experience something like those feelings.
I can’t see a post-social existence of this kind as appealing or ethically salient even in my most frenziedly misanthropic moments. Maybe there are ethical governors that can do just as good a job as shame or resentment but which entail a different (less egregious) phenomenology and aetiology. These are legitimate speculations. The place of the emotions in our ethical economy has been a salient topic of philosophical discussion since the Stoics. If we agree that our ethical phenomenology is striated with darkness, the only way to determine whether such alternative forms of life are accessible is constructive and experimental. I suspect I would not behave as well to others if it weren’t for the promptings of shame and guilt. But I don’t know. Maybe posthuman = post-ethical. And maybe we should consider this prospect seriously lest we slide into the pit. But I just want to come back to the epistemic position that I’ve stressed over and over, and on which we agree, I think.
You subtitle Posthuman Life, ‘philosophy at the edge of the human.’ Is it fair to say that you’re also talking about ‘humanity at the edge of the philosophical’? If only our ignorance of ourselves has prevented us from fundamentally transforming ourselves, one could say that traditional philosophy is likewise an artifact of ignorance, something about to be swept away. So let me push you over the edge: What might a posthuman philosophy look like?
What the hell did I mean appending this subtitle?
One way of reading the term, of course, is through the lens of transcendental thinking. To do philosophy at the edge of the human, in this sense, would be to consider whether there could be something “wholly other” that does not conform to anthropologically bounding a priori conditions. However, in the book I strenuously resist understanding the encounter with the posthuman in terms of radical heterology or negative theology.
So the “edge” in question is not the transcendental precipice beyond which noumena skitter and gleam. The edge must therefore be contingent and historically alterable. Yet it need be no less real for all that.
For example: suppose pragmatists like Dennett and Brandom are correct in seeing our ability to predict and understand agents in terms of beliefs and desires as a discursively mediated social skill. I argue that this is consistent with there being kinds of agent for which belief-desire psychology secures no robust generalizations about their future behaviour. Suppose that feasible posthumans fall into this category [See discussion of “hyperplasticity” above]. Assuming, again, Dennett’s dictum that failure of imagination is not an insight into necessity, it does not follow that such “post-intentional agents” would be wholly unintelligible. Presumably, they would be legible to themselves in some form. But they might not be legible for creatures with “our” discursive mind-reading habits.
Now, it’s interesting to consider whether these same habits are historically constitutive of philosophy. Sophisticated non-linguistic mentality may be possible, but would philosophy be possible without discourse? Your question implies that were we to achieve an understanding of our cognitive mechanisms that obviated intentional discourse, then intentionality, theory of knowledge and all the moral baggage of subjectivity might just cease to matter. As you argue in your discussions of Blind Brain Theory, intentional phenomena may simply be artifacts of our ignorance of the dynamic processes that underlie our cognition and action.
This may not matter for creatures like us because our intentional practices work tolerably well even if there is no explanation for this efficacy that does not appeal to these practices. But the metaphysics of posthumanity seems to allow for a “semantic apocalypse” – the point beyond which science might lead us to slough off intentional discourse in favour of some successor regime.
I’ve also argued for a highly abstract formulation of the posthuman for which these post-intentional entities are just one among many conceivable instances. We do not know whether they are feasible in our world. I think the only thing that we can say for certain regarding the posthuman is that, if and where it emerges, it will surprise us. Given the dated non-existence of posthumans, we’re not in a position to rule out the possibility of a posthuman philosophy, or a posthuman politics for that matter.
Clearly, there is a philosophy of disconnection. But could there be disconnecting philosophies – systems of thought with a capacity to induce or sustain disconnections?
Well, at the end of the book I programmatically suggest that there may be forms of interdisciplinary thinking suited for evaluating and exploring adjacent regions in Posthuman Possibility Space. Maybe there is a way of drawing on current practices – in art, philosophy, engineering – that scale up to a method for devising and exploring “sustainable” modes of disconnection-potency. My hunch is that technological forms of art like bio-art and computer music are models for such a post-human politics because, in Stelarc’s words, they potentiate “alternative functions and forms”. So a disconnecting philosophy might be a kind of aesthetic procedure for departing from the Wide Human without falling apart (sustainability again) – a laboratory of subtle derangements. But I don’t know yet whether this aesthetic of disconnection can be formulated systematically or whether it is just a terror reflex. Maybe, as you suggest, what faces us is just a limit before which such avant-garde gestures are pissy distractions.
To advocate political change is to presume the possibility of a certain future. The more tenuous those presumptions become, the more arbitrary our political advocacy becomes. The more arbitrary our advocacy, the less compelling our case for change. The question of the posthuman is as much a question about the possibility of politics (in the shadow of the posthuman) as it is simply another political question. Do you think the crisis has already arrived? To the extent that Posthuman Life eschews the comfort of our traditional anthropological conceits–in particular, the presumption of some future-proof phenomenology or normativity–would you say it challenges the very possibility of political theory?
The phenomenology of normativity supposes a shared world in which normative statuses can be arbitrated. Humans can share practices and norms because they are able to attend to similar things – for example, common sensitivities to time, to objects, to feelings, to facial expressions.
We might like to imagine that the future will be dominated by an enlightened republic (like Iain Banks’s Culture) in which humans and posthumans cooperate for the common good. But this might require a coordinated sensibility that is impossible for humans, so long as they remain human, and for posthumans, so long as they remain posthuman. A modest example of this is staged in the 2004 reboot of Battlestar Galactica. The humans (colonials) have state and economic institutions along familiar Western liberal lines. Their humanoid robot offspring, the Cylons, do not have a state or an exchange economy because their technology has made institutions such as money, health or education systems unnecessary. Thus a human-Cylon republic is impossible until the latter relinquish their technological distinctiveness and become, functionally, human.
What this suggests is that a posthuman politics which projects our norms onto the indeterminate future may run up against differences that make the sharing of values or meanings practically burdensome, or at the limit, impossible. So you’re right, we cannot frame a posthuman politics in terms of current norms unless we’re prepared to stand by some strong anthropocentric assumptions. This just won’t wash if Anthropologically Unbounded Posthumanism holds.
But (and what follows is even more speculative than SP) it may be possible to conceive a posthuman politics that generates its means of prescription. Such a politics discovers its constraints and values by bootstrapping its own transcendental aesthetic – e.g. new forms of embodiment or phenomenology.
Here’s a speculative example of how such a politics might work: Brain Machine Interfaces (BMIs) have come quite far. They can allow tetraplegics to control devices by decoding intentions expressed as cortical EEG signals. The use of microelectrode arrays as recorders and stimulators has even allowed the sharing of learned sensorimotor skills between animal subjects over an Internet connection. So this seems to be fertile technology that looks to be refined over the coming decades. DARPA (the US Defense Advanced Research Projects Agency) has recently proposed the development of a “cortical modem” that would allow visual information to be fed directly to the sensory regions of the brain, providing a kind of augmented reality overlay without the need for clunky VR glasses. The device would utilize optogenetics (a technology that genetically alters neurons in the user’s brain to express light-sensitive proteins that control their firing behaviour).
Even if this doesn’t herald your semantic apocalypse, the effects of this intimate technology on the circulation of culture could be similarly “disconnecting”. For example, far from escaping the meat, neurocultural artifacts might be more immersive, less easy to distinguish from the bodily and neural activity they precipitate. Indeed, the distinction between sign and signifier, text and interpretation, might become unworkable in a neuroculture.
If this technology catches fire we might see the widespread adoption that characterised personal computing technologies. But its adoption could also bifurcate drastically. It’s just not clear how prepared most of us are to relinquish a culture of texts and artifacts in favour of a network of circulating processes whose phenomenology and agency might appear deeply insidious from the outside.
Suppose a neuroculture goes live at some point via some efficient peer-to-peer communication network. Early adopters might employ it as an alternative to external visual interfaces like screens or pages. But it’s not clear that it would end here. Some might use it as a means of modulating moods, emotions or pooling executive functions in the brain. These Borg-like forms of collective agency and affect might be unavailable – or not even readily intelligible – outside the network. This bifurcation of technologies would also be a bifurcation of shared worlds. To acquire practical competence in the neuroculture, human interpreters would have to allow their central nervous systems to be hacked.
Reliable information about the neuroculture would thus pre-empt deliberation over its merits. The politics of disconnection is thus ass-backwards. Intelligent democratic deliberation would have to cultivate technological derangements or (as before) stand by an arbitrary anthropocentrism. Cognitive pollution (as you put it) is not just an unhappy concomitant of possible disconnection-potent technologies, like AI. It is a condition for posthuman politics.
In Posthuman Life I argue that this self-generated uncertainty incentivises experimentation with disconnection-potent technologies in hypermodern technological systems. The problem posed by such systems to any agent within them is how to sustain agency as the conditions on which agency depends undergo rapid, uncontrollable change (See Chapter 7 in PHL).
Ramping up capacities for technological self-fashioning seems to be the response most liable to maintain agency in a disconnection-potent system. Yet this also contributes to the general derangement, incentivizing others to do the same in a self-augmenting feedback loop. So our dependence on systems whose long-term development is out of our control will probably incline us to ramp up the potential for disconnection (See Chapters 7 and 8 in PHL).
A propensity for technical self-fashioning is consequently not a normative demand of modern technological systems, but an existential and ecological requirement. Normative political theory is arguably impotent here because disconnection pre-empts formal or settled models of citizenship, justice, deliberation and identity. Nonetheless we would need to evaluate the alternate prospects for posthuman becoming that it would throw at us.
Assuming anthropologically unbounded posthumanism, the only way to evaluate these potentials is by engineering ourselves into them. We figure out whether it is worth being posthuman by becoming posthuman – or, more tentatively, sampling posthuman possibilities. So our “posthuman predicament” forces us to the sustainable potentiation of alternate functions and forms discussed previously.
The sustainability question is aesthetic, institutional and economic, as well as technological and corporeal. It is aesthetic because art is about generating constraints as well as things. It provides a working model of individual or collective action that pre-empts its own standards of assessment. Institutionally, what forms of association are robust enough to sustain agency in adjoining regions of PPS and flexible enough to adapt to their exigencies? Economically, what resources are necessary for exploring PPS; how are they to be distributed?
I don’t have any good answers to these questions, but their enunciation affirms that our posthuman predicament is not post-political. Or not yet, anyway.
© Excerpts and links may be used provided that full and clear credit is given to R. Scott Bakker
and Figure/Ground with appropriate and specific direction to the original content.
Bakker, R. S. (2015). “Interview with David Roden,” Figure/Ground. June 6th.