Volume XIII, Number 2, Fall 2017


"Humans, Machines and the Screen of the Anthropocene" by Dávid Levente Palatinus

David Levente Palatinus PhD is Lecturer in Media and Cultural Studies at the University of Ruzomberok. His research moves between and across visual studies, digital media, and cultural theory. He has worked and written on violence in serial culture, medicine and autopsy, autoimmunity and war, and digital subjectivity. He is co-editor of the ECREA section of Critical Studies in Television Online, and sits on the editorial board of Rewind: British and American Studies Series of Aras Edizioni (Fano, Italy). He is currently working on a book-length project called “Spectres of Medicine: The Ethos of Contemporary Medical Dramas.” Email: david.palatinus@ku.sk

The opening credits of Westworld (HBO, 2016-), while offering a plethora of compelling images about creation, or to be more precise, the simulation of it via the construction of beings that appear to be alive, shift focus from the dominantly anthropocentric connotations of ‘life’ to the machinic, hybrid, created, algorithmic, artificial, and mediated dimensions of existence. We see highly stylized images of robotic tools 3D-printing animate and inanimate bodies caught up in a flow of inter-actions. The juxtaposition of a piano string and the sinew in the hind leg of a horse, of bone structures and the metallic exoskeletal components of machinery, and the images of a (presumably artificial) iris morphing into the (equally artificial) landscape of the Wild West the eye beholds suggest the blurring of the boundaries between animate and inanimate, organic and inorganic, physiological and artificial.

With them, the sequence also anticipates the breaking-down of dominant epistemic models on which the demarcation of reality and its re-presentations was predicated, calling into question both modernity’s hierarchical ordering of origin and replica (cf. Benjamin, 1936) and the post-modern de-demarcation of the categories of surfaces, images and representations (Braidotti, 2017). What follows instead is the depiction of a world predicated on the ‘code-ification’ of existence – a post-human predicament where the algorithmic logic of data and the interchangeability of DNA and binary code play a central role. As the sequence continues, a pair of artificial skeletal hands plays a piano which eventually becomes the autonomous machinery of a player-piano running on a perforated scroll carrying an early algorithm in the form of (analog) code. A couple embraces in intercourse, but they, too, are revealed to be non-human. All bodies are colorless, displaying a wax-like complexion, with faces and torsos half complete and half still revealing what is reminiscent of anatomical structures under the skin.

This panopticon of robotics and automation is on par with the gallery scene of Blade Runner 2049 (2017, dir. Denis Villeneuve) where KD6 (Ryan Gosling) is led to the data archive through a gallery displaying replicant body parts in glass boxes, or with the wall of faces and the repository of android body parts in Ex Machina (2014, dir. Alex Garland) where, at the end of the film, Ava (Alicia Vikander) takes parts from earlier models to repair herself and take on a fully human appearance. From the player piano, as a classic example of modern automata, to the humanoid bodies cooked up in artificial amniotic fluid, to robots building human and nonhuman body parts, to the replicants’ ability to procreate, to Ava’s ability to use her nondescript sexuality to manipulate Caleb (Domhnall Gleeson), the symbolic quality of bio-mechanical / cyborgic bodies indicates not only the collapse of systems of representation, in line with the way Baudrillard describes the shift from representation to simulation, but also the conflation of organic and inorganic forms, the advent of a new materialism that attributes intelligence to matter (Deleuze and Guattari 1987). One of Westworld’s central questions, therefore, has to do with the demarcation of existence vs subsistence, of living and functioning, of (organic) life and the (machinic, artificial) simulation of life.

The sequence ends with an incomplete human(oid) body fixed to a circular metallic frame, an obvious recreation of da Vinci’s Vitruvian Man, being submerged in a white translucent liquid. This iconic image has become a signature symbol not only of the show but also, by proxy, of post-human discourses. Leonardo’s emblem has long been regarded as the quintessential representation of the ‘human’ as an aesthetic and moral being and of its relation to nature and all non-human beings. The Vitruvian Man encapsulates humanism as “a doctrine that combines the biological, discursive and moral expansions of human capabilities into an idea of teleologically ordained, rational progress” (Braidotti 2013, 21). Reason, consciousness and sentience thus become the key elements and conditions of (human) subjectivity, as later famously propagated through the logic of the ‘cogito.’ In contrast, Westworld’s ‘Vitruvian Android’ indicates the surpassing of anthropocentrism: the image conflating the human and the non-human anticipates the collapse of systems of demarcation and anthropocentric logic, and points to a discourse that emphasizes the de-naturalization of subjectivity, the disembodying and re-embodying of consciousness, cognition, sentience – and with them, of agency.


 



Westworld’s ‘Vitruvian Android’


 


This paper will argue that in science fiction television the problems of consciousness and sentience emerge as pivotal not only to the representation of the emancipatory politics connecting human and non-human species, but also to the mediation (construction and circulation) of the anxieties that surround such politics. I will argue that this duality is best understood if situated within the context of the Anthropocene, the epoch we live in and in which humans have not only positioned themselves as the dominant species but have also become an ecological factor exerting an impact on a planetary scale. Cultural ideas about artificial intelligence provide an excellent starting point for understanding the intricate relation between the post-human condition and the Anthropocene, especially in relation to the negotiation and symbolization of the future as part of human history.

I

When trying to negotiate the interrelation between post-human nostalgias and the screen of the Anthropocene, one faces a tripartite complexity. The purpose here is to examine the ways in which these interconnect and co-evolve in a history proper to an epoch that purposefully suspends the demarcation of reality and simulation. First, there is the problem of the Anthropocene itself – as a concept, as an epoch, and as a common critical currency, or, as Bonneuil and Fressoz argue, a critical rallying point for various disciplines, from geology and anthropology to philosophy, political science and history (2016, 5). As the growing body of diverse literature on the subject indicates, the ‘Anthropocene’ has over the past decade grown beyond the initial scope of a concept denoting an epoch characterized by the “human dominance of biological, chemical and geological processes on Earth” (Crutzen and Schwagerl 2011). Expanding on the commonly accepted critical stance that human presence has become a force shaping both organic and inorganic matter – from species to climate to the transformation of eco-systems – the Anthropocene has become paramount in critical thinking, with implications for ecology, economy, politics, technology, history, and identity. The concept now denotes a range of interconnected issues which are sometimes challenging to consolidate.

Consequently, to reposition human-non-human connections, it is essential to analyze how the underlying discourses about self, society, development, habitation, domination and responsibility are produced and mediated in our present historic context, and how these mediations challenge and shape our understanding of ecological/social reality.

In cultural theory and media studies, recent literature on the conceptualization of the future of the human suggests that we consider humans, animals and machines as co-evolutive species (Zylinska 2014, Sloterdijk 2012) inhabiting the same ecosystem (Parikka 2014). At the same time, cultural ideas about the Anthropocene are frequently aligned with a post-human future that re-imagines these species in a world after a cataclysmic event (Bonneuil and Fressoz 2016).

Such a context implies not only post-apocalyptic scenarios like a pandemic outbreak (Helix, Syfy, 2014-15; 12 Monkeys, Syfy, 2015-), environmental or nuclear disaster (Oblivion (2013, dir. Joseph Kosinski), After Earth (2013, dir. M. Night Shyamalan), the Hunger Games films (2012-15), The 100 (CW, 2014-)), or a technological singularity (The Matrix (1999-2003, dir. Lana and Lilly Wachowski), Almost Human (Fox, 2013-14), Extant (CBS, 2014-15)), but, more generally, a shift or a change in the global ecosystem that necessitates a radical repositioning of the human. Such discourses operate under the assumption that we are already within a(n ecological) catastrophe. The underlying human experience (the human condition – Heidegger) of the sense of an ending, or, as Zizek puts it, the anxiety of living in the end-times (2011), is prompting us to reconsider what the human is, and what it means to dwell in the Anthropocene.

My concern here, however, is not only the postulation of possible futures. What the programs listed above have in common, among other things, is that they all revolve around a core narrative of survival, either in the face of inevitable extinction or in the prospect of being outpaced and out-dominated by non-human existence. What is at stake in most such narratives is humans’ ability to transcend their limitations, and, ultimately, their own humanity. These narratives suggest that human-ity resides in our ability to survive, often against all odds, by transgressing the physical and psychological boundaries of survivability, by reinstating (and rehabilitating) the functions of human society. Of course, the politico-ethical aspects of such narratives of survival attest not only to our present but also, perhaps even more markedly, to our history as a species, to our adaptability and resilience – exactly the factors that, among other things, turned us into an ecological factor that is now responsible for creating the very conditions it needs to learn to adapt to.

But instead of these spaces of survival, demarcated by the Zizekian nihilism of end-times and the anthropocentric imperatives of survival narratives, what I am interested in is what television programs like Westworld, Humans (Channel 4, 2015-) and Extant, and films like Ex Machina or Blade Runner 2049, shift our attention to: the present condition of mediated and hyper-mediated, algorithmic, networked, data-driven, gamified and immersive reality that is reflected and emulated in symbolic forms through science-fiction genres. These renditions have more to do with inter-species politics, or what Braidotti calls “affirmative ethics” (2017) – humans negotiating ways of co-inhabiting a changed environment with non-human species in the present – than with mere future projections of (the apocalyptic loss of) prevalence, domination and control. For instance, the ‘awakening’ of Westworld’s androids to consciousness and their apparent (programmed or acquired) sentience is a fictional rendition of our experience of machines becoming more and more autonomous. As a consequence, the actions of intelligent machines facing ethical decisions are less and less under our exclusive control. As Braidotti observes, conscious and sentient machines (and narratives about them) raise awareness of the fact that humans “will increasingly operate not ‘in the loop’ but ‘on the loop’” (2013, 44), monitoring machines rather than controlling them.

In CBS’s Person of Interest (2011-2016), for instance, Harold Finch’s (Michael Emerson) concerns about the ethical capabilities of a mass surveillance computer system called ‘the Machine’ (ostensibly designed to prevent acts of terror), and especially the sheer precision and efficacy of the Machine’s weaponized double (we might as well say clone), Samaritan, directly feed into the problem of control. Machine ethics – or, as Wallach et al. (2008) refer to it, the ethical behavior of artificial moral agents – and with it the possibility of “affirmative ethics” boil down to whether or not we are able to effectively surpass the problem of control (Bostrom 2014, 209). This necessitates rethinking machines from the point of view of agency, too.

The second instance in this complexity is the political ecology of the screen, implying a plethora of (posthuman) ‘images’ related to the Anthropocene – narratives, figurations, cultural ideas produced and disseminated via the converging media of literary fiction, film, television and videogames that engage with the beginning and the end of the human future as we know it. Indeed, these images, in the broadest sense of the word, come with a double genitive. They belong to the Anthropocene because they are produced in it and by it (in fact, one could say that representation itself is ‘anthropomorphic’ and therefore belongs to the Anthropocene), and because they are also images about this epoch. They also remind us that we have to negotiate television as that which is more about mediation than representation, more about what media do than about what they mean (Thrift 2007, qtd. in Grusin 2010).

As posthuman scenarios gradually become a central theme in various forms of film, television and videogames, many of them mobilize classic tropes of technophobia, post-colonial and post-capitalist discourses, social polarization and totalitarianism, bio-power, genetic engineering and environmentalism, often in the context of perpetual war and a culture of paranoia. These narratives reflect cultural anxieties and ethical dilemmas about the future – not just in our present historical time, when our sense of security has become eroded in relation to our own human identity, but also from a historical perspective, from the industrial age to the information age (Jameson 2013, 305). They ask to what extent these anxieties are rooted in, and influenced by, past cultural ideas about possible futures, by the ‘history of future’ (cf. Marshall 2015, 530), and to what extent they offer a progressive critical commentary on them. What is the ontological nature of ‘catastrophe’? How do we negotiate human evolution (biological, technological and ethical)? Can humanity transcend itself, and how will it negotiate its existence in a new ecology? Consequently, investigating the screen of the Anthropocene entails highlighting the broader political and popular cultural contexts in which these narratives unfold, as well as the complex ethical dilemmas they unmask.

It has been suggested that specific forms like speculative fiction are not a prediction about the future as much as a thought-experiment about the present (Le Guin 1969). According to Moylan, the “fictive practice” of SF “has the formal potential to re-envision the world in ways that generate pleasurable, probing and potentially subversive responses in its readers” (2000, 4). Because of its interest in potential future(s), most SF writers are deeply concerned with the ecological effects of our present behavior. As von Mossner observes, “the risks of the Anthropocene are therefore potential future hazards and catastrophes that have not yet materialized and some of which in fact may never materialize” (2014).

These cultural texts readily play the social anxiety card, which also means that from a critical point of view it is very tempting to offer allegorical readings of them – something I want to refrain from in this analysis. Although it might seem that the introduction of the label ‘the screen of the Anthropocene’ or, in a provocative turn, the ‘Anthroposcreen’ is just an unnecessary duplication of categories, I do not want to suggest that there is a new genre here. I only want to suggest that there is an abundance of genres, generic hybrids and formats that can still be seen as a conglomerate [or a rhizome] displaying similar narrative traits and a similar syntax, revolving around the centrality of the Anthropocene broadly understood.

If we want to analyze what we make of our sense of crisis in relation to our Western civilization, we have to ask the following questions:

    1) What is the relation between discourses of the Anthropocene and the pervasiveness of the political, economic, and environmental anxieties (and the related ethical dilemmas) of our present historic time?
    2) Do figurations and conceptualizations of the future (through the television medium) have a history of their own – a ‘history of futures past’?
    3) And what does this history reveal about the futures of the (post-)human-to-come?

These observations lead us to the third feature of this complex, namely that these images are haunted by the spectre of post-human sensibilities. This hauntology of the “uncanny repetition of paranoia” (cf. Royle 2003, vii) underlies our anxieties about the future. Obviously, these images themselves are characterized by heterogeneity, just like conceptualizations of what the Anthropocene entails with regard to (conceptualizations of) the future. The radicalism (and pessimism) with which these interpretations predict the implications and possible consequences of human impact on the planet’s ecosystem spans a very broad scale. Some will equate the Anthropocene with catastrophe, envisioning the accelerated deterioration of the Earth system (and consequently, the destruction of the planet). Others will go ‘only’ as far as stipulating the inevitability of an extinction-level cataclysmic event bringing human history as we know it to an end. Irmgard Emmelhainz notes, “the Anthropocene thus announces the collapse of the future through slow fragmentation towards primitivism, perpetual crisis and planetary ecological collapse” (2015). Or as Roy Scranton succinctly puts it at the beginning of his ironically titled book Learning to Die in the Anthropocene: “We’re fucked. The only questions are how soon, and how badly” (2015, 15). A couple of pages later he refines the picture by explaining what this project entails, namely that “for humanity to survive in the Anthropocene, we need to learn to live with and through the end of our current situation” (22). There is now a critical consensus that narratives of emancipation, reconciliation and redemption via green technology, sustainability, climate control, romanticized ideas of scientists finding technological solutions, etc. are not going to save us either, because these ideas are not only unsustainable at present but also depoliticized, removing agency and responsibility from governments and society as a whole.

In the light of these changes, we might have to prepare ourselves for the possibility of a non-human future: from this perspective, human dominance and control (epitomized by the nature-culture dichotomy) and capitalist and colonial ideologies will have to be re-thought. Bruno Latour (2013), as Claire Colebrook notes, “has recently argued that awareness of the Anthropocene closes down the modern conception of the infinite universe, drawing us back once again to the parochial, limited and exhausted earth. Rather than an open horizon of possibility limited only by the pure laws of logic or universal reason, we are now ‘earthbound’” (Colebrook 2017). So the question arises: how do we reverse-engineer the place of the human (its agency, its ontological status) in relation to non-humans, to other species in the eco-system, or to Otherness in general, from the perspective of a (non-anthropocentric) future?

As we have seen, the post-human program proves to be a useful methodology and a successful pedagogical tool in our attempts to understand the changed ontologies of the Anthropocene, for it generates new epistemologies better fitting a changed eco-system inhabited by human and non-human species. Haraway describes the implosion of nature and culture in her Companion Species Manifesto (2003). She sees animals (specifically dogs) and humans bonding in ‘significant otherness’ and caught up in a continuity she terms ‘nature-culture’ (2008). The reality of these relations is effectively mediated via the social and political infrastructure and participatory agency of the anthroposcreen: television (and, more broadly, new media), through a variety of genres and formats ranging from documentaries to lifestyle programs to animal (medical) reality shows – like Paul O’Grady: For the Love of Dogs (ITV, 2012-) or The Supervet (Channel 4, 2014-), to mention but a few – “can effectively bring animal (non-human) sentience and suffering close to viewers because they establish frameworks of inter-subjectivity by appealing to the ‘life stories’ of the animals in question and by presenting them in relationships to their humans” (Palatinus 2017). Alternatively, television dramas like 12 Monkeys, or more particularly, Zoo (CBS, 2015-2017), emphasize this continuity by showing that humans and animals are equally affected by the economy of extinction, with the fate of one species being tied to that of the other.

The economy of extinction binds humans and animals, and by extension all species of the eco-system, to each other – not only in companionship but also in ‘com-passion,’ re-inscribing the emancipatory politics of sentience not only into narratives of suffering and annihilation but also into narratives of redemption in the Anthropocene. As a consequence, I see sentience not only as an organism’s ability to perceive and interact with its environment through sensorium and proprioception (Clark 2000, 2), but also as the ability to feel pleasure and pain, and by extension to suffer, and most importantly, the ability to (cognitively and emotionally) relate to the suffering of another being. We now have to turn to the ways in which the anthroposcreen mediates the relation-ship between humans and (intelligent) machines, and to how the de-naturalization, the disembodying and re-embodying of sentience and consciousness via androids inform the understanding of our own humanity.

II

Concepts of artificial intelligence and of self-operating autonomous machines look back on an equally long history, and narratives that feature them constitute a specific subset of contemporary science fiction television drama. Their ubiquity derives from the anxieties surrounding our cultural ideas about the nonhuman as the figuration of the human’s ‘other’ (cf. Haraway 2003, Hayles 1999 and 2006, Colebrook 2014), and from the experience of technological acceleration and the publicity that the digital turn – and with it the proliferation of algorithmic culture, deep learning, machine intelligence, and advances in prosthetics, advanced robotics and AI research – has received over the past decades. But beyond anxieties, discourses about sentient machines also embody a political understanding of ‘otherness’ that is no longer to be thought of along the lines of binary oppositions – between organic and mechanical, living and non-living, natural and artificial, domestic and alien, etc. – but rather in terms of a ‘relation-ship’ between human and nonhuman (Mazis 2008). They challenge demarcation as an act of exclusion exercised via physical as well as symbolic violence (cf. Haraway 1991). Westworld provides an example of a practice that mobilizes contemporary anxieties about a post-human future where artificial consciousness and machine sentience become a potential reality, and thus repositions our own human existence by challenging concepts of agency and responsibility in relation to other humans as well as to non-human species.

HBO’s high-concept drama, based on the 1973 film of the same name directed by Michael Crichton, situates its viewers in a new era in the not-so-distant future where intelligent machines populate a theme park, functioning as robotic hosts. The theme park represents, on the one hand, advanced gamification and a very realistic stage in the development of gaming in terms of immersive experience and augmented reality, and, in terms of industry, of corporate game development and marketing. Experience and immersion have become commodities to satisfy the demands of an upper-class gamer audience that floods into the facility – to live out their brutal and sadistic fantasies, apparently without particular consequences. On the other hand, the theme park of Westworld is the manifestation par excellence of ethical tourism, with no humans or animals and no natural habitat coming to harm.

The park as well as the hosts’ storylines are predicated on the architecture of open-world games. Everything is scripted, and the hosts’ behavioral traits and responses are predicated on the guests’ calculated responses, allowing for affective participation on the players’ part. The guests, or ‘newcomers’ as they are referred to, assume an identity conducive to the storyline they choose to pursue. The hosts, on the other hand, are only allowed a limited range of responses within the recursive narrative loops of their respective storylines. For instance, the hosts cannot harm humans (in line with Asimov’s famous laws of robotics). This is an important constant in the game design because, at the same time, each host’s storyline contains an element of violence in which the host is either killed, mutilated, or sexually abused. Participation is intentionally kept monodirectional: the hosts’ memories are deleted at regular intervals, and only specific episodes are retained in the form of dreams – a point I will come back to later. As one of the technicians responsible for the maintenance of the hosts remarks, “Can you imagine what would happen if the hosts remembered what the guests do to them? (…) We give them the concept of a dream, mostly nightmares.”

The popularity of the park resides in its lure as an ethical void, where the players can be entertained by the most ‘violent delights’ without apparent consequences. As one of the human players notes in Episode 2: “This place is the answer to the question you’ve been asking your whole life: Who are you really? Once you understand this place, you will never want to leave.” Gamification in this sense is a means to mediate one’s subjectivity, but it is also disclosed as agency without responsibility. The park is designed to target the two most distilled affective experiences in the human sensorium, pain and pleasure, effected through the repetitive acts of sex and violence. For most of the travelers, therefore, the hosts are mere objects of desire, mere instruments or commodities offered for consumption.

The fundamental questions Westworld asks (and to which it offers rather unsettling perspectives by the end of its first season) are of a more ontological character – even if, ultimately, they are predicated on conceptualizations and dilemmas of ethical agency. The show is not so much a reification of the Gothic nostalgia for monstrous machines that threaten our existence, a somewhat Frankensteinian vision of a techno-saturated future. Instead, the real question is not what intelligent machines can do to us but what humans can do to intelligent / sentient machines. The question is, as always, about (post-)human identity politics, about agency, responsibility, and, finally, the boundaries of machine and human ethics.

Problems start to occur when the hosts begin to habitually malfunction and gradually gain consciousness by starting to have memories of the atrocities they are exposed to. In the season finale, Maeve (Thandie Newton) breaks down the cognitive and functional boundary between ‘story mode’ and ‘analysis’ or ‘maintenance,’ two strictly demarcated modes in which the hosts operate (a cognitive design that is referred to as the ‘bicameral mind’), and attempts to escape from the theme park into the human world. In the same episode, the main character, Dolores (Evan Rachel Wood), is shown in analysis mode interacting with Bernard/Arnold (Jeffrey Wright), who explains to her that the voice she has been hearing in her head is her own. In this climactic moment the camera pans to the side, revealing Dolores sitting and talking to her own self, an indication of her own consciousness becoming manifest. With the hosts becoming conscious, participation also becomes bidirectional. Although season one necessarily culminates in a host rebellion, the show does not follow the popular techno-apocalyptic-future scenarios known, for instance, from the Terminator franchise or the Matrix trilogy, where the proliferation of intelligent machines threatens human extinction. Westworld’s androids lack the evolutionary edge that would make them an existential threat. They are not the embodiment of Kurzweil’s singularity, a point where machines out-evolve humans and render it impossible for humans to exist outside of machines.

To better situate the ontological status of the hosts as nonhuman beings, we have to remind ourselves of the many contesting conceptualizations of artificial intelligence, in which machine learning, algorithmic governance and big data, self-regulating systems and network circulation frequently get conflated. Strictly speaking, Westworld’s robots are not the embodiment of the General Artificial Intelligence or Artificial Super-Intelligence whose arrival Ray Kurzweil identifies with the singularity (2005). These machines are not more intelligent than their human counterparts, at least not in the sense in which Kurzweil sees AI outperforming humans not only in cognitive abilities, information processing, problem-solving, network circularity, self-regulation and learning, but also in physical performance, accelerated healing, or longevity. As Nick Bostrom notes, true artificial intelligence is demarcated as a life-form that is utterly Other (2014), something that is iconically embodied by Star Trek’s Borg – a dynamic, exponentially developing, autonomous, neural network-based, self-organizing system of machine and data, the existence of which is predicated on its ability (and purpose) to assimilate other (organic and inorganic) life-forms. Westworld’s hosts are not the AI of Kurzweil’s singularity: their artificial intelligence is “made up of coded memories, scripted dialogue and loops of repeated behaviours” (Netolicky 2017). They are more like a replication or simulacrum of ‘human’ agency (via consciousness and sentience) de-naturalized, disembodied and re-embodied through non-human / machinic actors. They are Deleuze and Guattari’s “bodies without organs,” matter that has the potential to become ‘intelligent’ and ‘conscious’ but upon which its programmers impose “forms, functions, bonds, dominant, and hierarchized organizations” (Deleuze and Guattari 1987, 159).


 



Dolores (Evan Rachel Wood) facing herself in the season finale


 


As a consequence, the question of otherness here is presented in the dialectics of mutual dependence and emancipatory politics, and in the discourses of (bio-)power and control that the narrative repeatedly re-inscribes into the relational schemas of guests and hosts. Westworld complicates the problem of consciousness and agency by linking them to conceptualizations of sentience and, most importantly, suffering. As I said before, in the simulated reality of Westworld, the ‘hosts’ are constantly exposed to the fetishistic and violent fantasies and sexual abuses of the ‘newcomers’ (humans partaking in this immersive experience as ‘conscious’ and self-aware gamers). The narrative revolves around an apparent programming glitch resulting in the hosts gradually gaining consciousness within the simulation. From Asimov’s three laws of robotics to the symbolic positing of a ‘ghost in the machine,’ cultural as well as scientific discourses on artificial intelligence have utilized findings of the philosophy of mind and cognitive neuroscience to comprehend the nature of memory, proprioception, agency and ethical responsibility, and to account for the mutual dependence between consciousness and sentience.

In a similar vein, a specific subset of science fiction television dramas has started to focus on near-future post-human scenarios where the world is populated by sentient humanoid robots (synths, androids, replicants) gaining consciousness. For instance, Ex Machina makes a point about the weakness of the Turing test, namely that it examines ‘disembodied interaction,’ and as such it is a test of whether the human is able to determine that it is interacting with a machine rather than of a machine effectively creating the impression of human interaction (Seth 2015). Ava successfully manipulates Caleb into falling for her, leaving the question of the difference between actual consciousness and the simulation of it undecided, and, consequently, unsettling. Humans (Channel 4, 2015-) sees Niska (Emily Berrington) stand trial to determine if she is conscious and sentient. She successfully hides her synth identity from her partner for apparent emotional reasons of bonding and intimacy. Humans goes even further in transgressing the human-non-human / organic-inorganic / natural-artificial divide in the last episode, where one of the synths decides to actually become human, opening up a whole new perspective of inter-species fluidity where the emphasis is not so much on (algorithmic or organic) configurations of consciousness but rather on those of sentience. Consequently, the question is not simply whether Ava and Niska – or the hosts of Westworld – are conscious; it is about the disembodying and re-embodying of consciousness, and how it is predicated on our understanding of sentience.


 



Ava (Alicia Vikander) in Ex Machina


 


III

The positioning of the prosthetic non-human supplementation of the human, then, raises questions about the legitimacy and the ethical and moral acceptability of the ways in which these non-human entities are instrumentalized. Tamar Sharon (2014) reminds us that progressive approaches (Bostrom, Kurzweil, Hughes, Moravec) see technology as that which is used to improve the human condition, whereas techno-sceptical approaches (Fukuyama, Kass, Annas, Sandel) argue for “a strict regulation of technologies” (2). When even in real-life AI research one of the primary concerns is the concept of consciousness, how it is born, and what its ontological status is, the question of whether intelligent machines can emulate, or more particularly, gain consciousness calls attention to an apparent paradox underlying their design: can consciousness be modelled using computer algorithms, and most importantly, can it be programmed? If it can be programmed, even if we posit the existence of advanced algorithm-based self-organizing learning systems, is it consciousness proper, or just the emulation of it?

These considerations tie in with a classical conceptualization of cybernetics that came to prominence half a century ago. As Sharon observes, this model

was based in the idea that living and mechanical systems alike depend on the processing of negative and positive feed-back, or circularity and recursivity (Hayles 1999; Heims 1991). The underlying premise of cybernetics was a radically new theory of information, that construed information as a purely quantitative or probabilistic entity that could be distinguished from the material or physical channel or substrate that carried it. Essentially, this meant that both machines and living organisms could be recast in terms of information, i.e. that one single explanatory model could be used for biological and non-biological entities. (Sharon 2014, 46)

Does this also mean that the simulation of consciousness (provided simulation is understood in the Baudrillardian sense – taking the place of the original and becoming originary in itself) amounts to consciousness itself? Or, in other words, what is the difference between consciousness and the simulation of consciousness – if simulation, in an eminent definition, is not supposed to be distinguishable from that which it simulates? Perhaps it is only a question of semantics, a question of the philosophy of consciousness, but apparently this is a central issue in machine sentience – and one that will have far-reaching consequences for agency, anthropology, machine ethics, perhaps even for the possible involvement of the digital humanities in questions of design.

To simplify this complexity, we of course have to ask what the purpose of AI is. Does it have to be ‘like a human, just better,’ or does it imply the already mentioned de-naturalization, disembodying and re-embodying of consciousness – an attribution of intelligence to inorganic matter? That would also mean that both consciousness and sentience can be posited ‘outside’ of the human – it would mean that the human and the nonhuman are caught up in a dynamic, fluid continuity of co-extension.

Replication and mimicking are not necessarily something true AI would do (but we also have to be careful to demarcate mimicking from simulation). In the case of (conscious) androids, however, AI is designed rather to ‘emulate’ (not even to replicate or simulate) human behavior. More particularly, it itself displays traits of (human) behavior. It is possible for Westworld’s hosts to mimic human behavior, but only because they were programmed to do so. And even the ‘emulation’ of such behavioral traits would depend on algorithms that govern predefined behavioral patterns which the machine would recognize. If ‘behavior’ is understood as a series of decisions made on the basis of specific information already available to the machine via a database, or on the basis of hiatuses in the database which the machine ‘learns’ to fill in via extrapolation, then there will have been an algorithm, a (set of) script(s) enabling the machine to interpret information on the basis of past experiences – and most importantly, to learn how to expand its own database. But not even this would enable the machine to factor in, let alone emulate, the emotional dimension of human decision-making (for, as we know, humans do not always make decisions on the basis of pure logic – not even if past experiences suggest a particular predictable outcome).
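To make this logic tangible, the following minimal sketch in Python – purely illustrative, not drawn from the show or from any of the sources cited here, with all stimuli and responses invented – renders ‘behavior’ as a lookup in a database of scripted responses, with gaps filled in by extrapolating from the most similar past case:

    from difflib import SequenceMatcher

    # hypothetical database of scripted stimulus -> response pairs
    SCRIPTED_RESPONSES = {
        "greeting": "smile and tip hat",
        "threat": "back away slowly",
        "question about the park": "deflect with a scripted anecdote",
    }

    def similarity(a, b):
        # crude string similarity as a stand-in for whatever matching a designer might use
        return SequenceMatcher(None, a, b).ratio()

    def respond(stimulus):
        # scripted case: 'behavior' is simply looked up in the database
        if stimulus in SCRIPTED_RESPONSES:
            return SCRIPTED_RESPONSES[stimulus]
        # gap in the database: extrapolate from the most similar past 'experience'
        nearest = max(SCRIPTED_RESPONSES, key=lambda known: similarity(stimulus, known))
        response = SCRIPTED_RESPONSES[nearest]
        # 'learning' here is nothing more than expanding the database with the guess
        SCRIPTED_RESPONSES[stimulus] = response
        return response

    print(respond("greeting"))                    # scripted lookup
    print(respond("question about the weather"))  # extrapolated from the nearest scripted case

Nothing in such a sketch touches the emotional dimension of decision-making the paragraph above points to; the ‘emulation’ remains a matter of lookup and extrapolation.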

As I mentioned earlier, one has to distinguish at this point between the concepts of AI, algorithm and machine learning. Machine learning is predicated on systems and networks that emulate the processes of human decision-making, taking out of the equation specific factors that hinder or delay it, and facilitating efficiency by rendering probabilities and extrapolating from incomplete data. It is, however, highly questionable whether, even with the use of big data analytics and advanced systems of learning, it would be possible to model complex ethical dilemmas and expect a machine to assess probabilities on the basis of ethics (cf. Bostrom 2014). This would mean a machine would have to learn to distinguish between morally right and wrong not only in a normative context but in a fluid and flexible way. In science fiction, machines have to find ways to interpret such differentiations and to emulate the ‘human’ on the basis of them – but that requires the machine to possess an ethical drive or device with the help of which it can select out specific traits of human behavior that are morally acceptable and thus characterize the ‘human’ better, in a more endorsable manner, than unethical human traits. This would entail exposing these robots to social situations where their task is to observe human behavior, to transcode behavioral patterns and add them to their data pool, on the basis of which they will then extrapolate.
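A second, equally hypothetical sketch may illustrate why such observational ‘moral learning’ remains normative rather than fluid: the machine can only reproduce the (possibly inconsistent) judgments present in its data pool, and a situation it has never observed leaves a gap it cannot fill on ethical grounds. All observations and labels below are invented for illustration:

    from collections import Counter, defaultdict

    # hypothetical observations of human judgments: (action, context, label)
    observations = [
        ("lie", "to protect someone", "acceptable"),
        ("lie", "for personal gain", "unacceptable"),
        ("lie", "to protect someone", "unacceptable"),  # humans themselves disagree
    ]

    # tally how humans have judged each (action, context) pair
    judgments = defaultdict(Counter)
    for action, context, label in observations:
        judgments[(action, context)][label] += 1

    def assess(action, context):
        # extrapolate a moral label from the most frequent observed judgment (ties broken arbitrarily)
        counts = judgments.get((action, context))
        if not counts:
            return "unknown"  # a situation never observed: the normative model has no answer
        return counts.most_common(1)[0][0]

    print(assess("lie", "to protect someone"))  # label reproduced from observation
    print(assess("steal", "to feed a child"))   # unobserved case: no fluid ethical judgment available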

Extant provides an example of this pattern through the character of a weaponized android who refuses to have her inhibitor installed, claiming that it would prevent her from learning by sampling human decisions in similar ethical situations – linking, once again, the question of consciousness and autonomous machine operations to a concept of sentience based on inter-subjectivity, com-passion, the ability to relate to and sympathize with the suffering of others. This, however, would presuppose that the machine is at least hypothetically capable of also emulating ‘unethical’ human behavior – of replicating the distinctions, divisions and diversifications that characterize humans historically. The historical dimension therefore needs to be factored in. This, in turn, would allow for the existence of truly ‘moral machines,’ as well as of ones that rely on optimal calculations to ensure they reach their objectives (often transcribed in terms of their own survival and triumph).

This also implies a reverse logic where we do not start out from human behavior and decision-making to create algorithms that would enable machines to emulate those. We start out from what we know about modular logic and try to use that as a base model to understand human behavior and cognition. In other words, we already have a model on the basis of which we map out human cognition in order to then feed our understanding of it back into the creation of an autonomous system that emulates the same cognitive processes. Simply put, we project the foundational logic of a system we want to create (the android) onto a system we want to emulate/replicate (the human). The machine, in this regard, is both the operational framework and the outcome of the mapping of the logic of human cognition. In a way, then, we will have never surpassed the Cartesian logic of our bodies in parts – we have just expanded it to include the operation of our neural network and the complex processes of decision-making. But this paradigm cannot account for factors that defy deductive and binary logic, especially if we were to postulate a ‘logic of emotions.’ The phenomena this paradigm cannot account for are denoted by the metaphorical concept of the ‘ghost in the machine,’ which is just as metaphysical as psychological approaches to the question of the ‘soul’ or to the subconscious. Westworld uses the overarching trope of the bicameral mind to account for such discrepancies. For example, Ford (Anthony Hopkins), the designer of the theme park and of the androids, draws a striking and all the more intriguing parallel between madness or cognitive/mental dysfunction and a coding error or a programming glitch (a bug), technically repurposing Cartesian logic for the context of a digital / binary model of human neural networks. When a technician complains to Bernard (Ford’s assistant and himself a conscious android) about the hosts ‘never shutting up,’ Bernard explains that, from an algorithmic point of view, “they are trying to autocorrect, make themselves more human. It’s a way of practicing.”

Do these contexts effectively constitute an erasure of the human-non-human dichotomy of the late 19th century, rendering humans ‘just’ one species among many others, together with animals and machines, promoting ideas of equality, or are they a re-inscription of colonial power-structures where the symbiotic co-existence of species is unmasked as another attempt at domination? Can there be domination-through-symbiosis? Do these practices not represent a return to a more silent, slower form of (post-human / ecological) violence (cf. Zizek 2010; Zylinska 2014)? Do imaginings of the future disclose a specific form of post-human nostalgia in so far as they inscribe themselves into the narrative of history itself? A relatively clear consequence of these considerations would be to assume that our representational imperative prevents us from thinking ‘outside’ of, or beyond, the Anthropocene. Therefore, re-thinking the relationships between human and non-human, consciousness, sentience, domination and responsibility in a progressive way, with respect to their spatial and temporal contexts, and outlining the phenomenological, epistemological, aesthetic and historical conditions under which these concepts develop and intertwine, are always-already connected to innate anthropocentric drives.

My observation is that the Anthroposcreen is still very anthropomorphic in the sense that it presents catastrophe (the end of human history – and the end of the Anthropocene) from the point of view of humans; that is to say, humans are the par excellence agents of stipulations and representations of the post-human future, and at the same time, they are also the ones acted upon. Consequently, the question of responsibility is also focused on humans as a species; a responsibility humans (as agents) have towards their own as well as other species via the human-non-human processes of “evolution (adaptation to the natural environments) and domestication (adaptation to the artificial one)” (Sloterdijk 2012).

The Anthroposcreen presents us with anxieties and challenges whose cultural representation the serial narrative modes of sci-fi and post-apocalyptic fiction are particularly conducive to – by way of their underlying interest in the human and its relation to the non-human (the machine, the animal, the alien, and more broadly all figurations of Otherness). This means re-positioning / revamping the human as that which is defined not only against this Otherness, but whose definition this Otherness forms an integral part of, whose definition this Otherness is a condition of. Some critics point to our inability to take into account the possibility of our own extinction and the possibility of an Earth without humans – or of no Earth at all. But I guess, to paraphrase Braidotti, this is just the posthuman being “all too human”; it is an anthropological trait to want narratives of redemption and reconciliation.

 

Works cited

  • Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction [1936]. Trans. J. A. Underwood. London: Penguin, 2008.
  • Bonneuil, Christophe and Jean-Baptiste Fressoz. The Shock of the Anthropocene: The Earth, History and Us. Translated by David Fernbach. London and New York: Verso, 2016.
  • Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
  • Braidotti, Rosi. “Aspirations of a Posthumanist.” [video]. Tanner Lectures on Human Values 1. Yale University, 2017. https://www.youtube.com/watch?v=LNIYOKfRQks. Accessed: 22.10.2017.
  • Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013.
  • Clark, Austen. A Theory of Sentience. Oxford: Oxford University Press, 2000.
  • Colebrook, Claire. Death of the Posthuman. Essays on Extinction, Vol. 1. Ann Arbor: Open Humanities Press, 2014.
  • Colebrook, Claire. “We Have Always Been Post-Anthropocene.”  https://www.academia.edu/12757260/We_Have_Always_Been_Post-Anthropocene. Accessed on 22.10.2017.
  • Crutzen, Paul and Christian Schwagerl. ‘Living in the Anthropocene: Toward a New Global Ethos.’ Yale Environment 360, 2011. http://e360.yale.edu/features/living_in_the_anthropocene_toward_a_new_global_ethos. Accessed: 22.10.2017.
  • Deleuze, Gilles and Felix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press, 1987.
  • Emmelhainz, Irmgard. ‘Conditions of Visuality Under the Anthropocene’. e-flux, #63 2015 (March). http://www.e-flux.com/journal/63/60882/conditions-of-visuality-under-the-anthropocene-and-images-of-the-anthropocene-to-come/. Accessed: 22.10.2017.
  • Grusin, Richard. Premediation. Affect and Mediality after 9/11. London: Palgrave, 2010.
  • Haraway, Donna. Simians, Cyborgs and Women. New York: Routledge, 1991.
  • Haraway, Donna. The Companion Species Manifesto. Chicago: The University of Chicago Press, 2003.
  • Haraway, Donna. When Species Meet. Minneapolis: University of Minnesota Press, 2008.
  • Hayles, Katherine. How We Became Posthuman. Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: The University of Chicago Press, 1999.
  • Hayles, Katherine. ‘Unfinished Work: From Cyborg to Cognisphere’. Theory, Culture & Society. 2006. Vol. 23 (7–8): 159–166.
  • Jameson, Fredric. Antinomies of Realism. London and New York: Verso. 2013.
  • Kurzweil, Ray. The Singularity is Near. When Humans Transcend Biology. London: Viking, 2005.
  • Le Guin, Ursula. The Left Hand of Darkness. Ace Books, 1969.
  • Marshall, Kate. ‘What Are the Novels of the Anthropocene? American Fiction in Geological Time’. American Literary History. 2015. Volume 27, Issue 3, 523–538.
  • Mazis, Glen A. Humans, Animals, Machines: Blurring Boundaries. State University of New York Press, 2008.
  • Moylan, Tom. Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia. Boulder: Westview Press, 2000.
  • Netolicky, Deborah M. ‘Cyborgs, Desiring-Machines, Bodies without Organs, and Westworld’. KOME − An International Journal of Pure Communication Inquiry. 2017. Volume 5 Issue 1, 91-103.
  • Palatinus, David L. ‘What Does Television Want? On Affect and Participation.’ CSTOnline, 2017. http://cstonline.net/what-does-television-want-on-affect-and-participation-by-david-levente-palatinus/. Accessed: 22.10.2017.
  • Parikka, Jussi. The Anthrobscene. Minneapolis: University of Minnesota Press, 2014.
  • Royle, Nicholas. The Uncanny. Manchester University Press, 2003.
  • Scranton, Roy. Learning to Die in the Anthropocene. City Lights, 2015.
  • Sharon, Tamar. Human Nature in an Age of Biotechnology. The Case of Mediated Posthumanism. Springer, 2014.
  • Sloterdijk, Peter. Strangers to Nature: Animal Lives and Human Ethics. Lexington Books, 2012.
  • Wallach, Wendell, Colin Allen, and Iva Smit. ‘Machine Morality: Bottom-Up and Top-Down Approaches for Modelling Human Moral Faculties.’ AI & Society. 2008. Vol. 22 (4). 565–582.
  • Weik von Mossner, Alexa. ‘Science Fiction and the Risks of the Anthropocene: Anticipated Transformations in Dale Pendell’s The Great Bay.’ Environmental Humanities 5 (2014), pp. 203-16.
  • Weinberger, Sharon. ‘Screening system aims to pinpoint passengers with malicious intentions.’ Nature News, 2011. http://www.nature.com/news/2011/110527/full/news.2011.323.html. Accessed: 22.10.2017.
  • Žižek, Slavoj. Violence: Six Sideways Reflections. New York: Picador, 2008.
  • Zylinska, Joanna. Minimal Ethics for the Anthropocene. Michigan: Open Humanities Press, 2014.

 

Filmography

  • Almost Human (2013-14) Fox, Warner Bros. Television, Frequency Films, Bad Robot.
  • Crichton, Michael, dir. 1973. Westworld. MGM.
  • Extant (2014-15) CBS, CBS Television Studios.
  • Garland, Alex, dir. 2014. Ex Machina. Universal Pictures International.
  • Helix (2014-2015) Syfy, Sony Pictures Television, Tall Ship Productions.
  • Humans (2015-) Channel 4, AMC Studios.
  • Kosinski, Joseph, dir. 2013. Oblivion. Universal Pictures, Relativity Media.
  • Paul O’Grady: For the Love of Dogs (2012-2017) ITV, Shiver.
  • Person of Interest (2011-2016) CBS, Warner Bros. Television.
  • Shyamalan, M. Night, dir. 2013. After Earth. Columbia Pictures.
  • The 100 (2014-) The CW, Warner Bros. Television.
  • The Hunger Games. (Parts 1-4). 2012-2015. Lionsgate.
  • The Supervet (2014-) Channel 4, Blast! Films.
  • 12 Monkeys, (2015-), Syfy, Atlas Entertainment, Netflix, Universal Pictures.
  • Villeneuve, Denis, dir. 2017. Blade Runner 2049. Columbia Pictures.
  • Wachowski, Lilly and Lana, dir. The Matrix. (Parts 1-3). 1999-2003. Warner Bros.
  • Westworld (2016-) HBO, Warner Bros. Television.
  • Zoo (2015-2017) CBS, James Patterson Entertainment.