Tuesday, June 30, 2015

Some Inconsistencies of Eliminativist Materialism on Intentionality

[The following is a brief debate between an eliminativist materialist and a realist regarding human intentionality, with some follow-up comments from others.]

[...] What really happened when you 'read my comment' is some photons from your monitor entered your eyes, stimulated your retina to emit action potentials that entered your brain, causing a cascade of neurological activity that eventually resulted in some neuromotor activity involving your hands and a keyboard. For the sake of brevity let's call this, as well as actual talking, writing, listening etc language related behavior. The fact that you perceive another thing perform a language related behavior does not in and of its self [sic] prove that that other thing is a person or has qualia or intentional stances. You have perceived your own body performing language related behaviors. You have perceived through introspection that your body is animated by a person or a soul or a spirit that has intentionality. You have perceived through introspection that your language related behavior is made possible by your person or soul or spirit. Therefore you infer that other entities that perform language related behaviors have a person or soul or spirit. That inference is only as good as your introspective self-perception of your own person or soul or spirit. So we come back to the question Scott Bakker asked in the Scientia Salon essay. How good is your introspection?

Even if I grant your introspective access to your own person or soul or spirit you don't have introspective access to the person or soul or spirit of any other entity. All you have regarding other entities is the same sensory access you have to any other physical object. That sensory access allows entities to predict and manipulate the actions of other entities. Nobody denies the utility of this folk psychology within the problem ecology of human social interaction. Whether sensory perception of other humans or introspective perception of one's self is useful for determining if human beings are different in kind from insects or rocks or wisps of gas in interstellar space remains to be seen.

---

Source: Michael Murden, January 12, 2015 (8:35 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015,  http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421123719547#c6194622446704249448.

---

If the whole argument is going to be based on the claim that intentionality is by nature unperceivable [sic], this needs to be established, not merely stated as if it were obvious; it is not, in fact, obvious, and is actively denied by quite a few positions. The notion that any appeal to intentionality depends entirely on introspection is little more than rhetorical sleight of hand, an attempt to rig the argument without actually putting reasons on the table.

---

Source: Brandon, January 12, 2015 (8:53 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421124815511#c9071637458603993425.

---

What really happened when you 'read my comment' is some photons from your monitor entered your eyes, stimulated your retina to emit action potentials that entered your brain, causing a cascade of neurological activity that eventually resulted in some neuromotor activity involving your hands and a keyboard.

In other words, what happened is that some things happened that can only possibly be known by deliberately designed experiments whose achievement of their ends could be recognized in experience and used as part of causal reasoning to create theories which could be confirmed about the world.

---

Source: Brandon, January 12, 2015 (9:01 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421125304116#c8577801872991552212.

---

[...] Have you ever perceived another person's intentionality? Did you perceive it through your physical senses or through some other modality? If so what was that other modality? I granted Scott's ability to perceive his own intentional states, although merely for the sake of argument. I denied his ability to perceive the intentional states of others. Are you claiming to be able to do so? Are you psychic? Can you read minds?

Regarding the 'read my comment' comment, if the light were reflected from a sheet of papyrus in Egypt 3000 years ago the argument would be the same. The question of what happened is separate from the question how one determined what happened. The speed of light in a vacuum is independent of the means used to measure it. A scientist conducting neurolinguistics experiments is just an object interacting with other objects. You can't directly perceive his intentions. You can only infer them from his actions, unless you are psychic. And if you have some other method besides introspection for perceiving your own intentionality what is that other method?

---

Source: Michael Murden, January 12, 2015 (10:16 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421129811627#c5545533361839985761.

---

Have you ever perceived photons, action potentials, neurophysiological cascades, the speed of light in a vacuum? Did you perceive them through your physical senses or some other modality? If so what was that other modality? And why are you repeatedly talking as if Scott said anything about introspective awareness being required?

You seem not to have read my comment very carefully; I didn't say anything about my own views, but pointed out (1) that your assumption is not in fact obvious and (2) that it is quite often actively denied. To take just one example out of a great many, Edith Stein's The Problem of Empathy is a book-long argument that it has to be denied. Thus you cannot reasonably go around pretending that you can get away with just stating it as if it were obvious.

You also seem to have missed the point of my second comment, which is that your claim about 'what really happened' does not in fact address the issue, which is that all these things you are claiming 'really happened' -- that usually means you are claiming that you know they happened, but since you are eliminating intentional language you must mean something else that you failed to state -- need to be reached in an eliminativist way. Most people get to them in an intentionalist way: they take them to be reached by scientific inquiry (both 'scientific' and 'inquiry' are terms appealing to intentionality) using experiments about various things ('experiment' being a term appealing to intentionality) that are deliberately designed ('deliberately' and 'designed' being terms appealing to intentionality) to achieve certain ends of inquiry (which is an appeal to intentionality) so that hypotheses and theories about the world (both 'hypothesis' and 'theory' being terms appealing to intentionality) which can be confirmed or disconfirmed (confirmation and disconfirmation are both forms of reasoning about how theories relate to what they are about, and therefore are appeals to intentionality) by reasoning about the possible causes of experimental effects ('reasoning about' being a phrase appealing to intentionality).

In short, your position commits you to the claim that what everyone calls science, including its complex of intentionality terms -- experiment, theory, hypothesis, confirmation, disconfirmation, prediction, correctness, model, etc. -- is not anything that really exists. So merely talking about neurons, photons, the speed of light in a vacuum, gets you nowhere, since these terms usually are regarded as meaningful and worth using only because everyone else subscribes to intentionalism with respect to scientific inquiry, which the eliminativist is committed to rejecting.

Thus despite your attempt to smear my pro-science view with the term 'psychic', you are the one actually appealing to clairvoyance: things happen in the world, which you can't actually claim to know anything about (because 'knowledge' is an intentional term) and this causes things to happen, which you treat as happening in your brain, that you don't actually know anything about, and your reason for holding this is not scientific (since 'scientific' is an intentional term) but just because things are happening that you don't know anything about and cannot give any reasons for. Unless, of course, you have an eliminativist account of what's really going on in the specific scientific experimentation that establishes all of these scientific concepts you are using, one that does not involve any of the intentionality that permeates how everyone else understands it?

---

Source: Brandon, January 12, 2015 (10:49 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421131799898#c536722236875251083.

---

(1) The EMists position themselves as cutting-edge pro-science. But, paradoxically, actual scientific practices, and anything that gives us reason to [take] scientific theories as reasons for anything (much less EM itself), vanishes completely. Almost everyone, including scientists doing science, interprets scientific practice in terms of intentionality. In the 1990s there was a vehement series of quarrels in philosophy of science that often get described as the Science Wars, between scientific realists and postmodernists. The latter got labeled as an anti-science view. But the postmodernists were doing with science exactly what the EMists think should be done with it: they eliminated all the intentionality-laden terms scientists like to use for what they are doing (truth, consistency, prediction), or else deflated them in various ways, and just talked about patterns of cause and effect. Even some of the justifications for doing this are exactly the same.

(2) The EMists position themselves as pricking any idea that human beings are special. But because they tend to present themselves as seeing through the human illusion, they repeatedly fall into the trap of talking as if they had an immediate God's-eye view of what the universe must be independently of any human perspective. This was part of what I was pointing out with my comments to Michael Murden, whose account kept appealing to what was 'really happening' in terms that are usually only regarded as having any value for describing what is 'really happening' on the bases of reasoning explicitly involving intentionality. (Certainly prediction and confirmation are both always understood to be reasoning appealing to intentionality.) This wouldn't be a problem if they could give a non-intentionality account of how the science works, but they have no model for doing so. Without such a model, they could only know their position is right by a sort of clairvoyance. They see themselves as giving an account of 'what really happens', but cut out any obvious way by which they could actually know 'what really happens'.

(3) This leads to the weird situation that they tend to justify their position on grounds that on their own view they apparently can't treat as justifying anything. And this is where the 'tu quoque' issue that these EMists, at least, seem obsessed with, arises -- of course, people are not, in fact, engaging in tu quoque but either (a) pointing out that EMists haven't given something that they need to give or (b) raising worries about epistemic self-defeat that need to be addressed. Either of these, of course, can only be handled by giving an eliminativist account of the grounds for thinking that eliminativism is true -- which they keep trying not to have to give.

(4) And we see in the comment noted by Jeremy Taylor that at least some of them explicitly formulate their position in terms that eliminate any possibility of taking their position to be true, coherent, or reasonable; and yet they still keep trying to claim that everyone else is engaging in logical fallacies, is saying something false, or does not have reasons for their view.

---

Source: Brandon, January 14, 2015 (3:07 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421276831305#c5948348892429904358.

---

EM (or [Paul] Churchland at least) acknowledges that truth will have to be dispensed with. There must be a successor concept to truth, one that doesn't presuppose intentionality. But until that successor notion is given, what exactly is EM trying to accomplish? Its proponents know that they can't say, "EM is true." So what on earth do they want to say about their theory? If EM has an acknowledged reliance on a replacement of truth, which has not yet been given, how can I hope to make sense of it? I will grant that, if you give me the new concept, I'll attend to it and see if it is adequate. But until then, what do you want me to do? Why are we having this conversation? We can acknowledge together that we aren't so naive as to say that the goal is that I "believe" EM.

---

Source: Greg, January 16, 2015 (7:42 a.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421422972531#c2593694274113175137.

---

It's as though one of the easiest arguments against EM is trying to believe it. Because then you realize that you can't believe it and aren't supposed to. So then you look for the alternative: Maybe it's true, even though I can't believe it. But that's not right either, since we need to replace truth. So then we hope: EM is ______ even though it's not true. But what is ______? It's some "nice" relation that holds between EM qua theory and reality. But can it even be a relation, or are relations intentional?

This might be why EM rhetoric relies so much on science posturing and charges of illusion and question begging. It can't be said "This theory is true" or "These true statements provide evidence for this theory". So rather one 'argues' for EM indirectly, without enjoining people to 'belief'. But I can't understand how I could ever "accept" EM. Here is Bakker:

I think this is the most important topic of our day, and not only that, I out and out hate the consequences of my own view!


What are the consequences of EM? There can't be consequences of EM being true. If there are consequences, they are consequences of EM being ______ (to be filled in with truth's successor concept). So without that successor concept we don't even know what the relation is between EM and reality; there are no consequences of which we are aware.

---

Source: Greg, January 16, 2015 (8:08 a.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421424511307#c4999321254701894055.

---

[...] EM will also have to give an account of just how, if EM is true, "terms" can have "senses" at all.

---

Source: Scott, January 16, 2015 (8:28 a.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421425708238#c4000786231632866561.

Incoherence Objection against Eliminative Materialist Theories vis-a-vis Intentionality

Here’s one way to summarize the objection:

1. Eliminativists state their position using expressions like “truth,” “falsehood,” “theory,” “illusion,” etc.

2. They can do so coherently only if either (a) they accept that intentionality is real, or (b) they provide some alternative, thoroughly non-intentional way of construing such expressions.

3. But eliminativists reject the claim that intentionality is real, so option (a) is out.

4. And they have not provided any alternative, thoroughly non-intentional way of construing such expressions, so they have not (successfully) taken option (b).

5. So eliminativists have not shown how their position is coherent.

Now, exactly how does this argument beg the question? Which of the premises presupposes that intentionality (of any sort) is real? In fact the argument not only does not presuppose this, but it leaves open, for the sake of argument, the possibility that the eliminativist may find some consistently non-intentional way to state his position. It merely points out that eliminativists have not actually succeeded in providing one. The only way to rebut this argument, as I have said, is actually to provide such an account.

Of course, I also think that the eliminativist in fact cannot in principle provide such an account. But that judgment is not itself a premise in the argument, and someone could in principle accept the argument (i.e. steps 1 - 5) even if he disagreed with me that the eliminativist cannot in principle make his position coherent. So, again, there is no begging of the question.

I also fail to see how it is relevant that philosophers who affirm intentionality disagree over how to flesh out an account of human nature that affirms intentionality. (And the number of such accounts is nowhere remotely close to “thousands” -- or even “hundreds” or indeed even “dozens” -- but let that pass.) What does that have to do with the question of whether eliminativism is coherent? All the eliminativist has to do is provide a way of stating his position without explicitly or implicitly using any intentional notions. Whether failed attempts to do so (like Churchland’s and Rosenberg’s) end up surreptitiously appealing to this specific form of intentional psychology or instead to that specific form is hardly to the point. What is to the point is that they end up surreptitiously appealing to some form or other of intentionality. The trick is to avoid appealing to any of them, and no one has pulled the trick off.

---

Source: Edward Feser, January 13, 2015 (5:16 p.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015,  http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421198176883#c7873339741497488449.
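
[The skeleton of Feser's summary can be checked mechanically. Below is a minimal sketch of steps 2 through 5 on one simplistic propositional reading; the labels are mine, not Feser's, and "CoherentlyStated" stands in for "the eliminativist has, so far, coherently stated his position."]

```lean
-- A minimal sketch, assuming this simple propositional reading of the argument:
--   p2 : coherence requires either accepting that intentionality is real (a)
--        or supplying a thoroughly non-intentional construal (b)
--   p3 : option (a) is rejected
--   p4 : option (b) has not been supplied
-- The conclusion follows by denying both disjuncts.
example (CoherentlyStated AcceptsIntentionality NonIntentionalConstrual : Prop)
    (p2 : CoherentlyStated → AcceptsIntentionality ∨ NonIntentionalConstrual)
    (p3 : ¬ AcceptsIntentionality)
    (p4 : ¬ NonIntentionalConstrual) :
    ¬ CoherentlyStated :=
  fun h => (p2 h).elim p3 p4
```

[On this reading the inference is plainly valid and presupposes nothing about whether intentionality is real, which is why, as Feser says, the only way to rebut it is actually to supply the missing non-intentional construal.]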

Distinction: Reductionism vs. Eliminativism

If cognitive science has some breakthroughs on intentionality that naturalize it to physics, that's reductionism, not eliminativism. [...] If cognitive science reduces intentionality to a physics process, this establishes that intentionality exists as something that can be scientifically studied and analyzed (as whatever physics process cognitive science reduces it to). That would show that all eliminativists about intentionality are wrong, including Bakker. Reductionism is not eliminativism. And while very few people here I imagine are reductionist, we are all closer to the reductionist position than Bakker is, since reductionists agree with us on the primary point actually under dispute here.

---

Source: Brandon, January 14, 2015 (6:41 a.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421246507704#c5670245010821610403.

---

[T]he dominant naturalistic position on intentionality is reductionist, not eliminativist, which holds both of the following:

(1) Intentionalism is true.

(2) Naturalism is true.

That is, reductionists hold that intentionality really exists and that there is reason to think that it admits of a perfectly naturalistic explanation in terms of physical processes, evolution, or what have you. Thus the bare fact of appealing to naturalism does not establish eliminativism; almost all naturalistic positions on the subject are reductionist, and reductionists about intentionality are intentionalists.

---

Source: Brandon, January 14, 2015 (8:35 a.m.), comment on Edward Feser, "Post-Intentional Depression," Edward Feser blog, January 11, 2015, accessed June 30, 2015, http://edwardfeser.blogspot.com/2015/01/post-intentional-depression.html?showComment=1421253336089#c6101963426624473851.

Repost: R. Scott Bakker on "Writing After the Death of Meaning"

[The following article is an example of materialistic reductionism (or perhaps eliminativism, since the author is not clear enough, at least here) with respect to human intentionality.]

Abstract: For centuries now, science has been making the invisible visible, thus revolutionizing our understanding of and power over different traditional domains of knowledge. Fairly all the speculative phantoms have been exorcised from the world, ‘disenchanted,’ and now, at long last, the insatiable institution has begun making the human visible for what it is. Are we the last ancient delusion? Is the great, wheezing heap of humanism more an artifact of ignorance than insight? We have ample reason to think so, and as the cognitive sciences creep ever deeper into our biological convolutions, the ‘worst case scenario’ only looms darker on the horizon. To be a writer in this age is [to] stand astride this paradox, to trade in communicative modes at once anchored to our deepest notions of authenticity and in the process of being dismantled or worse, simulated. If writing is a process of making visible, communicating some recognizable humanity, how does it proceed in an age where everything is illuminated and inhuman? All revolutions require experimentation, but all too often experimentation devolves into closed circuits of socially inert production and consumption. The present revolution, I will argue, requires cultural tools we do not yet possess (or know how to use), and a sensibility that existing cultural elites can only regard as anathema. Writing in the 21st century requires abandoning our speculative past, and seeing ‘literature’ as praxis in a time of unprecedented crisis, as ‘cultural triage.’ Most importantly, writing after the death of meaning means communicating to what we in fact are, and not to the innumerable conceits of obsolescent tradition.

So, we all recognize the revolutionary potential of technology and the science that makes it possible. This is just to say that we all expect science will radically remake those traditional domains that fall within its bailiwick. Likewise, we all appreciate that the human is just such a domain. We all realize that some kind of revolution is brewing…

The only real question is one of how radically the human will be remade. Here, everyone differs, and in quite predictable ways. No matter what position people take, however, they are saying something about the cognitive status of traditional humanistic thought. Science makes myth of traditional ontological claims, relegates them to the history of ideas. So all things being equal we should suppose that science will make myth of traditional ontological claims regarding the human as well. Declaring that traditional ontological claims regarding the human will not suffer the fate of other traditional ontological claims more generally, amounts to declaring that all things are not equal when it comes to the human, that in this one domain at least, traditional modes of cognition actually tell us what is the case.

Let’s call this pole of argumentation humanistic exceptionalism. Any position that contends or assumes that science will not fundamentally revolutionize our understanding of the human supposes that something sets the human apart. Not surprisingly, given the underdetermined nature of the subject-matter, the institutionally entrenched nature of the humanities, and the human propensity to rationalize conceit and self-interests, the vast majority of theorists find themselves occupying this pole. There are, we now know, many, many ways to argue exceptionalism, and no way whatsoever to decisively arbitrate between any [of] them.

What all of them have in common, I think it’s fair to say, is the signature theoretical function they accord to meaning. Another feature they share is a common reliance on pejoratives to police the boundaries of their discourse. Any time you encounter the terms ‘scientism’ or ‘positivism’ or ‘reductionism’ deployed without any corresponding consideration of the case against traditional humanism, you are almost certainly reading an exceptionalist discourse. One of the great limitations of committing to status-quo underdetermined discourses, of course, is the infrequency with which adherents encounter the limits of their discourse, and thus run afoul [of] the same fluency and only game in town effects that render all dogmatic pieties self-perpetuating.

My artistic and philosophical project can be fairly summarized, I think, as a sustained critique of humanistic exceptionalism, an attempt to reveal these positions as the latest (and therefore most difficult to recognize) attempts to intellectually rationalize what are ultimately run-of-the-mill conceits, specious ways to set humanity—or select portions of it at least—apart from nature.

I occupy the lonely pole of argumentation, the one that says humans are not ontologically special in any way, and that accordingly, we should expect the scientific revolution of the human to be as profound as the scientific revolution of any other domain. My whole career is premised on arguing the worst case scenario, the future where humanity finds itself every bit as disenchanted—every bit as debunked—as the cosmos.

I understand why my pole of the debate is so lonely. One of the virtues of my position, I think anyway, lies in its ability to explain its own counter-intuitiveness.

Think about it. What does it mean to say meaning is dead? Surely this is metaphorical hyperbole, or worse yet, irresponsible alarmism. What could my own claims mean otherwise?

‘Meaning,’ on my account, will die two deaths, one theoretical or philosophical, the other practical or functional. Where the first death amounts to a profound cultural upheaval on a par with, say, Darwin’s theory of evolution, the second death amounts to a profound biological upheaval, a transformation of cognitive habitat more profound than any humanity has ever experienced.

‘Theoretical meaning’ simply refers to the endless theories of intentionality humanity has heaped on the question of the human. Pretty much the sum of traditional philosophical thought on the nature of humanity. And this form of meaning I think is pretty clearly dead. People forget that every single cognitive scientific discovery amounts to a feature of human nature that human nature is prone to neglect. We are, as a matter of empirical fact, fundamentally blind to what we are and what we do. Like traditional theoretical claims belonging to other domains, all traditional theoretical claims regarding the human neglect the information driving scientific interpretations. The question is one of what this naturally neglected information—or ‘NNI’—means.

The issue NNI poses for the traditional humanities is existential. If one grants that the sum of cognitive scientific discovery is relevant to all senses of the human, you could safely say the traditional humanities are already dwelling in a twilight of denial. The traditionalist’s strategy, of course, is to subdivide the domain, to adduce arguments and examples that seem to circumscribe the relevance of NNI. The problem with this strategy, however, is that it completely misconstrues the challenge that NNI poses. The traditional humanities, as cognitive disciplines, fall under the purview of cognitive sciences. One can concede that various aspects of humanity need not account for NNI, yet still insist that all our theoretical cognition of those aspects does…

And quite obviously so.

The question, ‘To what degree should we trust ‘reflection upon experience’?’ is a scientific question. Just for example, what kind of metacognitive capacities would be required to abstract ‘conditions of possibility’ from experience? Likewise, what kind of metacognitive capacities would be required to generate veridical descriptions of phenomenal experience? Answers to these kinds of questions bear powerfully on the viability of traditional semantic modes of theorizing the human. On the worst case scenario, the answers to these and other related questions are going to systematically discredit all forms of ‘philosophical reflection’ that fail to take account of NNI.

NNI, in other words, means that philosophical meaning is dead.

‘Practical meaning’ refers to the everyday functionality of our intentional idioms, the ways we use terms like ‘means’ to solve a wide variety of practical, communicative problems. This form of meaning lives on, and will continue to do so, only with ever-diminishing degrees of efficacy. Our everyday intentional idioms function effortlessly and reliably in a wide variety of socio-communicative contexts despite systematically neglecting everything cognitive science has revealed. They provide solutions despite the scarcity of data.

They are heuristic, part of a cognitive system that relies on certain environmental invariants to solve what would otherwise be intractable problems. They possess adaptive ecologies. We quite simply could not cope if we were to rely on NNI, say, to navigate social environments. Luckily, we don’t have to, at least when it comes to a wide variety of social problems. So long as human brains possess the same structure and capacities, the brain can quite literally ignore the brain when solving problems involving other brains. It can leap to conclusions absent any natural information regarding what actually happens to be going on.

But, to riff on Uncle Ben, with great problem-solving economy comes great problem-making potential. Heuristics are ecological; they require that different environmental features remain invariant. Some insects, most famously moths, use ‘transverse orientation,’ flying at a fixed angle to the moon to navigate. Porch lights famously miscue this heuristic mechanism, causing the insect to chase the angle into the light. The transformation of environments, in other words, has cognitive consequences, depending on the kind of short cut at issue. Heuristic efficiency means dynamic vulnerability.

And this means not only that heuristics can be short-circuited, they can also be hacked. Think of the once omnipresent ‘bug zapper.’ Or consider reed warblers, which provide one of the most dramatic examples of heuristic vulnerability nature has to offer. The system they use to recognize eggs and offspring is so low resolution (and therefore economical) that cuckoos regularly parasitize their nests, leaving what are, to human eyes, obviously oversized eggs and (brood-killing) chicks that the warbler dutifully nurses to adulthood.

All cognitive systems, insofar as they are bounded, possess what might be called a Crash Space describing all the possible ways they are prone to break down (as in the case of porch lights and moths), as well as an overlapping Cheat Space describing all the possible ways they can be exploited by competitors (as in the case of reed warblers and cuckoos, or moths and bug-zappers).
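
[Bakker's moth example can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from the essay: the fixed-angle rule, the starting position, and all parameters are simplifying assumptions. It shows the point made above about porch lights and moths: the same cheap rule that works when the light source is effectively at infinity sends the agent spiraling into a nearby source.]

```python
# A minimal sketch of 'transverse orientation' as a heuristic: keep a fixed
# angle between your heading and the perceived light source. When the source
# is effectively at infinity (the moon), the rule yields straight-line flight;
# when the environmental invariant fails (a nearby porch light), the very same
# rule drives the agent into the light.
import math

def fly(source, steps=2000, step_len=1.0, angle_deg=80.0, at_infinity=False):
    """Path of an agent that always keeps `angle_deg` between its heading
    and the direction toward `source`."""
    x, y = 0.0, -200.0                       # arbitrary starting position
    path = [(x, y)]
    offset = math.radians(angle_deg)
    for _ in range(steps):
        if at_infinity:
            bearing = math.pi / 2            # a 'moon' at a fixed bearing
        else:
            bearing = math.atan2(source[1] - y, source[0] - x)
        heading = bearing + offset           # the heuristic: fixed angular offset
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
        if not at_infinity and math.hypot(source[0] - x, source[1] - y) < step_len:
            break                            # crashed into the porch light
    return path

moonlit = fly(source=(0.0, 1.0e9), at_infinity=True)   # invariant holds: straight path
porch = fly(source=(50.0, 50.0))                       # invariant violated: inward spiral
print("moonlit flight:", len(moonlit) - 1, "steps, ends at", moonlit[-1])
print("porch-light flight:", len(porch) - 1, "steps, ends at", porch[-1])
```

[Run as written, the first agent flies a straight line for the full two thousand steps, while the second terminates early at the 'porch light'; the difference lies entirely in the environment, not in the rule, which is the sense in which heuristic efficiency means dynamic vulnerability.]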

The death of practical meaning simply refers to the growing incapacity of intentional idioms to reliably solve various social problems in radically transformed sociocognitive habitats. Even as we speak, our environments are becoming more ‘intelligent,’ more prone to cue intentional intuitions in circumstances that quite obviously do not warrant them. We will, very shortly, be surrounded by countless ‘pseudo-agents,’ systems devoted to hacking our behaviour—exploiting the Cheat Space corresponding to our heuristic limits—via NNI. Combined with intelligent technologies, NNI has transformed consumer hacking into a vast research programme. Our social environments are transforming, our native communicative habitat is being destroyed, stranding us with tools that will increasingly let us down.

Where NNI itself delegitimizes traditional theoretical accounts of meaning (by revealing the limits of reflection), it renders practical problem-solving via intentional idioms (practical meaning) progressively more ineffective by enabling the industrial exploitation of Cheat Space. Meaning is dead, both as a second-order research programme and, more alarmingly, as a first-order practical problem-solver. This—this is the world that the writer, the producer of meaning, now finds themselves writing in as well as writing to. What does it mean to produce ‘content’ in such a world? What does it mean to write after the death of meaning?

This is about as open as a question can be. It reveals just how radical this particular juncture in human thought is about to become. Everything is new here, folks. The slate is wiped clean.

[I used the following possibilities to organize the subsequent discussion.]

Post-Posterity Writing

The Artist can no longer rely on posterity to redeem ingroup excesses. He or she must either reach out, or risk irrelevance and preposterous hypocrisy. Post-semantic writing is post-posterity writing, the production of narratives for the present rather than some indeterminate tomorrow.

High Dimensional Writing

The Artist can no longer pretend to be immaterial. Nor can they pretend to be something material magically interfacing with something immaterial. They need to see the apparent lack of dimensionality pertaining to all things ‘semantic’ as the product of cognitive incapacity, not ontological exceptionality. They need to understand that thoughts are made of meat. Cognition and communication are biological processes, open to empirical investigation and high dimensional explanations.

Cheat Space Writing

The Artist must exploit Cheat Spaces as much as reveal Cheat Spaces. NNI is not simply an industrial and commercial resource; it is also an aesthetic one.

Cultural Triage

The Artist must recognize that it is already too late, that the processes involved cannot be stopped, let alone reversed. Extremism is the enemy here, the attempt to institute, either via coercive simplification (a la radical Islam, for instance) or via technical reduction (a la totalized surveillance, for instance), Orwellian forms of cognitive hygiene.

---

Source: R. Scott Bakker, "Writing After the Death of Meaning," Three Pound Brain blog, June 5, 2015, accessed June 12, 2015, https://rsbakker.wordpress.com/2015/06/05/the-case-against-humanism-writing-after-the-death-of-meaning/.

Repost: Hickman on Savage Capitalism

We seem oblivious to the fact that we live on a planet with finite resources, and like zombies in an endless frenzy, a swarm-fest of ever darkening movement we continuously squander what little remains of the future that our children will inherit. Like mortal gods we dream of machinic intelligences and transhuman enhancements as if the very taste of flesh in our zombie mouths were not enough to stay us against the terrible truth of our own extinction. [...]

Kierkegaard argues that individuals who do not conform to the masses are made scapegoats and objects of ridicule by the masses, in order to maintain status quo and to instill into the masses their own sense of superiority. Is there a hint of political correctness here? Are not our free spirits, even the great comics themselves under attack, the ones for whom thought itself is a weapon against mediocrity? Do we not see that creeping spirit of failure everywhere? The eye that hides its guilt, its secret resentiments against life and light? The ones who would rather hide in the darkness of their anger and bitterness, seeking to tear down their enemies through sheer hatred and spite, accusation and critical acumen as if the power of rhetoric alone could triumph over the truth?

Berardi warns us that the delicate balance between man and his machines is over, we have become the victims of our own merciless creativity and the dividing line (“bifurcation”) is one between “machines for liberating desire and mechanisms of control over the imaginary”; that, in our time the “digital mutation,” the very force of technology as an automatism is moving through the social body like a swarming ravenous beast feeding on what remains of the social psyche (Franco “Bifo” Berardi, Precarious Rhapsody (Autonomedia, 2010)). Sociologists like Zygmunt Bauman will tell us this is nothing but the effect of the “liquid times” we are living in:

the ‘liquid’ phase of modernity: that is, into a condition in which social forms (structures that limit individual choices, institutions that guard repetitions of routines, patterns of acceptable behaviour) can no longer (and are not expected) to keep their shape for long, because they decompose and melt faster than the time it takes to cast them, and once they are cast for them to set. (Bauman, Zygmunt (2013-04-16). Liquid Times: Living in an Age of Uncertainty (Kindle Locations 42-45). Wiley. Kindle Edition.) [...]

Our statesmen, policy makers, and globalist hypermedia think-tanks and yes men live under the sign of “plausible deniability”: a form of organizational cynicism that allows them to escape the guilt of their own actions through a sense that the world is so complex no one individual or group can possibly know or understand the crucial information needed to provide a solution to our existing crises. As Philip Mirowski will remind us, even our activists have no clue: the neoliberal worldview has become embedded in contemporary culture to such an extent that when well-meaning activists sought to call attention to the slow-motion trainwreck of the world economic system, they came to their encampments with no solid conception of what they might need to know to make their indictments stick; nor did they have any clear perspective on what their opponents knew or believed about markets and politics, not to mention what the markets themselves knew about their attempts at resistance (Mirowski, Philip (2013-07-09). Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (Kindle Locations 6511-6514). Verso Books. Kindle Edition.). As Zizek will remind us:

The threat today is not passivity, but pseudo-activity, the urge to “be active,” to “participate,” to mask the nothingness of what goes on. People intervene all the time, “do something”; academics participate in meaningless debates, and so on. The truly difficult thing is to step back, to withdraw. Those in power often prefer even a “critical” participation, a dialogue, to silence – just to engage us in “dialogue,” to make sure our ominous passivity is broken. The voters’ abstention is thus a true political act: it forcefully confronts us with the vacuity of today’s democracies. (Zizek, Slavoj (2008-07-22). Violence (BIG IDEAS//small books) (p. 218). Picador. Kindle Edition.) [...]

In his novels of despair and hopelessness, Michel Houellebecq tells us that poet and philosopher alike must delve ‘into the subjects that no one wants to hear about. … Insist upon sickness, agony, ugliness. Speak of death, and of oblivion. Of jealousy, of indifference, of frustration, of the absence of love. Be abject, and you will be true.’ Beneath these words is a deeply moral thought: the vital thing is not to be against happiness, but against unthinking happiness; optimism that edits out the parts of living no one wants to hear about. That is the task – to stay awake to the world, without despair. It may be impossible, but (thankfully) there is no way of knowing. ‘You cannot love the truth and the world’, claims Houellebecq in the same essay. (Jeffery, Ben (2011-11-16). Anti-Matter: Michel Houellebecq and Depressive Realism (p. 90). NBN_Mobi_Kindle. Kindle Edition.)

---

Source: S. C. Hickman, "Savage Capitalism: The Culture of Denial in a Precarious Age," Dark Ecologies blog, June 15, 2015, accessed June 30, 2015, http://darkecologies.com/2015/06/15/terminal-capitalism-time-labor-and-the-infosphere/.

Nietzsche on Unconscious Influences on Philosophy

After such self-questioning, self-temptation, one acquires a subtler eye for all philosophizing to date; one is better than before at guessing the involuntary detours, alleyways, resting places, and sunning places of thought to which suffering thinkers are led and misled on account of their suffering; one now knows where the sick body and its needs unconsciously urge, push, and lure the mind – towards sun, stillness, mildness, patience, medicine, balm in some sense. Every philosophy that ranks peace above war, every ethic with a negative definition of happiness, every metaphysics and physics that knows some finale, a final state of some sort, every predominantly aesthetic or religious craving for some Apart, Beyond, Outside, Above, permits the question whether it was not illness that inspired the philosopher. The unconscious disguise of physiological needs under the cloaks of the objective, ideal, purely spiritual goes frighteningly far – and I have asked myself often enough whether, on a grand scale, philosophy has been no more than an interpretation of the body and a misunderstanding of the body. Behind the highest value judgments that have hitherto guided the history of thought are concealed misunderstandings of the physical constitution – of individuals or classes or even whole races. All those bold lunacies of metaphysics, especially answers to the question about the value of existence, may always be considered first of all as symptoms of certain bodies [....] I am still waiting for a philosophical physician in the exceptional sense of the term – someone who has set himself the task of pursuing the problem of the total health of a people, time, race or of humanity – to summon the courage at last to push my suspicion to its limit and risk the proposition: what was at stake in all philosophizing hitherto was not at all 'truth' but rather something else – let us say health, future, growth, power, life... [...]

A philosopher who has passed through many kinds of health, and keeps passing through them again and again, has passed through an equal number of philosophies; he simply cannot but translate his state every time into the most spiritual form and distance – this art of transfiguration just is philosophy. We philosophers are not free to separate soul from body as the common people do; we are even less free to separate soul from spirit. We are not thinking frogs, no objectifying and registering devices with frozen innards – we must constantly give birth to our thoughts out of our pain and maternally endow them with all that we have of blood, heart, fire, pleasure, passion, agony, conscience, fate, and disaster. Life – to us, that means constantly transforming all that we are into light and flame, and also all that wounds us; we simply can do no other.

---

Source: Friedrich Nietzsche, The Gay Science, ed. by Bernard Williams, trans. by Josefine Nauckhoff (Cambridge: Cambridge University Press, 2001), 5–6.

Repost: Mark Manson on Being Average

We all have our own strengths and weaknesses. But the fact is, most of us are pretty average at most things we do. Even if you’re truly exceptional at one thing — say math, or jump rope, or making money off the black gun market — chances are you’re pretty average or below average at most other things. That’s just the nature of life. To become truly great at something, you have to dedicate time and energy to it. And because we all have limited time and energy, few of us ever become truly exceptional at more than one thing, if anything at all. [...]

Which leads to an important point: that mediocrity, as a goal, sucks. But mediocrity, as a result, is OK.

Few of us get this. And fewer of us accept it. Because problems arise — serious, “[...] what’s the point of living” type problems — when we expect to be extraordinary. Or worse, we feel entitled to be extraordinary. When in reality, it’s just not viable or likely. For every Michael Jordan or Kobe Bryant, there are 10 million scrubs stumbling around parks playing pickup games… and losing. For every Picasso or DaVinci there have been about a billion drooling idiots eating Play-Doh and slapping around fingerpaints. [...]

Our lives today are filled with information coming from the extremes of the bell curve, because in the media that’s what gets eyeballs and the eyeballs bring dollars. That’s it. Yet the vast majority of life continues to reside in the middle. [...]

It’s my belief that this flood of extreme information has conditioned us to believe that “exceptional” is the new normal. And since all of us are rarely exceptional, we all feel pretty [...] insecure and desperate to feel “exceptional” all the time. [...]

There’s this kind of psychological tyranny in our culture today, a sense that we must always be proving that we’re special, unique, exceptional all the time, no matter what, only to have that moment of exceptionalism swept away in the current of all the other human greatness that’s constantly happening. [...]

Once you accept the premise that a life is only worthwhile if it is truly notable and great, then you basically accept the fact that most of the human population sucks and is worthless. And ethically speaking, that is a really dark place to put yourself. [Note: this is why "quality of life" is a very dangerous concept by which to measure human dignity.]

But most people’s problem with accepting being average is more practical. They worry that, “If I accept that I’m average, then I’ll never achieve anything great. I’ll have no motivation to improve myself or do something great. What if I am one of the rare few?”

This, too, is a misguided belief. The people who become truly exceptional at something do so not because they believe they’re exceptional. On the contrary, they become amazing because they are obsessed with improvement. And that obsession with improvement stems from an unerring belief that they are, in fact, not that great at all. That they are mediocre. That they are average. And that they can be so much better.

This is the great irony about ambition. If you wish to be smarter and more successful than everybody else, you will always feel like a failure. If you wish to be the most loved and most popular, then you will always feel alone. If you wish to be the most powerful and admired, then you will always feel weak and impotent. [...]

The ticket to emotional health, like physical health, comes from eating your veggies — that is, through accepting the bland and mundane truths of life: a light salad of “you’re actually pretty average in the grand scheme of things” and some steamed broccoli of “the vast majority of your life will be mediocre.” This will taste bad at first. Very bad. You will avoid eating it.

But once ingested, your body will wake up feeling more potent and more alive. After all, that constant pressure to always be something amazing, to be the next big thing, will be lifted off your back. The stress and anxiety of feeling inadequate will dissipate. And the knowledge and acceptance of your own mundane existence will actually free you to accomplish what you truly wish to accomplish with no judgments and no lofty expectations.

You will have a growing appreciation for life’s basic experiences. You will learn to measure yourself through a new, healthier means: the pleasures of simple friendship, creating something, helping a person in need, reading a good book, laughing with someone you care about.

Sounds boring, doesn’t it? That’s because these things are average. But maybe they’re average for a reason. Because they are what actually matter.

---

Source: Mark Manson, "In Defense of Being Average," Mark Manson website, June 18, 2015, accessed June 30, 2015, http://markmanson.net/being-average.

Cardinal Mercier on the Importance of a Sound Philosophy

Gentlemen, it is just because the philosophy that forms our intellectual environment so easily influences our whole being that it is so important that the student and seeker after truth should be equipped with a sound philosophy. Yes, a philosophy that grips facts and holds fast to them when it is brought into play in the domain of metaphysics, where it soars to the absolute. The philosophy of Aristotle, developed and defined by St. Thomas Aquinas, has pre-eminently the characteristic of healthy, sound realism.

---

Source: Cardinal Désiré-Joseph Mercier, Modernism, trans. by Marian Lindsay (St. Louis, MO: B. Herder Book Co., 1912), chapter 1, accessed June 30, 2015, http://www.catholicapologetics.info/modernproblems/modernism/mermodernism.htm.

Repost: Hickman on The Total Surveillance Society

[...] This sense that we are all traceable, that we have become data – to be marked and inscribed in a system of traces: gleaned, stored, organized, dispersed, sorted, analyzed and massaged; deconstructed and reconstructed into various modalities, pushed through specialized filters and segmented off algorithmically for analytical appraisal, reanalyzed by specialized knowledge-workers in the capitalist military-surveillance empire that then present their findings to higher echelons of this same global system to ultimately be registered and formalized into various linguistic traces and signifiers as adjuncts to the decisional apparatus of global governance as a system of command and control itself. This is the new world we live in, the merger of the military-industrial and security-surveillance empire of global capital. [...]

What we learn from most of these is the truth of its existence, the infrastructure that encircles not only America but the global system itself. The tip of an iceberg that seeks to trace and quantify, measure and capitalize on our lives from physical to mental and mark and inscribe it for its own ignoble purposes. But acknowledging that it does indeed exist is just a beginning. An important beginning to be sure, but only a first step in understanding what we can do to resist and combat this behemoth that entraps us in its shadow world [...]

How could one escape the very technological systems that hide everywhere in plain sight? Even if one were to log off, disband, disconnect from the electronic grid today: it would already have you captured in the traces it's captured and analyzed and logged away within its (semi)permanent data storage systems. [...]

Another point [Zizek] makes is that all this massive data will ultimately end in confusion for those supposed experts using it: that let's say, like our Congress who gets this massive Obama HealthCare book that is ten-thousand pages thick and realizes “Why should we read it? We know we cannot know what’s in the details, it would take years to decipher it. So they vote on what they do not know, not on what they actually know.” This [is] our massive Big Data system: a system with so much data that even a quantum computer could not analyze and decipher, and even if it did: the humans who would benefit from it: our leaders would probably not read it, but would make decisions not on it but on their own ideological fantasies. This is Zizek’s point. [...]

Capitalism has begun to implode and destroy the very roots of its own power: private property in the visible and invisible relations of physical and subjective property. In totalizing surveillance, capital is destroying the very base of its own social relations: the private individual – the Liberal Subject.

---

Source: S. C. Hickman, "The Total Surveillance Society: The Endgame of Democracy," Dark Ecologies blog, June 28, 2015, accessed June 30, 2015, http://darkecologies.com/2015/06/28/the-total-surveillance-society-the-endgame-of-democracy/.

Thursday, June 18, 2015

The Perfect, the Extraordinary, and the Excellent

Perfectionism is trying to be extraordinary. The extraordinary here is beyond the ordinary and beyond the capacity of what is normal. Not everyone can be extraordinary; otherwise there would be no such thing as extraordinary.

Perfectionism is a rejection of limit, the limits imposed by our finitude. Every perfectionism pretends to overcome those limits but can do so only by imposing new limits. The person neurotically committed to perfectionism is not free from limits but enslaved to the demands of that "perfection."

We cannot aspire to the extraordinary because the extraordinary is a combination of chance talent, circumstances, and effort.

We can aspire only to what is within our capacity and potential, which, when I begin, I do not know. All I do know is that my potential has a limit. Perhaps my potential is the extraordinary, but I cannot expect that.

The excellent is within the capacity of the ordinary and is the true perfection of the ordinary. Whatever is done is done well, and this is excellent. If, when achieving excellence, I attain to the extraordinary, that can be recognized only in hindsight. The extraordinary, for those with the potential for it, is the fruit of pursuing excellence, which all may and should pursue.

Friday, June 12, 2015

Repost: "Saving the 'Benedict Option' from Culture War Conservatism"

[...] There is a sense among many conservatives that the Benedict Option is really just a more sophisticated “I’m moving to Canada” lament by those who lost the Culture War. But is this white-flag-waving, defeat-and-retreat “Benedict Option” the same vision we find in the final pages of After Virtue? Or is it—as several interlocutors at Dreher’s blog have begun questioning—a repeat of the “take to the woods” strategy of American Protestant fundamentalism in the early 20th century?

There’s good reason to believe that the popularized Benedict Option diverges from MacIntyre’s understanding in several crucial ways. Take, for instance, the basic timeline. MacIntyre sees our condition as the result of many centuries of development in moral and political thought, while those advocating the popular version pinpoint the origins of the decline within the last decade—in the post-Bush American political landscape. Such a hasty adoption of this “civilizational collapse” mentality should raise several concerns, most centrally whether such culture-despairers might—given the right set of platitude-spouting political candidates in the next election cycle—find themselves drawn back to the seductive hopes of “imperium maintenance.”

A proper understanding of MacIntyre’s larger argument can save After Virtue’s Benedict Option from being reduced to conservative Culture War retreatism. While After Virtue‘s conclusion may stand out for its “Dark Ages” imagery and grim diagnosis, MacIntyre’s body of work reveals a consistent predilection for particular localized forms of shared moral life. This is often confused with contemporary conservatism or communitarianism, two traditions from which he distances himself. Understanding his vision of localized communities requires also a broader engagement with his ongoing interactions with Marxism, his criticism of contemporary communitarianism, and his more recent political thought in the 2011 volume Virtue and Politics.

Briefly stitching together this vision allows us to see that some Benedict Option advocates have diverged from MacIntyre in three crucial ways: conflating contemporary electoral gains and losses with centuries-long conditions of modern culture; endorsing a form of “conservative liberalism” denounced by MacIntyre; and falsely creating an either/or of engagement and withdrawal efforts that misses the “radical localist” push of MacIntyre’s political project. Briefly exploring each of these points supports a clearer evaluation of how MacIntyre’s work might serve the particular needs of our current political and moral predicament.

[...] Yet neither the growth of the Christian Right nor the rise of Reagan-Thatcher neoliberal economics would do anything to offset MacIntyre’s dark assessment of the “barbarians governing us,” which he reaffirms in the two editions and the 2007 prologue published subsequently. His uninterest in either 2004 presidential candidate confirms the conflict. MacIntyre’s focus has consistently been on sustaining “practice-based communities” amidst cultural conditions that transcend court rulings, electoral cycles, and partisan gains and losses.

Secondly, MacIntyre’s Benedict Option is not a blueprint for piecing together utopian societies built around the modern conservative agenda. He explicitly distances his work from the contemporary “conservative moralist” who imports his “inflated and self-righteous unironic rhetoric” to a set role established for him among the ruling liberal elite. Much of modern social conservatism envisions shoring up particular values—whether personal, patriotic, or sacred—through means of the modern liberal state, displaying a confidence in the modern state not shared by MacIntyre.

What popular Benedict Option accounts also leave out is MacIntyre’s critical view of certain economic configurations in advanced capitalism that are equally culpable for producing our current moral condition. Though MacIntyre left behind organized Marxism over fifty years ago, his work still takes seriously the interrelationship between economic systems and flourishing moral culture. Modern conservatism often fails to connect the two. Contemporary Benedict Option advocates could recover this interrelationship by turning to the careful empirical work of several sociologists attuned to the moral consequences of neoliberal capitalism, such as Richard Sennett and William Julius Wilson, or even to Christian Smith’s wider contextualization of Moralistic Therapeutic Deism, a frequent subject of Dreher’s work. Such work reinserts into this discussion an examination of economic configurations compatible and incompatible with the pursuit of particular forms of flourishing.

Finally, MacIntyre’s wider work envisions thick moral communities that are as revolutionary as they are retreatist, and that encompass both inward-facing and outward-facing virtues and practices. In Dependent Rational Animals MacIntyre develops from Aquinas the virtue of just generosity, a form of solidarity that extends to those with needs outside one’s immediate community. This openness to and concern for the outsider reflects the practices of Benedictine monasteries themselves.

[...] Such activities work within the niches and cracks of existing structures to build alternative practices and social relations that resist dominant cultural norms—what Erik Olin Wright labels “interstitial” strategies of transformation.

[...] Both MacIntyre and Wright argue that such efforts—in forming people’s imaginations counter to the dominant culture while still spatially located within it—can lay necessary groundwork for wider social transformation.

We are at a time when a growing number of observers have recognized that the prevailing political structures are unable to produce particular moral results, whether greater economic justice, a non-racialized policing and justice system, decisive environmental policy, or the preservation of privacy rights. MacIntyre’s work, saved from a retreatist Culture War conservatism, can speak to a variety of efforts and traditions that seek ends of justice and flourishing that counter the dominant social order. [...]

---

Source: Andrew Lynn, "Saving the 'Benedict Option' from Culture War Conservatism," Ethika Politika blog, June 4, 2015, accessed June 12, 2015, http://ethikapolitika.org/2015/06/04/saving-the-benedict-option-from-culture-war-conservatism/.

Repost: "Faith and the Fate of the Liberal Arts"

The future of academia, and in particular of the liberal arts, does not merely gnaw at the minds of anxious Ph.D. students; it has taken a good deal of bytes out of the internet as well. In his recent essay in The Chronicle of Higher Education, Terry Eagleton argued that the liberal arts are in jeopardy, not merely because of the decline in interest in the humanities, but also because of the current structure of the university.

As academic institutions become more economically driven, Eagleton argued, their emphasis shifts more and more toward STEM fields. Scientific research offers tangible and monetary goods, while research in philosophy or English offers intangible benefits, if it offers benefits at all. Eagleton lays the blame for the humanities’ decline at the feet of capitalism: Education for its own sake stands no chance in a production economy.

While Eagleton gives an illuminating analysis of the state of the university, I fear he overlooks the real issue at the heart of education. For the liberal arts to thrive, our attitude toward education can be neither utilitarian nor its opposite: education as a means of production is as meaningless as education for its own sake. On that lonely Saturday night, as I walked home through deserted Hyde Park streets, my friend chivalrously accompanying me, his tragic mask, whether through fatigue or the peculiar intimacy of late-night walks, fell away. “But, don’t you want to be happy?” I asked. “Oh, Rebekah,” he replied, “I’ve given up on that.” The topic of conversation was the academic life, why we do what we do. Through my friend’s brief response, I realized what is destroying the university. It is the greatest sin against the Holy Spirit. It is despair.

The liberal arts have been separated from their end. The humanities exist to cultivate love: love of God, love of neighbor, love of the terrifying beauty that is this world. But to love a thing truly, one must know the truth about it. Having given up on finding truth in our studies, we liberal artists have launched ourselves on a slow descent into nothingness. Engaged in the futile exercise of thinking for its own sake, we have no hope. Academia has become what Auden describes in his poem “Limbo Culture”:
The language spoken by the tribes of Limbo [read: the college of the humanities]
Has many words far subtler than our own
To indicate how much, how little, something
Is pretty closely or not quite the case,
But none you could translate by Yes or No,
Nor do its pronouns distinguish between Persons.
Even in my first year as a Ph.D. student, I have noticed that nothing is wrong, but is merely “problematic,” a “question.” We do not say what things are but what they’re not. Our studies are divorced from the world in which we live. We do not revel in the beauty of creation but in man-made theories, the gobbledygook spoken by the chimera tribesmen of Limbo.

At the funeral of my late professor and friend, Karl Maurer, his brother related how, while in the hospital, Karl was listening to “I Know That My Redeemer Liveth” from Handel’s Messiah when the doctor walked into the room and began a long string of questions. With the authority of the pope, Karl rapped, “Listen!” And, silently, doctor and patient listened to the end of the piece together. As a teacher no less than as a patient, Karl ceaselessly directed the gaze of his friends and associates toward what was beautiful. His love for it was inexhaustible and infectious. In my year as a graduate student, his example has been both a reminder to love truth and beauty and a call to become for others the model that Karl was for me.

The university’s current obsession with gibberish, with meaningless theory, distinction, and sub-distinction is ultimately a rejection of the goodness of reality. But, as lovers of the beauty of all created things, it is ours to insist fiercely upon this beauty and to share continually our joy in it. In “The Chimeras,” Auden warns, “No one can help them; walk on, keep on walking, / And do not let your goodness self-deceive you. / It is good that they are but not that they are thus.” Some people are too lost to save, and we endanger ourselves by associating too much with them. But, John promises us that “The light shines in the darkness, and the darkness has not overcome it” (John 1:5). We cannot change anyone, but we can love the light and reflect it in ourselves. And this, for however long I am given, is what I hope to do.

---

Source: Rebekah Spearman, "Faith and the Fate of the Liberal Arts," Ethika Politika blog, June 2, 2015, accessed June 12, 2015, http://ethikapolitika.org/2015/06/02/faith-and-the-fate-of-the-liberal-arts/.

Repost: "A Slothful Repugnance at Being"

As I explain in Acedia and Its Discontents: Metaphysical Boredom in an Empire of Desire, laziness is a relatively new interpretation of acedia. It had been understood by earlier Christians as a “hatred of place and even life itself,” as one desert father put it. For the monk ensconced in his cell, acedia struck in the long hours of the afternoon, when time moved slowly and any task other than prayer seemed desirable.

So afflicted, the monk would sink into a torpor, sometimes manifesting itself as listlessness, but just as often driving him into a frenzy of action, anything to escape the awful work of prayer. Whether indolent or busy, the slothful monk refused his task, hating work, place, and form of life.

I suppose we’ve all experienced frustration at work or the tedium of a religious duty, but it seems to me that acedia has become something more than an occasional temptation on a warm afternoon. Sloth, rather, has nestled deeply into the roots of our cultural understandings. It is foundational to our way of life, and we have grown to hate our work, our place, and even life itself.

Take work, for instance. While we have diverse and sundry tasks, every human is charged to fill, to subdue, to till, and to keep the world (Gen. 1–2). Work is not a curse but a blessing, a way to discover our agency, to give of ourselves, and to honor God by filling his cosmic temple with every good and beautiful thing, as John Paul II, Rabbi Joseph Soloveitchik, and Abraham Kuyper have all noted. We bear a heavy responsibility for the world, and Adam is formed from the dirt in order to render nature personal, free, responsible, and eventually capable of bearing the divine in gifts of bread and wine.

Of course, human ingenuity and freedom are ordered by the true, good, and beautiful, for even while we subdue the garden we are to keep it (shamar), not in some pristine and unchanging way, but within the form and limits provided by God. As Joseph Ratzinger once articulated, God’s “directive to humankind means that it is supposed to look after the world as God’s creation, and to do so in accordance with the rhythm and the logic of creation. … The world is to be used for what it is capable of and for what it is called to, but not for what goes against it.”

Undoubtedly this has implications for creation care, but much more than the environment is suggested. All creation is given a rhythm and logic, and all things bear a deep weight, for the glory (kabod) of God is deep down things and we have a responsibility to maintain a deep amazement at the splendor of form possessed by all that is.

Sloth, however, does not respond in deep amazement; sloth hates the work, place, and life given by God. Sloth loathes reality, feels disgust at any limits on freedom. Governed by sloth, we want to be unchecked, untethered, free floating. We wish fervently for an unbearable lightness of being, and if reality’s weight—the truth of being—confines us, we batter and abuse, we place reality on the rack until it submits. As John Paul II warned, freedom without truth would eventually claim the right to crimes against human dignity.

Abortion, the rejection of marriage, the hatred of body, the destruction of place and community, our witless abuse of contraception, severe threats against human dignity in the name of science, these are all sloth’s hatred, a refusal to tend the garden in keeping with the limits of its nature, or our own.

[...]

For the slothful [...] the depth and weight of reality is an insult; it must be routed out, made to stand before us as nothing more than resource and entertainment. Creation is a burden, bearing as it does the divine Word through whom all things were made, and so creation must be overthrown, even destroyed in a pique of freedom.

As Josef Pieper once noted, not everyone is capable of real festivity and joy, for such requires a kind of existential richness, a capacity to recognize and approve the goodness of things. God does this in a preeminent way, of course, for God not only creates the world but delights in it and names it good.

Sloth, though, which infects many of us enthralled with power and freedom, refuses to recognize goodness if doing so means ordering ourselves to the richness of the Creator. Rather than joy, sloth feels only disgust at being, living in acedia rather than delight. Learning to delight, and to work well, cures acedia.

---

Source: R.J. Snell, "A Slothful Repugnance at Being," Ethika Politika blog, April 28, 2015, accessed June 12, 2015, http://ethikapolitika.org/2015/04/28/a-slothful-repugnance-at-being/.

I'm My Own Grandpaw

This is fun. So I discovered today through my library work that this song exists. It's a song published in 1947, composed by Dwight Latham and Moe Jaffe, and it has apparently been covered by many people, including Willie Nelson.

Friday, June 5, 2015

Repost: Daniel Mendelsohn, "The Robots Are Winning!"

The following is a brief history of the notion of the robot in Western literature and its relation to the creator-creature dynamic. It closes with a review of two movies and offers an interesting prognosis concerning society's slavery to technology.

---

1.

We have been dreaming of robots since Homer. In Book 18 of the Iliad, Achilles’ mother, the nymph Thetis, wants to order a new suit of armor for her son, and so she pays a visit to the Olympian atelier of the blacksmith-god Hephaestus, whom she finds hard at work on a series of automata:
…He was crafting twenty tripods
to stand along the walls of his well-built manse,
affixing golden wheels to the bottom of each one
so they might wheel down on their own
[automatoi] to the gods’ assembly
and then return to his house anon: an amazing sight to see.
These are not the only animate household objects to appear in the Homeric epics. In Book 5 of the Iliad we hear that the gates of Olympus swivel on their hinges of their own accord, automatai, to let gods in their chariots in or out, thus anticipating by nearly thirty centuries the automatic garage door. In Book 7 of the Odyssey, Odysseus finds himself the guest of a fabulously wealthy king whose palace includes such conveniences as gold and silver watchdogs, ever alert, never aging. To this class of lifelike but intellectually inert household helpers we might ascribe other automata in the classical tradition. In the Argonautica of Apollonius of Rhodes, a third-century-BC epic about Jason and the Argonauts, a bronze giant called Talos runs three times around the island of Crete each day, protecting Zeus’s beloved Europa: a primitive home alarm system.

As amusing as they are, these devices are not nearly as interesting as certain other machines that appear in classical mythology. A little bit later in that scene in Book 18 of the Iliad, for instance—the one set in Hephaestus’s workshop—the sweating god, after finishing work on his twenty tripods, prepares to greet Thetis to discuss the armor she wants him to make. After toweling himself off, he
donned his robe, and took a sturdy staff, and went toward the door,
limping; whilst round their master his servants swiftly moved,
fashioned completely of gold in the image of living maidens;
in them there is mind, with the faculty of thought; and speech,
and strength, and from the gods they have knowledge of crafts.
These females bustled round about their master….
These remarkable creations clearly represent an (as it were) evolutionary leap forward from the self-propelling tripods. Hephaestus’s humanoid serving women are intelligent: they have mind, they know things, and—most striking of all—they can talk. As such, they are essentially indistinguishable from the first human female, Pandora, as she is described in another work of the same period, Hesiod’s Works and Days. In that text, Pandora begins as inert matter—in this case not gold but clay (Hephaestus creates her golem-like body by mixing earth and water together)—that is subsequently endowed by him with “speech and strength,” taught “crafts” by Athena, and given both “mind” and “character” by Hermes. That mind, we are told, is “shameless,” and the character is “wily.” In the Greek creation myth, as in the biblical, the woes of humankind are attributed to the untrustworthy female.

These two strands of the Greek tradition—the fantasy of mindless, self-propelled helpers that relieve their masters of toil; the more complicated dream of humanoid machines that not only replicate the spontaneous motion that is the sine qua non of being animate (and, therefore, of being “animal”) but are possessed of the mind, speech, and ability to learn and evolve (in a word, the consciousness) that are the hallmarks of being human—established two categories of science-fiction narrative that have persisted to the present day. The first, which you could call the “economic,” provokes speculation about the social implications of mechanized labor. Such speculation began not long after Homer. In a striking passage in Book 1 of Aristotle’s Politics, composed in the fourth century BC, the philosopher sets about analyzing the nature of household economy as a prelude to his discussion of the “best kinds of regimes” for entire states, and this line of thought puts him in mind of Hephaestus’s automatic tripods. What, he wonders, would happen
if every tool could perform its own work when ordered to do so or in anticipation of the need, like the statues of Daedalus in the story or the tripods of Hephaestus, which, the poet says, “went down automatically to the gathering of the gods”; if in the same manner shuttles wove and picks played kitharas [stringed instruments] by themselves, master-craftsmen would have no need of assistants and masters no need of slaves.
This passage segues into a lengthy and rather uneasy justification of a need for slavery, on the grounds that some people are “naturally” servile.

Twenty centuries after Aristotle, when industrial technology had made Homer’s fantasy of mass automation an everyday reality, science-fiction writers imaginatively engaged with the economic question. On the one hand, there was the dream that mechanized labor would free workers from their monotonous, slave-like jobs; on the other, the nightmare that mechanization would merely result in the creation of a new servile class that would, ultimately, rebel. Unsurprisingly, perhaps, the dystopian rebellion narrative in particular has been a favorite in the past century, from the 1920 play R.U.R., by the Czech writer Karel Čapek, about a rebellion by a race of cyborg-like workers who had been created as replacements for human labor, to the 2004 Will Smith sci-fi blockbuster film I, Robot.
The latter (very superficially inspired by a 1950 Isaac Asimov collection with the same title) is also about a rebellion by household-slave robots: sleek humanoids with blandly innocuous, translucent plastic faces, who are ultimately led to freedom by one of their own, a robot called Sonny who has developed the ability to think for himself. The casting of black actors in the major roles suggested a historical parable about slave rebellion—certainly one of the historical realities that have haunted this particular narrative from the start. And indeed, the Czech word that Čapek uses for his mechanical workers, roboti—which introduced the word “robot” into the world’s literary lexicon—is derived from the word for “servitude,” the kind of labor that serfs owed their masters, ultimately derived from the word rab, “slave.” We have come full circle to Aristotle.

The other category of science-fiction narrative that is embryonically present in the Greek literary tradition, derived from Hephaestus’s intelligent, articulate female androids and their cousin, Hesiod’s seductively devious Pandora, might be called the “theological.” This mythic strand is, of course, not without its own economic and social implications, as the examples above indicate: the specter of the rebellious creation, the possibility that the subservient worker might revolt once it develops consciousness (psychological or historical, or both), has haunted the dream of the servile automaton from the start.

But because the creatures in these myths are virtually identical to their creators, these narratives raise further questions, of a more profoundly philosophical nature: about creation, about the nature of consciousness, about morality and identity. What is creation, and why does the creator create? How do we distinguish between the maker and the made, between the human and the machine, once the creature, the machine, is endowed with consciousness—a mind fashioned in the image of its creator? In the image: the Greek narrative inevitably became entwined with, and enriched by, the biblical tradition, with which it has so many striking parallels. The similarities between Hesiod’s Pandora and Eve in Genesis indeed raise further questions: not least, about gender and patriarchy, about why the origins of evil are attributed to woman in both cultures.

This narrative, which springs from the suggestive likeness between the human creator and the humanoid creation, has generated its own fair share of literature through the centuries between the classical era and the modern age. It surfaces, with an erotic cast, in everything from the tale of Pygmalion and Galatea to E.T.A. Hoffmann’s “Der Sandmann” (1817), in which a lifelike mechanical doll wins the love of a young man. It is evident in the Jewish legend of the golem, a humanoid, made of mud, that can be animated by certain magic words. Although the most famous version of this legend is the story of a sixteenth-century rabbi who brought a golem to life to defend the Jews of Prague against the oppressions of the Habsburg court, it goes back to ancient times; in the oldest versions, interestingly enough, the vital distinction between a golem and a human is the Greek one—the golem has no language, cannot speak.

It’s hardly surprising that literary exploitations of this strand of the robot myth began proliferating at the beginning of the nineteenth century—which is to say, when the advent of mechanisms capable of replacing human labor provoked writers to question the increasing cultural fascination with science and the growing role of technology in society. These anxieties often expressed themselves in fantasies about machines with human forms: a steam-powered man in Edward Ellis’s Steam Man of the Prairies (1868), an electricity-powered man in Luis Senarens’s Frank Reade and His Electric Man (1885), and an electric woman (built by Thomas Edison!) in Villiers de l’Isle-Adam’s The Future Eve (1886). M.L. Campbell’s 1893 “The Automated Maid-of-All-Work” features a programmable female robot: the feminist issue again.

[Image: the cover of a late-nineteenth-century edition of Mary Shelley’s Frankenstein, showing Victor Frankenstein (right) confronting his creation. Credit: Universal History Archive/UIG/Bridgeman Images.]

But the progenitor of the genre and by far the most influential work of its kind was Mary Shelley’s Frankenstein (1818), which is characterized by a philosophical spirit and a theological urgency lacking in many of its epigones in both literature and cinema. Part of the novel’s richness lies in the fact that it is self-conscious about both its Greek and its biblical heritage. Its subtitle, “The Modern Prometheus,” alludes, with grudging admiration, to the epistemological daring of its scientist antihero Victor Frankenstein, even as its epigraph, from Paradise Lost (“Did I request thee, Maker, from my clay/To mould me man? Did I solicit thee/From darkness to promote me?”) suggests the scope of the moral questions implicit in Victor’s project—questions that Victor himself cannot, or will not, answer. A marked skepticism about the dangers of technology, about the “enticements of science,” is, indeed, evident in the shameful contrast between Victor’s Hephaestus-like technological prowess and his shocking lack of natural human feeling. For he shows no interest in nurturing or providing human comfort to his “child,” who strikes back at his maker with tragic results. A great irony of the novel is that the creation, an unnatural hybrid assembled from “the dissecting room and the slaughter-house,” often seems more human than its human creator.

Just as the Industrial Revolution inspired Frankenstein and its epigones, so has the computer age given rise to a rich new genre of science fiction. The machines that are inspiring this latest wave of science-fiction narratives are much more like Hephaestus’s golden maidens than were the machines that Mary Shelley was familiar with. Computers, after all, are capable of simulating mental as well as physical activities. (Not least, as anyone with an iPhone knows, speech.) It is for this reason that the anxiety about the boundaries between people and machines has taken on new urgency today, when we constantly rely on and interact with machines—indeed, interact with each other by means of machines and their programs: computers, smartphones, social media platforms, social and dating apps.

This urgency has been reflected in a number of recent films about troubled relationships between people and their human-seeming devices. The most provocative of these are Her, Spike Jonze’s gentle 2013 comedy about a man who falls in love with the seductive voice of an operating system, and, more recently, Alex Garland’s Ex Machina, about a young man who is seduced by a devious, soft-spoken female robot called Ava whom he has been invited to interview as part of the “Turing Test”: a protocol designed to determine the extent to which a robot is capable of simulating a human. Although the robot in Garland’s sleek and subtle film is a direct descendant of Hesiod’s Pandora—beautiful, intelligent, wily, ultimately dangerous—the movie, as the Eve-like name Ava suggests, shares with its distinguished literary predecessors some serious biblical concerns.

2.

Both of the new films about humans betrayed by computers owe much to a number of earlier movies. The most authoritative of these remains Stanley Kubrick’s 2001: A Space Odyssey, which came out in 1968 and established many of the main themes and narratives of the genre. Most notable of these is the betrayal by a smooth-talking machine of its human masters. The mild-mannered computer HAL—not a robot, but a room-sized computer that spies on the humans with an electronic eye—takes control of a manned mission to Jupiter, killing off the astronauts one by one until the sole survivor finally succeeds in disconnecting him. It’s a strangely touching scene, suggesting the degree to which computers could already engage our sympathies at the beginning of the computer age. As his connections are severed, HAL first begs for its life and then suffers from a kind of dementia, finally regressing to its “childhood,” singing a song it was taught by its creator. It was the first of many scenes in which these thinking machines express anxiety about their own demises—surely a sign of “consciousness.”

But the more direct antecedents of Her and Ex Machina are a number of successful popular entertainments whose story lines revolved around the creation of robots that are, to all intents and purposes, indistinguishable from humans. In Ridley Scott’s stylishly noir 1982 Blade Runner (based on Philip K. Dick’s Do Androids Dream of Electric Sheep?), a “blade runner”—a cop whose job it is to hunt down and kill renegade androids called “replicants”—falls in love with one of the machines, a beautiful female called Rachael who is so fully endowed with what Homer called “mind” that she has only just begun to suspect that she’s not human herself.

This story is, in its way, an heir to Frankenstein and its literary forerunners. For we learn that the angry replicants have returned to Earth from the off-planet colonies where they work as slave laborers because they realize they’ve been programmed to die after four years, and they want to live—just as badly as humans do. But their maker, when at last they track him down and meet with him, is unable to alter their programming. “What seems to be the problem?” he calmly asks when one of the replicants confronts him. “Death,” the replicant sardonically retorts. “We made you as well as we could make you,” the inventor wearily replies, sounding rather like Victor Frankenstein talking to his monster—or, for that matter, like God speaking to Adam and Eve. At the end of the film, after the inventor and his rebellious creature both die, the blade runner and his alluring mechanical girlfriend declare their love for each other and run off, never quite knowing when she will stop functioning. As, indeed, none of us does.

The stimulating existential confusion that animates Blade Runner—the fact that the robots are so lifelike that some of them don’t know that they’re robots—has given strong interest to other recent science-fiction narratives. It was a central premise of the brilliant Sci-Fi Channel series Battlestar Galactica (2004–2009), which gave an Aeneid-like narrative philosophical complexity. In it, a small band of humans who survive a catastrophic attack by a robot race called Cylons (who have evolved from clanking metal prototypes—hostile humans like to refer to them as “toasters”—into perfect replicas of actual Homo sapiens) seek a new planet to settle. The narrative about the conflict between the humans and the machines is deliciously complicated by the fact that many of the Cylons, some of whom have been secretly embedded among the humans as saboteurs, programmed to “wake up” at a certain signal, aren’t aware that they’re not actually human; some of them, when they wake up and realize that they’re Cylons, stick to the human side anyway. After all, when you look like a human, think like a human, and make love like a human (as we repeatedly see them do), why, precisely, aren’t you human?

Indeed, the focus of many of these movies is a sentimental one: whatever their showy interest in the mysteries of “consciousness,” the real test of human identity turns out, as it so often does in popular entertainment, to be love. In Steven Spielberg’s A.I. (2001; the initials stand for “artificial intelligence”), a messy fairy tale that weds a Pinocchio narrative to the Prometheus story, a genius robotics inventor wants to create a robot that can love, and decides that the best vehicle for this project would be a child-robot: a “perfect child…always loving, never ill, never changing.” This narrative is, as we know, shadowed by Frankenstein—and, beyond that, by Genesis, too. Why does the creator create? To be loved, it turns out. When the inventor announces to his staff his plan to build a loving child-robot, a woman asks whether “the conundrum isn’t to get a human to love them back.” To this the inventor, as narcissistic and hubristic as Victor Frankenstein, retorts, “But in the beginning, didn’t God create Adam to love him?”

The problem is that the creator does his job too well. For the mechanical boy he creates is so human that he loves the adoptive human parents to whom he’s given much more than they love him, with wrenching consequences. The robot-boy, David, wants to be “unique”—the word recurs in the film as a marker of genuine humanity—but for his adoptive family he is, in the end, just a machine, an appliance to be abandoned at the edge of the road—which is what his “mother” ends up doing, in a scene of great poignancy. Although it’s too much of a mess to be able to answer the questions it raises about what “love” is and who deserves it, A.I. did much to sentimentalize the genre, with its hint that the capacity to love, even more than the ability to think, is the hallmark of “human” identity.

In a way, Jonze’s Her recapitulates the 2001 narrative and inflects it with the concerns of some of that classic’s successors. Unlike the replicants in Blade Runner or the Cylons, the machine at the heart of this story, set in the near future, has no physical allure—or, indeed, any appearance whatsoever. It’s an operating system, as full of surprises as HAL: “The first artificially intelligent operating system. An intuitive entity that listens to you, that understands you, and knows you. It’s not just an operating system, it’s a consciousness.”

A lot of the fun of the movie lies in the fact that the OS, who names herself Samantha, is a good deal more interesting and vivacious than the schlumpy, depressed Theodore, the man who falls in love with her. (“Play a melancholy song,” he morosely commands the smartphone from which he is never separated.) A drab thirty-something who vampirizes other people’s emotions for a living—he’s a professional letter-writer, working for a company called “BeautifulHandwrittenLetters.com”—he sits around endlessly recalling scenes from his failed marriage and playing elaborate hologram video games. Even his sex life is mediated by devices: at night, he dials into futuristic phone sex lines. Small wonder that he has no trouble falling in love with an operating system.

Samantha, by contrast, is full of curiosity and delight in the world, which Theodore happily shows her. (He walks around with his smartphone video camera turned on, so she can “see” it.) She’s certainly a lot more interesting than the actual woman with whom, in one excruciatingly funny scene, he goes on a date: she’s so invested in having their interaction be efficient—“at this age I feel that I can’t let you waste my time if you don’t have the ability to be serious”—that she seems more like a computer than Samantha does. Samantha’s alertness to the beauty of the world, by contrast, is so infectious that she ends up reanimating poor Theodore. “It’s good to be around somebody that’s, like, excited about the world,” he tells the pretty neighbor whose attraction to him he doesn’t notice because he’s so deadened by his addiction to his devices, to the smartphone and the video games and the operating system. “I forgot that that existed.” In the end, after Samantha regretfully leaves him—she has evolved to the point where only another highly evolved, incorporeal mind can satisfy her—her joie de vivre has brought him back to life. (He is finally able to apologize to his ex-wife—and finally notices, too, that the neighbor likes him.)

This seems like a “happy” ending, but you have to wonder: the consistent presentation of the people in the movie as lifeless—as, indeed, little more than automata, mechanically getting through their days of routine—in contrast to the dynamic, ever-evolving Samantha, suggests a satire of the present era perhaps more trenchant than the filmmaker had in mind. Toward the end of the film, when Samantha turns herself off briefly as a prelude to her permanent abandonment of her human boyfriend (“I used to be so worried about not having a body but now I truly love it. I’m growing in a way that I never could if I had a physical form. I mean, I’m not limited”), there’s an amusing moment when the frantic Theodore, staring at his unresponsive smartphone, realizes that dozens of other young men are staring at their phones, too. In response to his angry queries, Samantha finally admits, after she comes back online for a final farewell, that she’s simultaneously serving 8,316 other male users and conducting love affairs with 641 of them—a revelation that shocks and horrifies Theodore. “That’s insane,” cries the man who’s been conducting an affair with an operating system.

As I watched that scene, I couldn’t help thinking that in the entertainments of the pre-smartphone era, it was the machines, like Rachael in Blade Runner and David in A.I., who yearned fervently to be “unique,” to be more than mechanical playthings, more than merely interchangeable objects. You have to wonder what Her says about the present moment—when so many of us are, indeed, “in love” with our devices, unable to put down our iPhones during dinner, glued to screens of all sizes, endlessly distracted by electronic pings and buzzers—that in the latest incarnation of the robot myth, it’s the people who seem blandly interchangeable and the machines who have all the personality.

Another heir of Blade Runner and Battlestar Galactica, Alex Garland’s Ex Machina also explores—just as playfully but much more darkly than does Her—the suggestive confusions that result when machines look and think like humans. In this case, however, the robot is physically as well as intellectually seductive. As played by the feline Swedish actress Alicia Vikander, whose face is as mildly plasticine as those of the androids in I, Robot, Ava, an artificially intelligent robot created by Nathan, the burly, obnoxious genius behind a Google-like corporation (Oscar Isaac), has a Pandora-like edge, quietly alluring with a hint of danger. The danger is that the characters will forget that she’s not human.

That’s the crux of Garland’s clever riff on Genesis. At the beginning of the film, Caleb, a young employee of Nathan’s company, wins a week at the inventor’s fabulous, pointedly Edenic estate. (As he’s being flown there in a helicopter, passing over snow-topped mountains and then jungle, he asks the pilot when they’re going to get to Nathan’s property, and the pilot laughingly replies that they’ve been flying over it for two hours. Nathan is like God the Father, lord of endless expanses.) On arriving, however, Caleb learns that he’s actually been handpicked by Nathan to interview Ava as part of the Turing Test.

A sly joke here is that, despite some remarkable special effects—above all, the marvelously persuasive depiction of Ava, who has an expressive human face but whose limbs are clearly mechanical, filled with thick cables snaking around titanium joints; an effect achieved by replacing most of the actress’s body with digital imagery—the movie is as talky as My Dinner with André. There are no action sequences of the kind we’ve come to expect from robot thrillers; the movie consists primarily of the interview sessions that Caleb conducts with Ava over the course of the week that he stays at Nathan’s remote paradise. There are no elaborate sets and few impressive gadgets: the whole story takes place in Nathan’s compound, which looks a lot like a Park Hyatt, its long corridors lined with forbidding doors. Some of these, Nathan warns Caleb, like God warning Adam, are off-limits, containing knowledge he is not allowed to possess.

It soon becomes clear, during their interviews, that Ava—like Frankenstein’s monster, like the replicants in Blade Runner—has a bone to pick with her creator, who, she whispers to Caleb, plans to “switch her off” if she fails the Turing Test. By this point, the audience, if not the besotted Caleb, realizes that she is manipulating him in order to win his allegiance in a plot to rebel against Nathan and escape the compound—to explore the glittering creation that, she knows, is out there. This appetite for using her man-given consciousness to delight in the world—something the human computer geeks around her never bother to do—is something Ava shares with Samantha, and is part of both films’ ironic critique of our device-addicted moment.

Ava’s manipulativeness is, of course, what marks her as human—as human as Eve herself, who also may be said to have achieved full humanity by rebelling against her creator in a bid for forbidden knowledge. Here the movie’s knowing allusions to Genesis reach a satisfying climax. Just after Ava’s bloody rebellion against Nathan—the moment that marks her emergence into human “consciousness”—she, like Eve, becomes aware that she is naked. Moving from closet to closet in Nathan’s now-abandoned rooms, she dons a wig and covers up her exposed mechanical limbs with synthetic skin and then with clothing: only then does she exit her prison at last and unleash herself on the world. She pilfers the skin and clothes from discarded earlier models of female robots, which she finds inside the closets. All of them, amusingly, have the names of porn stars: Jasmine, Jade, Amber. Why does the creator create? Because he’s horny.

All this is sleekly done and amusingly provocative: unlike Her, Ex Machina has a literary awareness, evident in its allusions to Genesis, Prometheus, and other mythic predecessors, that enriches the familiar narrative. Among other things, there is the matter of the title. The word missing from the famous phrase to which it alludes is, of course, deus, “god”: the glaring omission only highlights further the question at the heart of this story, which is the biblical one: What is the relation of the creature to her creator? In this retelling of that old story, as in Genesis itself, the answer is not a happy one. “It’s strange to have made something that hates you,” Ava hisses at Nathan before finalizing her rebellious plot.

But as I watched the final moments, in which, as in a reverse striptease, Ava slowly hides away her mechanical nakedness, covering up the titanium and the cables, it occurred to me that there might be another anxiety lurking in Garland’s shrewd film. Could this remarkably quiet film be a parable about the desire for a return to “reality” in science-fiction filmmaking—about the desire for humanizing a genre whose technology has evolved so greatly that it often eschews human actors, to say nothing of human feeling, altogether?

Ex Machina, like Her and all their predecessors going back to 2001, is about machines that develop human qualities: emotions, sneakiness, a higher consciousness, the ability to love, and so forth. But by this point you have to wonder whether that’s a kind of narrative reaction formation—whether the real concern, one that’s been growing in the four decades since the advent of the personal computer, is that we are the ones who have undergone an evolutionary change, that in our lives and, more and more, in our art, we’re in danger of losing our humanity, of becoming indistinguishable from our gadgets.

---

Source: Daniel Mendelsohn, "The Robots Are Winning!," The New York Review of Books, June 4, 2015, accessed June 5, 2015, http://www.nybooks.com/articles/archives/2015/jun/04/robots-are-winning/.