Never detechnologize

An idea I’ve been kicking around for the last few years — and we can call it, loosely, phenomenological — is that technological intervention into a given form of life projects the relevant pre-technological category as normative or natural. The pre-tech form, we think, precedes the technologized one; and it seems like we can escape or resist the latter by turning back to the former. But in fact, I think this is wrong on both counts. The pre-tech category follows from the technological intervention. Turning back to the former merely, and even pitilessly, re-asserts and re-enforces the latter.

Take the example of “live music.” We may revere and treasure this, as the pre-tech form of recorded music. And we may suppose that we are stepping outside the somewhat dehumanizing space of modernity when we go to take in some live music. But clearly: the very idea of “live” music totally depends on its recorded analog. Prior to recording, live is just what music is. Therefore, every time we talk up “live” music, as something special or pre-technological, we are proclaiming our allegiance to the technological intervention — recording — that allows the “pre-tech” form to be there.

Or consider the concept — beloved by lit profs — of orality. That is, spoken language, prior to, or outside, its written form. Anthropologically, it stands to reason (sorry, Derrida) that human beings spoke before they wrote. Accordingly, we get a phenomenological thrill when we turn back to, or feel like we can turn back to, oral literatures: In Homer, or in the West African bards, or in some of the pre-contact cultures of the Americas. But it is exactly like the point about live and recorded music: Only when there is literacy is there such a thing as orality. Until and unless the written word confronts the spoken one, spoken is just what a word is. Talking up “orality” does not take us one single step outside the circle of technological power that is literacy. Quite the contrary.

This is not to say that we have no reason to want phenomenological liberation. We have every reason to want that. It is to say, rather, that we are not liberated, in any field or form of life — literary, cultural, civic, or political — by fetishization of what we take to be pre-tech categories. For the latter are projected by, and lead back to, the very technology that is in question.

Where does this go? Lots of places, I think. But all will be governed by versions of the same insight. The way to liberate our consciousness is not to detechnologize. The way to liberate our consciousness is not to care.

 

Thanks, Mr. Jobs

An idea I have been kicking around for several years is that technology legitimately enriches our experience precisely by rendering itself aggressively normative. The absence or denial of the technology then becomes special, in a way that simply could not have obtained before.

So, for example: recorded music yields live. Electric light yields candle-lit. An umbrella yields just walking bareheaded in the rain. And so on.

And, as I realized this evening on the bus: constantly and obsessively poking at data-enabled smartphones yields – just quietly doing nothing! For quite some time!

I mean, as a choice and pleasure!

Phenomenological question for the day

Will robots ever be able to lose — misplace — things?

I suspect that the answer is “no.”

But then: will robots ever really be able to find things?

Isn’t it precisely the lost thing that we find?

Isn’t it the case that losing is a capability, rather than a program dysfunction?

And that on this capability, the countervailing capability of finding is predicated?
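
To put it in a programmer’s idiom (a toy sketch of my own, nobody’s actual robotics code): for a program, the location of a thing is either stored or it isn’t. Every query ends in retrieval or in clean absence.

    # A toy location store, purely illustrative. Nothing in it can be misplaced:
    # every lookup is either a retrieval or a clean absence. There is no third
    # state in which the thing sits somewhere, unnoticed, waiting to be found.
    locations = {"mayonnaise_lid": "upside down, on an inverted bowl, in dim light"}

    def find(thing):
        # Present key: retrieval. Missing key: absence. Never loss.
        return locations.get(thing, "no record")

    print(find("mayonnaise_lid"))  # retrieval, not discovery
    print(find("car_keys"))        # absence, not loss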

(Alternative post title: After making and eating a mediocre sandwich I am unable for some time to find the mayonnaise lid until suddenly seeing it where I had placed it, upside down, in dim light on an inverted bowl of the same colour.)

The AV experience

There’s a roboticist at Columbia called Hod Lipson who has literally written the book on autonomous vehicles (AVs). Recently I came across an interview with him from a couple of years ago.

“No human driver,” Lipson states, “can have more than one lifetime of driving experience. A car that is part of an AI network can, within a year, have a thousand lifetimes of experience … these cars will drive better than any human has ever driven. They will have experienced every possible situation.”
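
(For scale, some back-of-envelope arithmetic of my own; every figure below is an illustrative assumption, not Lipson’s.)

    # Back-of-envelope check on the "thousand lifetimes" claim.
    # All figures are my own illustrative assumptions, not Lipson's.
    hours_per_lifetime = 50 * 300       # ~50 driving years at ~300 hours/year
    fleet_size = 30_000                 # networked cars pooling their data
    hours_per_car_per_year = 1.5 * 365  # ~1.5 hours on the road per day

    lifetimes_per_year = fleet_size * hours_per_car_per_year / hours_per_lifetime
    print(lifetimes_per_year)  # ~1095: a thousand "lifetimes" in one year

On those assumptions, a modest fleet gets Lipson his thousand lifetimes. The arithmetic is not the problem.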

Question: can a car experience?

Clearly not. Experience is the form of our encounter with the world. Cars don’t have a world to encounter. They do not have experiences.

By the same token: cars can’t become experienced. The situations they traverse don’t make them better cars. They just make them old.

Lipson might respond: “ok, sure, but it’s not just the car. It’s the computer-in-the-car.”

Can a computer experience?

Again, clearly not, and for exactly the same reasons.

Supposing that a combination of non-experiencing beings will lead to an experiencing being is like supposing that a top hat on a snowman will make him dance and sing.

It seems to me that what’s at stake here is the fundamental question of how, or whether, an AI system can replicate, or mimic, our encounter with the world.

Lipson suggests that cars can have experiences because he hasn’t even thought about what experience is.

I think that’s pretty dumb.

 

No, robots will never win the World Cup

A tidy little BBC reiteration of the AI/robotics first-step fallacy, identified as such by the late Hubert Dreyfus in 1972. And he was able to call it then, as I have argued in The Mirror of Information, because it was already old news at that time, going back to the very origins of modern computing! No, geeks, having taken the first step does not mean that the rest will, or even can, follow! Otherwise, as Dreyfus put it, the first person to climb a tree could claim to have made tangible progress toward reaching the moon!

The continuity and consistency of this fantasy technological discourse, since the early 1950s, are absolutely astonishing. Revealing, of course, a phenomenological commitment, one having nothing directly to do with technological boundary-conditions at all.

where everybody knows your name who cares

One goal of Google and the other IT hegemons, with their wearable interfaces and online glasses and enhanced realities and data-rich maps, is to make it so we are no longer alien anywhere. Thus Michael Jones of Google Maps, interviewed in The Atlantic recently, millennially and yet banally asserted that in the future, your phone or glasses or whatever will just automatically give you directions, no matter where you are, also enriching your every moment with information targeted to your exact spot and behaviorally determined preferences — e.g. around which corner is to be found a great bar, what to order, who else may be there, etc. Jones: “It’ll be like you’re a local everywhere you go.” Around every corner, be it in Beijing or Toronto or Timbuktu, you can find your Cheers.

But clearly: to be a local everywhere is to be a local nowhere. It is to lose all contact with the very notion of locality. The logic here is like the logic of being at home. Suppose that you are in your house. What are you doing? Well, among other things, you’re being at home. Now you look out the window, and see your neighbour inside his house. He is doing what you are doing: you both are being at home. But where he is at home, you would not be at home. Where you are at home, he would not be at home. Only insofar as there is not-being-at-home is there being-at-home. Only insofar as it is possible not to be a local does it matter in the slightest to be a local.

So the Google vision, which seems so light and wondrous, may be quite the opposite.

But nobody notices.

epistemological thought for the day

The very idea of a database seems to me epistemologically questionable. For the idea rests on — or, perhaps, projects — a supposition of data as finite. For if data is infinite, then the idea of a collection of some data, presumably, is useless and stochastic. It would be like having a collection of some numbers (assuming numbers to be infinite). What’s the point? You might as well have just one — the one you are working with, when you’re trying to work out some problem. For one, compared to infinity, is all you can ever have.
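
(A toy sketch of the numbers analogy, mine and purely illustrative: even a large finite collection, drawn from a vastly larger domain, answers essentially no questions about that domain.)

    import random

    # Illustrative only: a million-item "database" over a domain of 10**12 values.
    # Its coverage is one in a million, so a random membership query almost never
    # hits. Against an infinite domain, the coverage would be exactly zero.
    DOMAIN = 10**12
    db = set(random.sample(range(DOMAIN), 1_000_000))

    queries = 100_000
    hits = sum(random.randrange(DOMAIN) in db for _ in range(queries))
    print(hits, "hits out of", queries)  # almost certainly 0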

Only if data (in the last analysis) is finite can the idea of a database make any sense. Only if data is finite can the idea of an ever-larger database make sense. Therefore, the more committed one is to the idea of a database, the more committed one becomes to the presupposition that data is finite. In this way, every database, from the Domesday Book to Google, enforces upon us an epistemological assumption. The “bigger” the database, the more powerfully enforced is the assumption.

Do we know this assumption to be correct? Can we? I don’t know (perhaps there is an answer in theoretical physics) but I doubt it. And if not, isn’t it kind of problematic for us as a culture and society to be increasingly committed to the idea of the database? Aren’t we begging the question of what knowledge, fundamentally, is, or might be?

critical-phenomenological thought for the day

Understanding, Hans-Georg Gadamer teaches, is an event. It is an experience (Erfahrung) that we undergo: like the way a player experiences a moment in the game; or an audience member experiences the climax of a tragedy. Indeed, understanding is an experience of a very special kind, which precludes or overwhelms our front-of-mind consciousness. Gadamer points out that the player of a game, in the midst of playing it, knows in one sense “this is only a game.” Yet in another, larger, and more important sense – and this is the key point – s/he does not and cannot know that. For knowing “this is only a game” would preclude or impede effective involvement in the game. Analogously, Gadamer claims, when we are in the midst of understanding something, we know in one sense “I am currently understanding.” Yet in another, larger, and more important sense, we do not and cannot know that. For knowing, in a front-of-mind way, “I am currently understanding” would preclude or impede our being fully involved – lost, for the moment – in the understanding. And getting lost in this way is precisely part and parcel of the kind of experience that understanding is.

Now literary criticism, let’s say, is the study of texts as texts. Understanding is the fulfillment of any text. Therefore, literary criticism includes the study of understanding. (This means, for those who follow this sort of thing, that criticism subsumes hermeneutics.) To study anything is to try to understand it. Therefore, criticism has as one of its goals to understand understanding.

But if understanding is an experience, along the lines already described, then understanding understanding can only mean (1) trying to understand this very special experience and (2) doing so precisely by trying to have this experience. For there would appear to be no other way to do it. The literary-critical classroom, unlike the classrooms of other disciplines, where this or that object is examined, will be a classroom in which the experience of understanding itself is provoked, and for its own sake. The literary text, moreover, unlike texts of other kinds, will not try to present this or that object, but will try to make available the experience of understanding, just as such. Studying such a text, in such a critical mode, will be, if not the only, then probably the best, way to understand understanding.

So, like, that’s why we do it.
