How I stopped worrying and learned to be bored by AI discourse

I’m thinking about maps and artificial intelligence.

Not the way you think. I’m not talking about augmented reality or the return of Google Glass or geek fantasies of robot drone guides.

I’m talking about a good old-fashioned paper map.

Suppose you are lost in a strange city. An experience that ranges from disorienting to terrifying. (If you’ve had it, you know.) You are completely at the mercy of your surroundings.

Then somebody hands you a map. Let it be as crude as possible–small, detail-poor, torn. Nonetheless, you can suddenly orient yourself. Act like you know this place, to some extent. Find your way around.

What has happened here? Clearly, an encounter with information technology. An encounter, that is, with information as technology–that tool, that object, which you have attached to your being. The tool becomes, for the moment, the very vector of your being. Person-with-map is a cyborg. But that’s precisely how s/he attains the requisite functionality as a person.

And more than that. By acquiring the map, and starting to use it, you have acquired analytic abilities that were not yours before. You know where that street leads. Which way to the hospital. How to find a hotel. And so on. Mapless people, also lost in this city, can cling to you.

You have become smarter. Artificially.

Two points. One, the phenomenon of information is only ever encountered technologically–however creased, crude, or basic the tool in the encounter may be. “Information technology,” I guess, is itself becoming an old-fashioned phrase, and good riddance. It’s redundant. Technology is not all informational. But all information is technological.

And two: information technology is always artificial intelligence–again, no matter how simple or ungeeky the informational tool. This is why, I think, the horizon of the Singularity keeps receding. It’s not a transformation in our relationship to information. It is our relationship to information.

But it is we, not the map, who become AI.

Das Fahren des Anderen (The Driving of the Other)

I have a neighbor who always drives much too quickly down the lane behind my house. Where kids play, people walk, etc.

I think he’s an asshole.

Now, I’ve just been reading the special report in this week’s Economist about Autonomous Vehicles (AVs). Basically, the report says: they’re near, they steer, get used to it. One claim that the E pushes very hard on this file is the utilitarian one. AVs are predicted to be, overall, much safer than cars. (That is, cars driven by humans–“cars” for short.) For AVs will be programmed, we are told, *not to be able to do* the stupid things so many of us prefer when we get behind the wheel.

My question: my asshole neighbor likes to drive too fast down the lane.

He has paid very good money for a sweet German SUV to do this in.

Is he really going to be satisfied, renting or owning, with a transportation package that does *not* support the function “faster down the lane”?

On the other hand: If the AV packages supposedly coming down the pipe will actually support this function–what’s the fucking point?


No, robots will never win the World Cup

A tidy little BBC reiteration of the AI/robotics first-step fallacy, identified as such by the late Hubert Dreyfus in *1972*. And he was able to call it then, as I have argued in *The Mirror of Information*, because it was already old news at that time–going back to the very origins of modern computing! No, geeks, having taken the first step does not mean that the rest will, or even can, follow! Otherwise, as Dreyfus put it, the first person to climb a tree could claim to have made tangible progress toward reaching the moon!

The continuity and consistency of this fantasy technological discourse, since the early 1950s, are absolutely astonishing. They reveal, of course, a phenomenological commitment–one that has nothing directly to do with technological boundary-conditions at all.

QED

We’re all aware of the current hysteria around “AI” (which resolutely continues to fail to exist), robotics, etc. Along those lines, I’ve just been reading a serious discussion about how to make education “robot-proof.” It involves an interlocking set of supposedly “new” skills, having to do with peer interaction, data analysis, etc., all adding up to something called “humanics” (yes, really)–which, as far as I can tell, is pretty much ideally designed to be mastered by robots.

The thing is: We already know what kind of learning is robot-proof. It is, as Gadamer would say, seeing what is questionable–or perhaps, in a more literal translation of *fragwürdig*, question-worthy: worthy of becoming an occasion for posing a question. Broaden this hermeneutic insight just slightly, and you get the utterly ordinary and yet totally profound challenge of (a) noticing what’s interesting and (b) trying to say why it’s interesting.

This is what I do, in the literary classroom, all the way down to the first-year level. In fact, especially there. I give my 200 or so 18-year-olds, their heads buzzing with Civ 6 and Xbox N, their mouths full of clichés about machine learning and Big Dayda, a poem. And I tell them to tell me something interesting about it.

THEY HATE THIS. But they kind of like it too. And for good reason, on both scores.

On the territory of their own hermeneutic consciousness, there are not, have never been, and never will be any robots.