Phenomenological question for the day

Will robots ever be able to lose — misplace — things?

I suspect that the answer is “no.”

But then: will robots ever really be able to find things?

Isn’t it precisely the lost thing that we find?

Isn’t it the case that losing is a capability, rather than a program dysfunction?

And that on this capability, the countervailing capability of finding is predicated?

(Alternative post title: After making and eating a mediocre sandwich I am unable for some time to find the mayonnaise lid until suddenly seeing it where I had placed it, upside down, in dim light on an inverted bowl of the same colour.)

The AV experience

There’s a roboticist at Columbia called Hod Lipson who has literally written the book on autonomous vehicles (AVs). Recently I came across an interview with him from a couple of years ago.

“No human driver,” Lipson states, “can have more than one lifetime of driving experience. A car that is part of an AI network can, within a year, have a thousand lifetimes of experience … these cars will drive better than any human has ever driven. They will have experienced every possible situation.”

Question: can a car experience?

Clearly not. Experience is the form of our encounter with the world. Cars don’t have a world to encounter. They do not have experiences.

By the same token: cars can’t become experienced. The situations they traverse don’t make them better cars. They just make them old.

Lipson might respond: “ok, sure, but it’s not just the car. It’s the computer-in-the-car.”

Can a computer experience?

Again, clearly not, and for exactly the same reasons.

Supposing that a combination of non-experiencing beings will lead to an experiencing being is like supposing that a top hat on a snowman will make him dance and sing.

It seems to me that what’s at stake here is the fundamental question of how, or whether, an AI system can replicate, or mimic, our encounter with the world.

Lipson suggests that cars can have experiences because he hasn’t even thought about it.

I think that’s pretty dumb.

 

Proposal for a first-year lit course: Small Data

Big data, as we all know, is what it’s all about. As the creator of ImageNet has put it: “Data drives learning.”

Except it really, truly does not.

Consider a rock on the beach. It’s surrounded by data: from the local ecosystems, to the weather patterns on the horizon, to the stars that come out at night.

But that rock will never learn a thing. 

Data doesn’t drive learning. Learning drives data. The capacity to learn, to interpret and to understand, determines what even counts as data.

That’s where literature comes in. It’s just some marks on a page. But literature is what happens when some of those marks, strangely, start to matter.

Since very ancient times, writers have been attracted to exactly this kind of moment: when we suddenly see where the data are headed. Even the singular: a datum.

So, in this course, we will read and comment on some classic (and, mostly, very old) works of small data. Texts that do a lot with a little. Poems, lines, even single words that demand our attention. Plays and stories about the necessity of noticing, the challenge of interpreting, and the detail that changes everything.

How I stopped worrying and learned to be bored by AI discourse

I’m thinking about maps and artificial intelligence.

Not the way you think. I’m not talking about augmented reality or the return of Google Glass or geek fantasies of robot drone guides.

I’m talking about a good old-fashioned paper map.

Suppose you are lost in a strange city. An experience that ranges from disorienting to terrifying. (If you’ve had it, you know.) You are completely at the mercy of your surroundings.

Then somebody hands you a map. Let it be as crude as possible–small, detail-poor, torn. Nonetheless, you can suddenly orient yourself. Act like you know this place, to some extent. Find your way around.

What has happened here? Clearly, an encounter with information technology. An encounter, that is, with information as technology–that tool, that object, which you have attached to your being. The tool becomes, for the moment, the very vector of your being. Person-with-map is a cyborg. But that’s precisely how s/he attains the requisite functionality as a person.

And more than that. By acquiring the map, and starting to use it, you have acquired analytic abilities that were not yours before. You know where that street leads. Which way to the hospital. How to find a hotel. And so on. Mapless people, also lost in this city, can cling to you.

You have become smarter. Artificially.

Two points. One, the phenomenon of information is only ever encountered technologically–however creased, crude, or basic the tool in the encounter may be. “Information technology,” I guess, is itself becoming an old-fashioned phrase, and good riddance. It’s redundant. Technology is not all informational. But all information is technological.

And two: information technology is always artificial intelligence–again, no matter how simple or ungeeky the informational tool. This is why, I think, the horizon of the Singularity keeps receding. It’s not a transformation in our relationship to information. It is our relationship to information.

But it is we, not the map, who become AI.

Das Fahren des Anderen

I have a neighbor who always drives much too quickly down the lane behind my house. Where kids play, people walk, etc.

I think he’s an asshole.

Now, I’ve just been reading the special report in this week’s Economist about autonomous vehicles (AVs). Basically, the report says: they’re near, they steer, get used to it. One claim the Economist pushes very hard on this file is the utilitarian one. AVs are predicted to be, overall, much safer than cars. (Driven by humans. For short, cars.) For AVs will be programmed, we are told, *not to be able to do* the stupid things so many of us prefer to do when we get behind the wheel.

My question: my asshole neighbor likes to drive too fast down the lane.

He has paid very good money for a sweet German SUV to do this in.

Is he really going to be satisfied, renting or owning, with a transportation package that does *not* support the function “faster down the lane”?

On the other hand: if the AV packages supposedly coming down the pipe actually do support this function–what’s the fucking point?

 

No, robots will never win the World Cup

A tidy little BBC reiteration of the AI/robotics first-step fallacy, identified as such by the late Hubert Dreyfus in *1972*. And he was able to call it then, as I have argued in The Mirror of Information, because it was already old news at that time–going back to the very origins of modern computing! No, geeks, having taken the first step does not mean that the rest will, or even can, follow! Otherwise, as Dreyfus put it, the first person to climb a tree could claim to have made tangible progress toward reaching the moon!

The continuity and consistency of this fantasy technological discourse, since the early 1950s, are absolutely astonishing. Revealing, of course, a phenomenological commitment, one that has nothing directly to do with technological boundary-conditions at all.

QED

We’re all aware of the current hysteria around “AI” (which resolutely continues to fail to exist), robotics, etc. Along those lines, I’ve just been reading a serious discussion about how to make education “robot-proof.” It involves an interlocking set of supposedly “new” skills, having to do with peer interaction, data-analysis, etc., all adding up to something called “humanics” (yes, really)–which, as far as I can tell, is pretty much ideally designed to be mastered by robots.

The thing is: We already know what kind of learning is robot-proof. It is, as Gadamer would say, seeing what is questionable–or perhaps, in a more literal translation of fragwürdig, question-worthy, worthy of becoming an occasion for posing a question. Broaden this hermeneutic insight just slightly, and you get the utterly ordinary and yet totally profound challenge of (a) noticing what’s interesting and (b) trying to say why it’s interesting.

This is what I do, in the literary classroom, all the way down to the first-year level. In fact, especially there. I give my 200 or so 18-yr-olds, their heads buzzing with Civ 6 and XBox N, their mouths full of clichés about machine learning and Big Dayda, a poem. And I tell them to tell me something interesting about it.

THEY HATE THIS. But they kind of like it too. And for good reason, on both scores.

On the territory of their own hermeneutic consciousness, there neither are, have been, nor ever will be, any robots.
