Phenomenological question for the day

Will robots ever be able to lose — misplace — things?

I suspect that the answer is “no.”

But then: will robots ever really be able to find things?

Isn’t it precisely the lost thing that we find?

Isn’t it the case that losing is a capability, rather than a program dysfunction?

And that on this capability, the countervailing capability of finding is predicated?

(Alternative post title: After making and eating a mediocre sandwich I am unable for some time to find the mayonnaise lid until suddenly seeing it where I had placed it, upside down, in dim light on an inverted bowl of the same colour.)

The AV experience

There’s a roboticist at Columbia called Hod Lipson who has literally written the book on autonomous vehicles (AVs). Recently I came across an interview with him from a couple of years ago.

“No human driver,” Lipson states, “can have more than one lifetime of driving experience. A car that is part of an AI network can, within a year, have a thousand lifetimes of experience … these cars will drive better than any human has ever driven. They will have experienced every possible situation.”
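(For concreteness: a minimal sketch of what that claim cashes out to, with names I’ve invented; neither Lipson nor any real AV stack is being quoted here. The “experience” is a pooled log of situations, and the “thousand lifetimes” is a division problem.)

```python
# Hypothetical sketch of the fleet-learning claim. FleetModel and
# log_situation are invented names, not any real AV stack's API.
# "Experience" here is nothing but pooled training data.

class FleetModel:
    def __init__(self):
        self.situations = []  # every logged situation from every car

    def log_situation(self, car_id: str, situation: dict) -> None:
        # A car "contributes experience" by appending a record.
        self.situations.append({"car": car_id, **situation})

    def lifetimes(self, miles_per_lifetime: float = 600_000.0) -> float:
        # The "thousand lifetimes" arithmetic: total fleet miles
        # divided by one human driver's lifetime mileage.
        total_miles = sum(s.get("miles", 0.0) for s in self.situations)
        return total_miles / miles_per_lifetime

fleet = FleetModel()
for car in range(10_000):  # ten thousand cars, one year of driving each
    fleet.log_situation(f"car-{car}", {"miles": 12_000.0})
print(fleet.lifetimes())  # -> 200.0 "lifetimes": a count, not an encounter
```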

Question: can a car experience?

Clearly not. Experience is the form of our encounter with the world. Cars don’t have a world to encounter. They do not have experiences.

By the same token: cars can’t become experienced. The situations they traverse don’t make them better cars. They just make them old.

Lipson might respond: “OK, sure, but it’s not just the car. It’s the computer-in-the-car.”

Can a computer experience?

Again, clearly not, and for exactly the same reasons.

Supposing that a combination of non-experiencing beings will add up to an experiencing being is like supposing that a top hat on a snowman will make him dance and sing.

It seems to me that what’s at stake here is the fundamental question of whether, or how, an AI system can replicate, or even mimic, our encounter with the world.

Lipson can suggest that cars have experiences only because he hasn’t thought about what an experience is.

I think that’s pretty dumb.

Das Fahren des Anderen (“The Driving of the Other”)

I have a neighbor who always drives much too fast down the lane behind my house, where kids play, people walk, etc.

I think he’s an asshole.

Now, I’ve just been reading the special report in this week’s Economist about AVs. Basically, the report says: they’re near, they steer, get used to it. One claim that the E pushes very hard on this file is the utilitarian one. AVs are predicted to be, overall, much safer than vehicles driven by humans (“cars,” for short). For AVs will be programmed, we are told, *not to be able to do* the stupid things so many of us prefer when we get behind the wheel.
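(What would “not able to do” look like in code? Something like this hypothetical sketch; the names and the limit are mine, not the Economist’s. The requested speed is clamped before it ever reaches the drivetrain, so speeding isn’t forbidden, it’s simply outside the function’s range.)

```python
# Hypothetical sketch of a hard speed clamp in an AV control loop.
# LANE_LIMIT_MPS and command_speed are illustrative names, not any
# real vendor's API. The point: the cap is structural, not a choice.

LANE_LIMIT_MPS = 4.0  # ~15 km/h, a plausible cap for a shared lane

def command_speed(requested_mps: float) -> float:
    """Return the speed actually sent to the drivetrain.

    Whatever the occupant (or planner) requests, the output can never
    exceed the lane limit: the stupid thing is outside the function's
    range, not merely discouraged.
    """
    return min(max(requested_mps, 0.0), LANE_LIMIT_MPS)

print(command_speed(16.7))  # request ~60 km/h, get 4.0 m/s back
```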

My question: my asshole neighbor likes to drive too fast down the lane.

He has paid very good money for a sweet German SUV to do this in.

Is he really going to be satisfied, renting or owning, with a transportation package that does *not* support the function “faster down the lane”?

On the other hand: if the AV packages supposedly coming down the pipe are actually going to support this function, what’s the fucking point?
