Thanks, Mr. Jobs

An idea I have been kicking around for several years is that technology legitimately enriches our experience precisely by rendering itself aggressively normative. The absence or denial of the technology then becomes special, in a way that simply could not have obtained before.

So, for example: recorded music yields live performance. Electric light yields candlelight. An umbrella yields just walking bareheaded in the rain. And so on.

And, as I realized this evening on the bus: constantly and obsessively poking at data-enabled smartphones yields — just quietly doing nothing! For quite some time!

I mean, as a choice and pleasure!

Phenomenological question for the day

Will robots ever be able to lose — misplace — things?

I suspect that the answer is “no.”

But then: will robots ever really be able to find things?

Isn’t it precisely the lost thing that we find?

Isn’t it the case that losing is a capability, rather than a program dysfunction?

And that on this capability, the countervailing capability of finding is predicated?

(Alternative post title: After making and eating a mediocre sandwich I am unable for some time to find the mayonnaise lid until suddenly seeing it where I had placed it, upside down, in dim light on an inverted bowl of the same colour.)

The AV experience

There’s a roboticist at Columbia called Hod Lipson who has literally written the book on autonomous vehicles (AVs). Recently I came across an interview with him from a couple of years ago.

“No human driver,” Lipson states, “can have more than one lifetime of driving experience. A car that is part of an AI network can, within a year, have a thousand lifetimes of experience … these cars will drive better than any human has ever driven. They will have experienced every possible situation.”

Question: can a car experience?

Clearly not. Experience is the form of our encounter with the world. Cars don’t have a world to encounter. They do not have experiences.

By the same token: cars can’t become experienced. The situations they traverse don’t make them better cars. They just make them old.

Lipson might respond: “ok, sure, but it’s not just the car. It’s the computer-in-the-car.”

Can a computer experience?

Again, clearly not, and for exactly the same reasons.

Supposing that a combination of non-experiencing beings will lead to an experiencing being is like supposing that a top hat on a snowman will make him dance and sing.

It seems to me that what’s at stake here is the fundamental question of how, or whether, an AI system can replicate, or mimic, our encounter with the world.

Lipson suggests that cars can have experiences because he hasn’t even thought about it.

I think that’s pretty dumb.


No, robots will never win the World Cup

A tidy little BBC reiteration of the AI/robotics first-step fallacy, identified as such by the late Hubert Dreyfus in 1972. And he was able to call it then, as I have argued in The Mirror of Information, because it was already old news at that time — going back to the very origins of modern computing! No, geeks, having taken the first step does not mean that the rest will, or even can, follow! Otherwise, as Dreyfus put it, the first person to climb a tree could claim to have made tangible progress toward reaching the moon!

The continuity and consistency of this fantasy technological discourse, since the early 1950s, are absolutely astonishing. Revealing, of course, a phenomenological commitment — one having nothing directly to do with technological boundary conditions at all.

where everybody knows your name who cares

One goal of Google and the other IT hegemons, with their wearable interfaces and online glasses and enhanced realities and data-rich maps, is to make it so we are no longer alien anywhere. Thus Michael Jones of Google Maps, interviewed in The Atlantic recently, millennially and yet banally asserted that in the future, your phone or glasses or whatever will just automatically give you directions, no matter where you are, also enriching your every moment with information targeted to your exact spot and behaviorally-determined preferences — e.g. around which corner is to be found a great bar, what to order, who else may be there, etc. Jones: “It’ll be like you’re a local everywhere you go.” Around every corner, be it in Beijing or Toronto or Timbuktu, you can find your Cheers.

But clearly: to be a local everywhere is to be a local nowhere. It is to lose all contact with the very notion of locality. The logic here is like the logic of being at home. Suppose that you are in your house. What are you doing? Well, among other things, you’re being at home. Now you look out the window, and see your neighbour inside his house. He is doing what you are doing: you both are being at home. But where he is at home, you would not be at home. Where you are at home, he would not be at home. Only insofar as there is not-being-at-home is there being-at-home. Only insofar as it is possible not to be a local does it matter in the slightest to be a local.

So the Google vision, which seems so light and wondrous, may be quite the opposite.

But nobody notices.

epistemological thought for the day

The very idea of a database seems to me epistemologically questionable. For the idea rests on — or, perhaps, projects — a supposition of data as finite. For if data is infinite, then the idea of a collection of some data, presumably, is useless and arbitrary. It would be like having a collection of some numbers (assuming numbers to be infinite). What’s the point? You might as well have just one — the one you are working with, when you’re trying to work out some problem. For one number, set against infinity, is all you can ever have.
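To put the arithmetic of that analogy crudely — a toy sketch of my own, with nothing to do with any actual database system — against an infinite domain, a collection of one item and a collection of a trillion capture exactly the same fraction of the whole:

```python
# A toy sketch, not any real system: measured against an infinite
# domain, every finite collection covers the same fraction -- zero.
def coverage(collection_size: int) -> float:
    """Fraction of an infinite domain captured by a finite collection."""
    return collection_size / float("inf")

print(coverage(1))       # 0.0 -- just the one number you are working with
print(coverage(10**12))  # 0.0 -- a trillion-entry "database" does no better
```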

Only if data (in the last analysis) is finite can the idea of a database make any sense. Only if data is finite can the idea of an ever-larger database make sense. Therefore, the more committed one is to the idea of a database, the more committed one becomes to the presupposition that data is finite. In this way, every database, from the Domesday Book to Google, enforces upon us an epistemological assumption. The “bigger” the database, the more powerfully enforced is the assumption.

Do we know this assumption to be correct? Can we? I don’t know (perhaps there is an answer in theoretical physics) but I doubt it. And if not, isn’t it kind of problematic for us as a culture and society to be increasingly committed to the idea of the database? Aren’t we begging the question of what knowledge, fundamentally, is, or might be?