How I stopped worrying and learned to be bored by AI discourse

I’m thinking about maps and artificial intelligence.

Not the way you think. I’m not talking about augmented reality or the return of Google Glass or geek fantasies of robot drone guides.

I’m talking about a good old-fashioned paper map.

Suppose you are lost in a strange city. An experience that ranges from disorienting to terrifying. (If you’ve had it, you know.) You are completely at the mercy of your surroundings.

Then somebody hands you a map. Let it be as crude as possible–small, detail-poor, torn. Nonetheless, you can suddenly orient yourself. Act like you know this place, to some extent. Find your way around.

What has happened here? Clearly, an encounter with information technology. An encounter, that is, with information as technology–that tool, that object, which you have attached to your being. The tool becomes, for the moment, the very vector of your being. Person-with-map is a cyborg. But that’s precisely how s/he attains the requisite functionality as a person.

And more than that. By acquiring the map, and starting to use it, you have acquired analytic abilities that were not yours before. You know where that street leads. Which way to the hospital. How to find a hotel. And so on. Mapless people, also lost in this city, can cling to you.

You have become smarter. Artificially.

Two points. One, the phenomenon of information is only ever encountered technologically–however creased, crude, or basic the tool in the encounter may be. “Information technology,” I guess, is itself becoming an old-fashioned phrase, and good riddance. It’s redundant. Technology is not all informational. But all information is technological.

And two: information technology is always artificial intelligence–again, no matter how simple or ungeeky the informational tool. This is why, I think, the horizon of the Singularity keeps receding. It’s not a transformation in our relationship to information. It is our relationship to information.

But it is we, not the map, who become AI.

where everybody knows your name who cares

One goal of Google and the other IT hegemons, with their wearable interfaces and online glasses and enhanced realities and data-rich maps, is to make it so we are no longer alien anywhere. Thus Michael Jones of Google Maps, interviewed in The Atlantic recently, millennially and yet banally asserted that in the future, your phone or glasses or whatever will just automatically give you directions, no matter where you are, also enriching your every moment with information targeted to your exact spot and behaviorally determined preferences — e.g., around which corner a great bar is to be found, what to order, who else may be there, etc. Jones: “It’ll be like you’re a local everywhere you go.” Around every corner, be it in Beijing or Toronto or Timbuktu, you can find your Cheers.

But clearly: to be a local everywhere is to be a local nowhere. It is to lose all contact with the very notion of locality. The logic here is like the logic of being at home. Suppose that you are in your house. What are you doing? Well, among other things, you’re being at home. Now you look out the window, and see your neighbour inside his house. He is doing what you are doing: you both are being at home. But where he is at home, you would not be at home. Where you are at home, he would not be at home. Only insofar as there is not-being-at-home is there being-at-home. Only insofar as it is possible not to be a local does it matter in the slightest to be a local.

So the Google vision, which seems so light and wondrous, may be quite the opposite.

But nobody notices.

epistemological thought for the day

The very idea of a database seems to me epistemologically questionable. For the idea rests on — or, perhaps, projects — a supposition of data as finite. For if data is infinite, then the idea of a collection of some data, presumably, is useless and arbitrary. It would be like having a collection of some numbers (assuming numbers to be infinite). What’s the point? You might as well have just one — the one you are working with, when you’re trying to work out some problem. For one number, compared to infinity, is as much as you can ever have.

Only if data (in the last analysis) is finite can the idea of a database make any sense. Only if data is finite can the idea of an ever-larger database make sense. Therefore, the more committed one is to the idea of a database, the more committed one becomes to the presupposition that data is finite. In this way, every database, from the Domesday Book to Google, enforces upon us an epistemological assumption. The “bigger” the database, the more powerfully enforced is the assumption.

Do we know this assumption to be correct? Can we? I don’t know (perhaps there is an answer in theoretical physics) but I doubt it. And if not, isn’t it kind of problematic for us as a culture and society to be increasingly committed to the idea of the database? Aren’t we begging the question of what knowledge, fundamentally, is, or might be?