a recognition

This morning, my little boy, 4, asked me:

“What does ‘unconscious’ mean?” We were driving up the sunlit road toward his daycare.

I said, glancing back at him: “It means like when you’re asleep. That’s called being unconscious.”

He thought about that for a minute. Then he said, “I think when I die, I’ll be unconscious. I’ll be asleep. But you can’t breathe when you die,” he went on. “But we have to breathe.”

Driving through the sunshine, I said the kind of thing I always say when my children bring up the topic of death. “Little friend,” I said slowly, “you don’t have to worry about that — that’s not going to happen for a very, very long time. And no matter what,” I went on, “I am always always always always going to love you!”

He said, “Your voice sounds funny.”

Question: When I tell my children that I will love them forever, am I lying?

I do not believe that I am.

This is why, in the last analysis, I am respectful of people of faith.

I know nothing more than they do; and they have chosen to live within the validation of a moral intuition that seems to me transcendent.

Also: its objects — such as eternal life — may not be real.

But longing is real; anguish is real; love is real.

the safe side

Bin thinkin again about dialogue – a big theoretical topic for me. My thinking about it is based on my reading of Hans-Georg Gadamer, for whom the ordinary business of talking with another person is the venue, and the model, for any understanding of anything. The significance of dialogue is primarily epistemological, not civic or ethical – except insofar as Gadamer’s insights tend to run counter to soft-left cant. There is no such thing, according to Gadamer, as seeing a given question from somebody else’s point of view. If there were, we wouldn’t need to talk to others at all. By the same token, there is no dialogic imperative to temper or suppress your own views, in the name of politeness or respect. For the only way your interlocutor can gain possible access to your perspective is if you try to express it, clearly and fully.

Indeed, Gadamer argues that we do not successfully show respect by holding ourselves back in conversation – as though supposing that the other is too weak for us, or that we know all about him already. Quite the contrary: the true dialogic attitude is an attempt to maintain complete openness, which extends to our interlocutor precisely because it starts with ourselves. The goal, meanwhile, is not to obtain or maintain good relations, but to understand something – what the dialogue is about, its subject-matter. For the other’s view of the subject-matter is the indispensable confirmation, or disconfirmation, of our own.

Thus Gadamer’s philosophy of conversation is Socratic, and, in a sense, selfish. Conversation is the interactive game that we must play if we want to know. As in any game (contra Derrida), it is normative to try to play well. Finally, the game cannot even get going without a kickoff – which, as Gadamer puns in German, amounts to giving offense (Anstoss). Conversation is a space of risk, or it ain’t conversation at all.

So I have thought, for the last ten years or so. As a matter of fact, Gadamer’s view of conversation has seemed to me so compelling that I have not really even granted the possibility of a validly countervailing theory. Obviously, this is an attitude of meta-dialogic complacency, itself standing in need of an offensive shock. And perhaps there is another, opposing, yet possibly correct way of theorizing the very nature of conversation; which has the capacity to enrich thinking about it, without displacing the Gadamerian view.

This possibility was opened up for me, as often happens, by a student. The student was very bright and capable, but not especially keen on my teaching – you can always tell – which she seemed to find questionable or dubious or troubling. I am always fascinated by students who aren’t buying what I’m selling, in part because they frustrate me (I’m not going to lie about that), but also in part because I take it for granted that they may have something to teach me. Anyway, this student – let’s call her Leah – was a member of an upper-level, theoretically-inflected seminar I was teaching. The students in the group were all working on projects they had formulated themselves, with the help of different faculty supervisors in the English Department. The literary profession being what it is, few of these students had projects that we might call transitive to the world. Rather, their projects were reflexive to the profession. Their goal, in other words, was to write a certain kind of text – a clever essay; not to figure out, or even identify, a problem with a certain subject-matter.

As usual, in this sort of situation, my pedagogic approach was to ask questions – questions about questions, if we want to be cute about it. I pressed my students to try to tell me what they were fundamentally after; what they were trying to ask; and about what; and why. Some students responded well, and I felt that our conversations were useful for them. Leah, by contrast, clammed up. I could see that she was smart, and that she had something she wanted to push back at me. But I could also see that if I asked her what it was, she would find my inquiry aggressive and back off even farther. So I left her to herself, and kept up my hermeneutic pressure on her classmates. Leah just sat there, for weeks, dutifully and rather grumpily, poking at her laptop, looking at her nails, and rolling her eyes.

Until one day she put up her hand and said: “I guess I have some problems with the approach we’re taking here. You’re always asking us to explain why we’re doing what we’re doing, and what the theories we’re working with actually achieve. Why do they have to achieve anything? I mean, we learn these theories, and they apply to texts in certain ways, and once we know the theories we can apply them to the texts. We can do the kinds of readings that the theories let us do. You always seem to be asking us what we’re finding out about from our texts, but I’m not really sure we need to be finding anything out. We’re not like researchers in biology or computer science or anything. We’re just producing readings.”

Leah’s rant had been worth waiting for. It amounted to an indictment of contemporary literary education that was all the more damning for being offered as a defence. I had been teaching, as she correctly perceived, the standing need to avoid dialectical vacancy in literary-critical practice. Leah took that claim and responded, positively, that dialectical vacancy – not being about anything – was the point of literary-critical practice. The literary classroom, in her view, was not a place where subject-matters were opened up through the asking of questions; but rather, a place where subject-matters were kept at bay by the reiteration of answers. This was “producing readings”: the interminable application of unfalsifiable theories to incidental texts with indeterminate results. This was what Leah felt she had been taught, in the four years of her B.A. It was conversation as finger-painting. I would not have thought that the antithesis of my own position could have been stated so baldly.

I told Leah that she had certainly sketched a different view of inquiry from my own. That was true, but lame. And here is the beginning of the point – or the asking of the question. I didn’t tell Leah exactly what I thought. Why not? She had certainly done her best to hit me with a rocket – which, in my Gadamerian view, is consistent with the way in which the conversational game ought to be played. In that sense, Leah’s defence of dialectical vacancy was self-cancelling – a good starting-point for refutation. But I didn’t offer one. Why not?

I suppose because, as Leah’s teacher, I wanted to encourage her to stay in the conversation. I wanted her to feel safe there. This, perhaps, indicates a confound to the Gadamerian dialectic. True, I had left Leah alone for all those weeks because I did not want to presume that I knew what she was thinking, or how her end of the conversation ought to be managed. It was part of my own exposure – my own risk, as it were – not to claim the right to compel her participation. And that refusal of compulsion extended to holding open her retreat, and even to making her feel that she did not need to take it. In other words, I could do a work-around on my tactical reticence to make it consistent with the Gadamerian dialectic of openness and fullness. But this would be evasive. Falsifying the conversation is falsifying the conversation – especially if it is done for the sake of the conversation. I held back on the offense I could have given in response to Leah, precisely in order to hold open the possibility that the conversation might continue to be productive. And I felt, and still feel, that this was optimal dialogic procedure. And so the confound: conversation as a space of safety, perhaps, is prerequisite to conversation as a space of risk.

What do we do with this insight (if it is one)? I’m not sure. But I am grateful to my student for helping me to see the complacency in my own relative comfort with conversational risk.


there too bits

About the nicest thing I can say about the papal conclave is that it provides an excellent opportunity to explain the fundamental technology of the computer age: the binary digit, or bit.

Information, in its computing-science sense, is a measure of the burden that any given message places on a communication system. It is a way to determine mathematically how much “bandwidth” will be necessary to the successful transmission of a given message; how statistically likely it is that a given transmission will be vitiated by “noise.” Informational engineers begin to answer these questions by considering any given message as selected from a finite set of possible messages. The larger the set, the more information is “produced” when one message is selected from within it.

So, for example: the amount of information in a message consisting of a single Chinese character is a function of how many Chinese characters there are for potential selection – about 50,000, in the very largest dictionaries. By comparison, the amount of information in a single letter of the alphabet (say, the letter “b”) is a function of how many letters there are for potential selection: twenty-six, in the modern version of the alphabet. Therefore, and as a starting-place for what Claude Shannon originally called the mathematical theory of communication, we can very simply and broadly say that there is a lot “more information” in a single selection of a Chinese character than there is in a single selection of a letter of the alphabet.

But exactly how much more? And more of what, exactly? There needs to be a discrete and iterable unit of information – an informational digit – for communication to become computable. The interesting problem here is that the informational digit can itself only be a function of message-selection from a set. Thus Shannon, in his seminal paper of 1948, refers to the “ten places of a desktop adding-machine,” and explains that selection of one place from among those will produce exactly one “decimal digit” – perhaps we could call that a decit – of information. By the same logic, selection of one letter from the 26 of the alphabet will produce exactly one alphabetic digit of information – let’s call that an alphit. Finally, selection of one character from all the 50,000 will produce one Chinese-character digit of information – perhaps we would have to call that, with apologies, a charit. We want to say that a charit is bigger than an alphit; an alphit than a decit. But to put the comparison in terms of any of these digits is impossible, because comparing them is exactly what we do not yet know how to do.

What is worse, the number of possible sets of possible messages is itself, presumably, infinite. Any selection from any such set will produce a unique informational digit. Therefore, the number of possible unique informational digits will, in turn, be infinite; and we will no more be able to quantify meaningfully between any of them than we can between alphits and decits and charits.

Information theory solves this problem by logically determining the characteristic or necessary informational digit, in terms of the theory itself. This is the decisive and stipulative step that opens the way to the information age. Information, by definition, is selection from a set; therefore, the base unit of information must follow from the base set of selection as such. This, fairly obviously, is a set of exactly and only two possible messages: one way or the other, yes or no, on or off, 1 or 0. The binary digit, or bit, becomes the informational unit.
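The stipulation can be made concrete. Once the bit is the unit, the information produced by selecting one message from a set of N equally likely possibilities is log₂(N) bits. A minimal sketch (the function name is mine, not Shannon’s):

```python
import math

def information_in_bits(set_size: int) -> float:
    """Information, in bits, produced by selecting one message
    from a set of `set_size` equally likely possible messages."""
    return math.log2(set_size)

# The base case: a set of exactly and only two possible messages
# (yes/no, on/off, 1/0) yields exactly one bit.
print(information_in_bits(2))    # 1.0
print(information_in_bits(256))  # 8.0
```

The two-message set is the base case precisely because it is the smallest set from which a selection is possible at all.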

Shannon invokes it on the very first page of his paper, and uses it to explain how much information there is in a decimal digit: about 3.33 bits. True, Shannon’s immediate point is to show how easy it will be to convert between different logarithmic bases for information. But as we have seen, the conversion will be meaningless or impossible if there is no unit in which to express it; and it is not accidental that the informational digit comes to be standardized as binary. It is sometimes thought that the primacy of the bit, in modern information theory, is a function of the basic (very basic) physical structure of computers. Actually, it is the other way around: the very basic physical structure of computers is a function of the primacy of the bit in modern information theory.
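With the bit as the common unit, the rival “digits” above become commensurable after all. A sketch of the conversions (the decit/alphit/charit names come from this essay, not from standard terminology):

```python
import math

# Bits per "decit": one selection from the ten places of an adding machine.
decit = math.log2(10)       # about 3.32 bits
# Bits per "alphit": one selection from the 26 letters of the alphabet.
alphit = math.log2(26)      # about 4.70 bits
# Bits per "charit": one selection from roughly 50,000 Chinese characters.
charit = math.log2(50_000)  # about 15.61 bits

print(f"decit = {decit:.2f}, alphit = {alphit:.2f}, charit = {charit:.2f}")
```

So a charit carries roughly three and a third times the information of an alphit, and nearly five times that of a decit: the comparison that was impossible before the bit.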

Now each bit is basically a switch. Two possible linked settings, only one of which can be selected at a time, constitute one bit of information. Turning that around, we see that a one-bit system (one switch) can handle messages selected from within sets of two. A two-bit system (two switches) can handle messages selected from within sets of four. A three-bit system, from within sets of eight. And so on. Each time we add a bit, a switch, to the system, we double the set of potential messages from within which selected messages can be handled by that system. By the time we get to an eight-bit system – conventionally, one byte – we already have the capacity to transmit messages selected from within sets of 256 (= 2⁸) possible messages. The massive capacity of modern computer systems is a remarkable, yet linear, expansion along the same lines (with the help of Boolean algebra, which allows the construction of logic gates). Each gigabyte of capacity multiplies our eight-bit system approximately one billion times. An extraordinary technical achievement. Yet all it means (if one can dare to put it that way) is that we have a system consisting of approximately eight billion switches.

Anyway, the notification mechanism of the papal conclave is a very nice example of a two-bit system. The bits are: smoke | no smoke, white smoke | black smoke. These allow the transmission of any one of the following four possible messages: (1) a vote has been concluded (2) a vote has not been concluded (3) a pope has been selected (4) a pope has not been selected. That’s information.
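The conclave’s four messages can be enumerated mechanically from its two switches, a sketch under the simplifying assumption that the two bits are read independently (in practice, of course, the colour bit only signifies when there is smoke):

```python
from itertools import product

# Bit 1: smoke | no smoke.  Bit 2: white | black.
signals = list(product(["smoke", "no smoke"], ["white", "black"]))

# A two-bit system distinguishes exactly four possible messages.
assert len(signals) == 4
for bit1, bit2 in signals:
    print(bit1, "/", bit2)
```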

As Dan Savage might put it: thanks, bundles of sticks.

habent, non habemus, papam

Why do the world’s media, including the Canadian national news broadcasts, have to stake out the papal conclave? Why must they wait and watch with such bright enthusiasm for the new Pope to emerge, as if covering some kind of super-fun Groundhog Day? WHY THE FUCK, after all that has happened, do we still treat the Roman Catholic Church as some kind of neutral, universal focus of human interest and enthusiasm? I for one would like to see its assets seized and its criminal leadership hounded into prison. I think THAT would be an appropriate response, or beginning of a response, to the horrific, predatory practices and traditions that have come to light in recent years — and we can all be quite sure that the vaster part of the iceberg remains hidden.