We’re all aware of the current hysteria around “AI” (which resolutely continues to fail to exist), robotics, etc. Along those lines, I’ve just been reading a serious discussion about how to make education “robot-proof.” It involves an interlocking set of supposedly “new” skills, having to do with peer interaction, data analysis, etc., all adding up to something called “humanics” (yes, really)–which, as far as I can tell, is pretty much ideally designed to be mastered by robots.

The thing is: We already know what kind of learning is robot-proof. It is, as Gadamer would say, seeing what is questionable–or perhaps, in a more literal translation of fragwürdig, question-worthy, worthy of becoming an occasion for posing a question. Broaden this hermeneutic insight just slightly, and you get the utterly ordinary and yet totally profound challenge of (a) noticing what’s interesting and (b) trying to say why it’s interesting.

This is what I do, in the literary classroom, all the way down to the first-year level. In fact, especially there. I give my 200 or so 18-yr-olds, their heads buzzing with Civ 6 and XBox N, their mouths full of clichés about machine learning and Big Dayda, a poem. And I tell them to tell me something interesting about it.

THEY HATE THIS. But they kind of like it too. And for good reason, on both scores.

On the territory of their own hermeneutic consciousness, there are no robots–nor have there ever been, nor will there ever be.