I have a neighbor who always drives much too quickly down the lane behind my house, where kids play and people walk.
I think he’s an asshole.
Now, I’ve just been reading the special report in this week’s Economist about Autonomous Vehicles (AVs). Basically, the report says: they’re near, they steer, get used to it. One claim the Economist pushes very hard on this file is the utilitarian one. AVs are predicted to be, overall, much safer than cars. (Driven by humans. For short, cars.) For AVs will be programmed, we are told, *not to be able to do* the stupid things so many of us prefer when we get behind the wheel.
My question: my Ahole neighbor likes to drive too fast down the lane.
He has paid very good money for a sweet German SUV to do this in.
Is he really going to be satisfied, renting or owning, with a transportation package that does *not* support the function “faster down the lane”?
On the other hand: if the AV packages supposedly coming down the pipe *will* actually support this function–what’s the fucking point?
A tidy little BBC reiteration of the AI/robotics first-step fallacy, identified as such by the late Hubert Dreyfus in *1972*. And he was able to call it then, as I have argued in The Mirror of Information, because it was already old news at that time–going back to the very origins of modern computing! No, geeks, having taken the first step does not mean that the rest will, or even can, follow! Otherwise, as Dreyfus put it, the first person to climb a tree could claim to have made tangible progress toward reaching the moon!
The continuity and consistency of this fantasy technological discourse, since the early 1950s, are absolutely astonishing. Revealing, of course, a phenomenological commitment, one having nothing directly to do with technological boundary-conditions at all.
I was reminded today of what Fredric Jameson finally, finally, finally says, in The Political Unconscious, in response to the putative question “what if you don’t believe in Marxism?”
He says, with an oleaginous sneer: “Well, then I guess you don’t believe in *history*.” And high-fives all around.
We’re all aware of the current hysteria around “AI” (which resolutely continues to fail to exist), robotics, etc. Along those lines, I’ve just been reading a serious discussion about how to make education “robot-proof.” It involves an interlocking set of supposedly “new” skills, having to do with peer interaction, data-analysis, etc., all adding up to something called “humanics” (yes, really)–which, as far as I can tell, is pretty much ideally designed to be mastered by robots.
The thing is: We already know what kind of learning is robot-proof. It is, as Gadamer would say, seeing what is questionable–or perhaps, in a more literal translation of fragwürdig, question-worthy, worthy of becoming an occasion for posing a question. Broaden this hermeneutic insight just slightly, and you get the utterly ordinary and yet totally profound challenge of (a) noticing what’s interesting and (b) trying to say why it’s interesting.
This is what I do, in the literary classroom, all the way down to the first-year level. In fact, especially there. I give my 200 or so 18-yr-olds, their heads buzzing with Civ 6 and XBox N, their mouths full of clichés about machine learning and Big Dayda, a poem. And I tell them to tell me something interesting about it.
THEY HATE THIS. But they kind of like it too. And for good reason, on both scores.
On the territory of their own hermeneutic consciousness, there neither are, have been, nor ever will be, any robots.
I have wanted for a long time to understand Paracelsus better. I’m finally getting the opportunity, developing a paper on Bacon, Timothy Bright (1551-1615), and the anonymous iatrochemical tract Philiatros (1615) for the upcoming Scientiae conference. Anyway, having read some of Pagel and Debus, the high-water mark for me up to this point has been Charles Webster’s Paracelsus: Medicine, Magic and Mission at the End of Time (Yale 2008). I found this book fantastically informative, if somewhat plodding and shapeless. But now, in the course of a more systematic review of the literature, I have finally come upon Andrew Weeks’s Paracelsus: Speculative Theory and the Crisis of the Early Reformation (Albany 1997), and it is absolutely brilliant! So clear, so thesis-driven, so beautifully written and illuminating! Looking back at Webster, I find that he cites Weeks once, dismissively and not substantively, and then calls him “Geoffrey Weeks” in the index! WTF, mensch?
Stephen Gaukroger has argued that, if you really want to talk about where and when modern natural science began, you have to look to the Christian assimilation of Aristotle in the medieval period. Aristotle’s empiricism, though not an experimentalism, nonetheless prepared the ground for the latter, and even planted its seeds.
Why did the church need Aristotle? Because he’s really really good on identity problems: what it is for something to be different and/or the same, re: something else.
Why did the church need help with that kind of problem? Well, because of the doctrine of the Trinity.
Why was there a doctrine of the Trinity? Well, because of the incarnation and subsequent crucifixion of Christ.
So, if you cook it all down: modern natural science began with Christ.
Dawkins to heavens: Magnificat!
(Hasty disclaimer: Gaukroger doesn’t argue that all the way down.)