Wednesday, December 1, 2010

Digital Humanities and Humanities Computing

I've had "digital humanities" in the blog's subtitle for a while, but it's a terribly offputting term. I guess it's supposed to evoke future frontiers and universal dissemination of humanistic work, but it carries an unfortunate implication that the analog humanities are something completely different. It makes them sound older, richer, more subtle—and scheduled for demolition. No wonder a world of online exhibitions and digital texts doesn't appeal to most humanists of the tweed– and dust-jacket crowd. I think we need a distinction that better expresses how digital technology expands the humanities, rather than constraining it.

It's too easy to think Digital Humanities is about teaching people to think like computers, when it really should be about making computers think like humanists.* What we want isn't digital humanities; it's humanities computing. To some degree, we all know this is possible: we all think word processors are better than pen and paper, or JSTOR better than buried stacks of journals (musty musings about serendipity aside). But we can go farther than that. Manfred Kuehn's blog is an interesting project in exploring how notetaking software can reflect and organize our thinking in ways that create serendipity within one person's own notes. I'm trying to figure out ways of doing that on a larger body of texts, though we could think of those texts as notes themselves.
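To make that concrete, here is a minimal sketch of one way serendipity among notes might be engineered: surface pairs of notes that share distinctive vocabulary. The notes/ directory, the scoring, and the cutoff are all illustrative assumptions, not a description of Kuehn's setup or of any particular notetaking program.

```python
# A hedged sketch: link notes that share characteristic words.
# The notes/ directory and the crude frequency-times-rarity score
# are illustrative assumptions, not any real tool's method.
import re
from collections import Counter
from itertools import combinations
from pathlib import Path

def distinctive_words(text, doc_freq, n_docs, top=10):
    """Pick the words most characteristic of one note: frequent in it, rare across the rest."""
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = {w: c * (n_docs / doc_freq[w]) for w, c in counts.items()}
    return set(sorted(scored, key=scored.get, reverse=True)[:top])

notes = {p.name: p.read_text(encoding="utf-8") for p in Path("notes").glob("*.txt")}
doc_freq = Counter(w for t in notes.values()
                   for w in set(re.findall(r"[a-z']+", t.lower())))
keys = {name: distinctive_words(t, doc_freq, len(notes)) for name, t in notes.items()}

# Any two notes whose distinctive vocabularies overlap get suggested as neighbors.
for a, b in combinations(sorted(notes), 2):
    shared = keys[a] & keys[b]
    if shared:
        print(f"{a} <-> {b}: share {sorted(shared)}")
```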


At times I think the best way to sell this whole enterprise might be as a kind of Derridean game around the ruins of language. Wordcounts are a way of temporarily taking seriously the death of the author, and viewing our language as an autonomous network that shifts and moves in time.
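As a toy illustration of that author-free reading, here is a sketch that tracks the relative frequency of a few words across a corpus year by year, so the language itself becomes the object that moves through time. The texts/&lt;year&gt;/ layout and the tracked words are hypothetical stand-ins for whatever body of texts is at hand.

```python
# A toy sketch, not a real pipeline: relative frequencies of tracked words,
# computed year by year over an assumed texts/<year>/*.txt layout.
# Both the layout and the sample words are hypothetical.
import re
from collections import Counter
from pathlib import Path

def yearly_frequencies(corpus_dir, words):
    """Return {year: {word: relative frequency}} for the tracked words."""
    trends = {}
    for year_dir in sorted(Path(corpus_dir).iterdir()):
        if not year_dir.is_dir():
            continue
        counts, total = Counter(), 0
        for text in year_dir.glob("*.txt"):
            tokens = re.findall(r"[a-z']+", text.read_text(encoding="utf-8").lower())
            counts.update(tokens)
            total += len(tokens)
        if total:
            trends[year_dir.name] = {w: counts[w] / total for w in words}
    return trends

if __name__ == "__main__":
    for year, freqs in sorted(yearly_frequencies("texts", ["progress", "providence"]).items()):
        print(year, freqs)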

Historians think numerically (at least they used to) about dollars, shipments, letters, and cities. So we need to show how statistics can 'think humanistically,' too, which I'm sure they can. Too much of the old digital history (cliometrics, a word no one is eager to revive) was about forced hypothesis testing, error bands and all. It worked well for authorship attribution, and not much else. Instead, we need to program computers to do things historians already value, using their strengths to do it faster and more rigorously than we can ourselves: draw distinctions between words, find trends behind temporary fluctuations, and so on. Many of these tools exist, and they are worlds away from the t-tests intro stats classes impress on the next generation of scientists. We should be using them. Marketers use them to understand where to open stores, bankers use them to spot trends on the news wires or Twitter to help them decide when to buy and sell, and politicians use them to learn where to turn out their vote. We can get a lot out of them, too.
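For instance, finding the trend behind temporary fluctuations can start as simply as smoothing a noisy yearly series; a centered moving average is the crudest version of what loess or splines do more carefully. The numbers below are invented purely for illustration.

```python
# A crude sketch of separating trend from fluctuation: a centered moving
# average over a yearly frequency series. The series is invented; loess,
# splines, and the like refine the same basic idea.
def moving_average(series, window=5):
    """Smooth a list of (year, value) pairs with a centered window."""
    half = window // 2
    smoothed = []
    for i, (year, _) in enumerate(series):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        values = [v for _, v in series[lo:hi]]
        smoothed.append((year, sum(values) / len(values)))
    return smoothed

if __name__ == "__main__":
    # An invented series: a slow upward trend plus alternating year-to-year noise.
    noisy = [(1850 + i, 0.0010 + 0.0001 * i + (-1) ** i * 0.0003) for i in range(20)]
    for year, value in moving_average(noisy):
        print(year, round(value, 5))
```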

*Making computers think like humanists, and making humanists think like humanists about research done by computers: that means accepting its fuzziness and loose ends, rather than adopting the stance they're both very fond of and very bad at, where they immediately throw out swathes of scientific research on the grounds of some minor piddle (a not-completely-random sample, known errors in the metadata). Our physical archives are flawed and incomplete, and so will be our digital ones.
