I'm in Moscow now. I still have a few things to post from my layover, but there will be considerably lower volume through Thanksgiving.
I don't want to comment too much on yesterday's (today's? I can't tell anymore) article about digital humanities in the New York Times, but a couple of people e-mailed about it. So a couple of random points:
1. Tony Grafton is, as always, magnanimous: but he makes an unfortunate distinction between "data" and "interpretation" that gives others cover to view digital humanities less charitably than he does. I shouldn't need to say this, but: the whole point of data is that it gives us new objects of interpretation. And the Grafton school of close reading, which now seems generally to involve writing a full dissertation on a single book, is also not a substitute for the full range of interpretive techniques that play on humanistic knowledge.
(more after the break)
2. The article hints at, but doesn't fully explain, the conscious retrenchment of history into innumeracy in recent decades. I could post a little jeremiad about this at some point. But it's somewhat sad that literature departments seem to be leading much of this charge.
3. Timothy Burke has a good riposte to the more Luddite commenters around the second page of comments on the article.
4. Both the article and the commenters seem to occasionally blur the difference between digitization, digital curation, and quantitative research (meant broadly--research for publication, for visualization, for teaching). This blog, to be clear, draws on the first, which has little to do with the humanities per se, to do the third. I generally have little interest in the second, but I'm not the sort of person who spends much time at local history society exhibits, either. Maybe there's another category in there.
5. The role of Google in all this is extremely fraught. I'm using Internet Archive books that were scanned by Google but are now wholly in the public domain; the OCR, whatever its problems, is (I *think*) by IA and should be in the public domain. It would be extremely bad for the field, I think, if the use of Google metadata and analysis tools became de rigueur for research on digital texts, something which could easily happen. On the other hand, no one else has a database of scanned texts from after 1922, except for the scattered results of Google scanning at various libraries. I was a big fan of someone's — was it Dan Cohen again? — request to include Library of Congress digitization in the stimulus package, but that was not to be.