But it would be really interesting if true--in my database of mostly non-newsy texts, do authors maybe shy away from using words that have too specific a meaning at the present moment? Lack of use might be interesting in all sorts of other ways, even if this one is probably just a random artifact.
In general, is there some way of finding books that use a word much less than context would suggest--and could we then ask why? That's nearly impossible with existing electronic resources--I've been working on a long post about that--but it might give interesting ways of thinking about single texts we know well. This is a way massive textual statistics could help in looking at a single book. We could find out what words Thorndike avoids in his psychological texts that we'd expect to see a lot of, given the broader context of early-20th-century psychology--or proto-behaviorism, or name your area. Most of these would be expected, but some might not be.
That's the single-text use. On massive corpora: if word-use shows intellectual context, lack of use might do so too. It's sort of the textual equivalent of listening for 'the notes they don't play.' The most obvious example might be to grab some late-19th-century American history and look for the elision of certain slavery-related words. That's one of the areas where existing historiography has already made a lot of hay out of missing terms. Could we find areas it hasn't?
But in either case, we need large text sources to build models of the words we'd expect to see. Even more than positive counts, negative counts are something that's really hard to get through traditional close reading unless you have a very clear idea what you're looking for.
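One rough sketch of what such a model might look like in practice: given word counts for a focal text and for a large background corpus, flag the words that appear much less often than the background frequencies would predict. Everything here--the function name, the choice of Dunning's log-likelihood ratio as the score--is my own assumption about how you might do it, not a description of any existing tool.

```python
from math import log

def underused_words(focal_counts, background_counts):
    """Rank words by how conspicuously rare they are in a focal text,
    relative to a background corpus. Returns (word, observed, expected,
    score) tuples, most suspiciously absent words first. The score is
    Dunning's log-likelihood ratio (G2) for the 2x2 contingency of
    word-vs-other-words in focal-vs-background."""
    f_total = sum(focal_counts.values())
    b_total = sum(background_counts.values())
    grand = f_total + b_total
    results = []
    for word, b in background_counts.items():
        a = focal_counts.get(word, 0)
        # Expected count in the focal text if it matched the pooled rate.
        expected = (a + b) * f_total / grand
        if a >= expected:
            continue  # only interested in under-use, not over-use
        e_background = (a + b) * b_total / grand
        g2 = 2 * ((a * log(a / expected) if a else 0)
                  + (b * log(b / e_background) if b else 0))
        results.append((word, a, round(expected, 1), round(g2, 1)))
    return sorted(results, key=lambda r: r[3], reverse=True)
```

With a toy background where "slavery" is common but the focal text never uses it, the word surfaces at the top of the list; at real scale, the background counts would come from exactly the kind of large text source the paragraph above calls for.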