[Milton-L] Culturomics? Genome?
jshoulson at mail.as.miami.edu
Fri Dec 17 11:11:44 EST 2010
Thanks for raising this, Tom. There's a similar article in the NY Times today (see this link: http://www.nytimes.com/2010/12/17/books/17words.html) and I heard a report about it on NPR last night.
I, too, was struck by the apparent impulse to use scientific terms as though that were necessary to give the humanities greater credibility and, more importantly, fundability. It should be pointed out, though, that the researchers who are quoted in this article and publishing their results are mathematicians, not literary scholars or even socio-linguists.
I'm also suspicious of those claims about linguistic novelty and "dark matter."
I suppose my view here is that, as with just about any new technology, what one CAN do with it far outpaces what one OUGHT to do with it or what genuine insights it can provide. The answer to your cri de coeur, I think, is for humanities scholars to learn more about these technologies and to integrate them into what we have already been trained to do. There are some thoughtful folks out there who are working in a more helpful fashion (two names that quickly come to mind are Kathy Rowe and Jeffrey Shandler), but it's no surprise that this is how it first gets publicized in the larger media.
Jeffrey S. Shoulson, Ph.D.
Associate Professor of English and Judaic Studies
University of Miami
PO Box 248145
Coral Gables, FL 33124-4632
ON LEAVE, AY 2010-11
Katz Center for Advanced Judaic Studies
University of Pennsylvania
420 Walnut Street
Philadelphia, PA 19106
(o) 215-238-1290, ext. 413
jshoulson at miami.edu
On Dec 17, 2010, at 10:53 AM, Thomas H. Luxon wrote:
I read this in today's Guardian about two "culturomics" researchers at Harvard who are using Google data and $ to study the English language "genome":
"In their initial analysis of the database, the team found that around 8,500 new words enter the English language every year and the lexicon grew by 70% between 1950 and 2000. But most of these words do not appear in dictionaries. "We estimated that 52% of the English lexicon – the majority of words used in English books – consist of lexical 'dark matter' undocumented in standard references," they wrote in the journal Science (the full paper is available with free online registration)."
Let's talk a bit about terms like "culturomics" and "genome" and the apparent need to sound like a scientist (a wacky scientist at that) in order to be taken seriously by the media and govt grant dispensers these days.
But first, let me try to cast some doubt on the notion that 52% of the English lexicon (as represented by 4% of the books ever published in English), the majority of words used in English books, does not appear in any dictionaries or other reference books. This claim falls so far outside my experience as a reader and dictionary user that I want to say, Are you kidding? Maybe their computer algorithm is good at searching a word database and very, very poor at using a dictionary. I suspect that their search algorithm (Harvard's, not Google's) fails to allow for any sort of conjugation and inflection, so that, for example, the word "indirectly" comes up as "dark matter." Is this the future of highly funded digital humanities? What can we do about this?
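The suspicion here, that an exact-match lookup against a dictionary's headword list would misclassify ordinary inflected or derived forms as undocumented "dark matter," can be illustrated with a minimal sketch. This is not the actual method of the Science paper; the word lists and the crude suffix-stripping below are hypothetical, chosen only to show how the two counting strategies diverge.

```python
# Illustrative sketch, NOT the researchers' actual algorithm: a tiny
# hypothetical "dictionary" of headwords, plus two ways of deciding
# whether a token counts as lexical "dark matter".

HEADWORDS = {"direct", "indirect", "run", "book"}  # hypothetical sample

def exact_match_dark_matter(tokens):
    """Flag any token not literally listed as a headword."""
    return [t for t in tokens if t not in HEADWORDS]

def crude_lemma(token):
    """Strip a few common suffixes; real lemmatization is far richer."""
    for suffix in ("ly", "ing", "ed", "s"):
        if token.endswith(suffix) and token[: -len(suffix)] in HEADWORDS:
            return token[: -len(suffix)]
    return token

def lemma_aware_dark_matter(tokens):
    """Flag only tokens whose (crudely guessed) lemma is also unlisted."""
    return [t for t in tokens if crude_lemma(t) not in HEADWORDS]

tokens = ["indirectly", "books", "directed", "frabjous"]
print(exact_match_dark_matter(tokens))  # → all four flagged as "dark matter"
print(lemma_aware_dark_matter(tokens))  # → ['frabjous'], the genuine novelty
```

On an exact-match count, every inflected form inflates the "undocumented" share of the lexicon; once forms are folded back onto their headwords, only a truly unlisted coinage remains.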
Cheheyl Professor and Director
Dartmouth Center for the Advancement of Learning
Professor of English
Milton-L mailing list
Milton-L at lists.richmond.edu
Manage your list membership and access list archives at http://lists.richmond.edu/mailman/listinfo/milton-l
Milton-L web site: http://johnmilton.org/