[Milton-L] Culturomics? Genome?

Campbell, W. Gardner Gardner_Campbell at baylor.edu
Fri Dec 17 13:38:26 EST 2010


I'm not sure what we're called to respond to here. The Guardian article? The New York Times article? The Science article? Inelegant neologisms (though I confess "culturomics" doesn't really upset me, and "genome" intrigues me the way all metaphors do)? The omission of literary scholars from the research team? Faddishness? (If the latter, who in the academy will 'scape whipping?)

Or is it the idea that quantitative data can count as evidence in the work of the humanities, interpretive or otherwise? We've already formulated our response to the last idea. Quantitative data have always counted as evidence, though we are free, even obliged to question the source and interpretation of these data, as we are with everything brought forward as evidence in any argument. 

I'm intrigued and excited by the possibilities here, just as I am by the so-called "neuroscientific turn" in some areas of literary scholarship. It's a bit dismaying that with all the well-founded skepticism and caution in our discussion there's not an equal sense of excitement and sheer unbridled curiosity. But then the advent of print technologies was also seen as a barbarians-at-the-gate moment, as we know. Shouldn't that example help us be a little readier to enlarge our sympathies at this very interesting moment in human history?

I hope so. The Science article concludes with a brief discussion of culturomics, defining it thus:

"Culturomics is the application of high-throughput data collection and analysis to the study of human culture. Books are a beginning, but we must also incorporate newspapers ... manuscripts ... maps ... artwork ... and a myriad of other human creations.... Of course, many voices--already lost to time--lie forever beyond our reach."

"Culturomic results are a new type of evidence in the humanities. As with fossils of ancient creatures, the challenge of culturomics lies in the interpretation of this evidence."

I take this to be a real challenge, a welcome challenge, not the only challenge worth taking up, but certainly one we ought to embrace. Our expertise, our training, perhaps even our sensibilities can help to refine both methods and analyses. Otherwise, we seem to be saying that this evidence is useless, meaningless, irrelevant. It seems to me that would be a difficult case to make without resorting to massive amounts of special pleading.

Gardner


--until February 18--
Dr. Gardner Campbell
Director, Academy for Teaching and Learning
Associate Professor of Literature, Media, and Learning, Honors College
Baylor University
One Bear Place #97189
Waco, TX 76798-7189
254-710-4064
www.gardnercampbell.net

--beginning February 21--
Dr. Gardner Campbell
Director, Professional Development and Innovative Initiatives, Division of Learning Technologies
Associate Professor of English
Virginia Tech
Gardner.campbell at vt.edu



-----Original Message-----
From: milton-l-bounces at lists.richmond.edu [mailto:milton-l-bounces at lists.richmond.edu] On Behalf Of Shoulson, Jeffrey
Sent: Friday, December 17, 2010 10:12 AM
To: John Milton Discussion List
Cc: Richard Strier; C. Robertson McClung; Aden Evens; Mary at koko.richmond.edu; Flanagan; Katharine Conley
Subject: Re: [Milton-L] Culturomics? Genome?

Thanks for raising this, Tom.  There's a similar article in the NY Times today (see this link: http://www.nytimes.com/2010/12/17/books/17words.html), and I heard a report about it on NPR last night.

I, too, was struck by the apparent impulse to use scientific terms as though that were necessary to give the humanities greater credibility and, more importantly, fundability.  It should be pointed out, though, that the researchers who are quoted in this article and publishing their results are mathematicians, not literary scholars or even socio-linguists.

I'm also suspicious of those claims about linguistic novelty and "dark matter."

I suppose my view here is that, as with just about any new technology, what one CAN do with it far outpaces what one OUGHT to do with it or what genuine insights it can provide.  The answer to your cri de coeur, I think, is for humanities scholars to learn more about these technologies and to integrate them into what we have already been trained to do.  There are some thoughtful folks out there who are working in a more helpful fashion (two names that quickly come to mind are Kathy Rowe and Jeffrey Shandler), but it's no surprise that this is how it first gets publicized in the larger media.

Best,

Jeffrey



Jeffrey S. Shoulson, Ph. D.
Associate Professor of English and Judaic Studies
University of Miami
PO Box 248145
Coral Gables, FL 33124-4632

(o) 305-284-5596
(f) 305-284-5635

ON LEAVE, AY 2010-11
Katz Center for Advanced Judaic Studies
University of Pennsylvania
420 Walnut Street
Philadelphia, PA 19106

(o) 215-238-1290, ext. 413

jshoulson at miami.edu
www.as.miami.edu/english/people/#jshoulson





On Dec 17, 2010, at 10:53 AM, Thomas H. Luxon wrote:

Fellow scholars,

I read this in today's Guardian about two "culturomics" researchers at Harvard who are using Google data and $ to study the English language "genome":

"In their initial analysis of the database, the team found that around 8,500 new words enter the English language every year and the lexicon grew by 70% between 1950 and 2000. But most of these words do not appear in dictionaries. "We estimated that 52% of the English lexicon - the majority of words used in English books - consist of lexical 'dark matter' undocumented in standard references," they wrote in the journal Science (the full paper is available with free online registration)."

Let's talk a bit about terms like "culturomics" and "genome" and the apparent need to sound like a scientist (a wacky scientist at that) in order to be taken seriously by the media and govt grant dispensers these days.

But first, let me try to cast some doubt on the notion that 52% of the English lexicon (as represented by 4% of the books ever published in English), the majority of words used in English books, does not appear in any dictionaries or other reference books.  This claim falls so far outside my experience as a reader and dictionary user that I want to say: Are you kidding?  Maybe their computer algorithm is good at searching a word database and very, very poor at using a dictionary. I suspect that their search algorithm (Harvard's, not Google's) fails to allow for any sort of conjugation or inflection, so that, for example, the word "indirectly" comes up as "dark matter."  Is this the future of highly funded digital humanities?  What can we do about this?
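Tom's hypothesis about inflection can be sketched in a few lines of Python. This is purely illustrative, not the Harvard team's actual method: the toy headword list and suffix rules are invented for the example. It shows how an exact-match lookup would count a derived form like "indirectly" as undocumented "dark matter," while a lookup that strips common suffixes first would not.

```python
# Toy stand-in for a reference dictionary's headword list (hypothetical).
HEADWORDS = {"direct", "indirect", "run", "happy"}

# Hypothetical rules for stripping common derivational/inflectional suffixes.
SUFFIXES = ["ly", "ness", "ing", "ed", "s"]

def naive_is_dark_matter(word):
    """Counts a word as undocumented unless it is itself a headword."""
    return word not in HEADWORDS

def lemma_aware_is_dark_matter(word):
    """Also tries stripping common suffixes before giving up."""
    if word in HEADWORDS:
        return False
    for suffix in SUFFIXES:
        if word.endswith(suffix) and word[: -len(suffix)] in HEADWORDS:
            return False
    return True

# "indirectly" looks like dark matter only to the naive lookup:
print(naive_is_dark_matter("indirectly"))        # True
print(lemma_aware_is_dark_matter("indirectly"))  # False
```

On this (admittedly simplified) picture, the size of the reported "dark matter" would depend heavily on how aggressively the lookup normalizes word forms before consulting the dictionary.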

Tom Luxon
Cheheyl professor and Director
Dartmouth Center for the Advancement of Learning
Professor of English
_______________________________________________
Milton-L mailing list
Milton-L at lists.richmond.edu
Manage your list membership and access list archives at http://lists.richmond.edu/mailman/listinfo/milton-l

Milton-L web site: http://johnmilton.org/




