All posts by wordlab

Mar 7, 2019 Discussion: Teaching About ‘Global’ DH

A word cloud of student final projects from 2018’s EALC111/511 at Penn

Today we took a look at teaching “global” DH, specifically in terms of text analysis/mining methods, and had a discussion about several elements of that issue: how to teach, what to teach, and why to teach about digital humanities in languages other than English.


Feb 21, 2019 WORD LAB: Paul Vierthaler

Today we were joined (virtually) by Dr. Paul Vierthaler of Leiden University to talk about his work text mining Jin Ping Mei (a 17th-century Chinese novel), in a presentation entitled “Intertextuality, Classification, and Late Imperial Chinese Literature.” Specifically, we looked at intertextuality via text reuse, along with experimental attempts at authorship attribution and text classification. He also taught us very clearly about PCA (principal component analysis), a perennial challenge for WORD LAB participants! Read on for a detailed overview of our conversation.


Feb 15, 2018 WORD LAB – Laura McGrath (MSU)

Laura McGrath of Michigan State University (@lbmcgrath) joined us this week via Skype to talk about measuring “literary novelty,” a project she is working on with Devin Higgins and Arend Hintze. The former is MSU’s data librarian, and the latter is a professor at MSU who specializes in computational approaches to genetics and biology. They also collaborated with the HathiTrust Research Center. The presentation focused on introducing us to the collaborative project, then considering how to move into its second stage in the near future.

March 15, 2016 Article Discussion: Pace of Change

For this week, we read “How Quickly Do Literary Standards Change?” by Ted Underwood and Jordan Sellers. In addition, Ted wrote an accompanying blog post.

Our initial question was: does Underwood and Sellers’s argument stand up? While we were concerned about hand-picked data, one member argued that “the sampling was sampling,” and this was convincing. In addition, we felt that the random sample truly could be as random as possible, given that the HathiTrust archive of 19th-century texts is better than that of other periods. We discussed why that might be: deaccessioning, preservation, the sheer number of books produced in the 19th century as opposed to earlier, OCR accuracy, and copyright restrictions. There was also the procedure of digitization itself: the University of Michigan digitized its research collections, for example, rather than items in special collections, and this might have included many more 19th-century books.

We also asked what exactly significance and prestige mean. One member brought up the example of Wendell Harris’s article “Canonicity” and his argument that texts become part of the canon based on being part of the conversation; this would go along with the idea that things being reviewed in magazines are significant in that they are being talked about in the first place, whether positively or negatively. And even if things were being negatively reviewed, they still helped to shape literary conversation and thus “the canon” and what survived over time. One member also raised the question of doing sentiment analysis of some kind (whether yes/no or picking out significant words, as another member suggested) on the reviews and adding that data to the analysis.
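The yes/no version of that sentiment suggestion can be sketched as a simple lexicon lookup. This is an invented illustration, not anything from the paper: the word lists and sample reviews below are made up, and a real pass over 19th-century reviews would want a curated lexicon or a trained classifier.

```python
import re

# Invented mini-lexicons of reviewing language; stand-ins only.
POSITIVE = {"masterly", "charming", "admirable", "delightful"}
NEGATIVE = {"dull", "tedious", "feeble", "vulgar"}

def review_sentiment(review):
    """Return +1 (positive), -1 (negative), or 0 (neutral/mixed)."""
    words = re.findall(r"[a-z]+", review.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

reviews = [
    "a charming and admirable volume of verse",
    "dull, tedious, and feeble throughout",
]
# These +1/-1 signs could then be added as a feature alongside the
# reviewed/not-reviewed label in the original analysis.
signs = [review_sentiment(r) for r in reviews]
```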

A question was also raised by a Wharton member: with literary analysis of this kind, is the goal interpretation or explanation, and what is the outcome of the research? In business research, the conclusions would suggest an action. We ended up talking about how the paper is really asking whether there is anything to interpret or explain in the first place. The authors stop short of explaining or generalizing, and emphasize the narrowness of their claims. (Whether one should take this approach or go out on a limb to make bigger, but potentially wrong, claims was also discussed at this point.) We also wondered: if prediction succeeds, does that mean there “is” an explanation somewhere of what makes things significant? Is there a latent pattern that exists, one that we as humans might balk at recognizing?

Finally, we also discussed why the line was always going upward. It seems that this is because the works reviewed and random non-reviewed sample adhered more closely to the “standards,” whatever they are, over time for some reason. Again, we can speculate but not exactly explain what’s going on behind the scenes there.

One conclusion was that we buy the continuity, or lack of change, in standards over time. We see the same in other periods that carry a narrative of change but in reality show much continuity: Meiji Japan (mid-to-late 19th century), and Lu Xun’s use of classical Chinese in his writing despite his being at the vanguard of “modern” Chinese literature.

Addendum: Scott has been tinkering with this, reducing the list of words and making better predictions with between 100 and 400 words. The selection looks only at the training data, without seeing the test data in advance. Depending on the training data it picks slightly different sets, but a small subset of about 15 words always appears. At the top of the list is “eyes,” and if you use just the word “eyes” you get 63% accuracy! See Scott’s GitHub for the code and results.
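As a toy illustration of that training-only word selection (this is not Scott’s actual code; the texts, labels, and selection rule below are all invented):

```python
from collections import Counter

# Invented miniature corpus: 1 = reviewed, 0 = random sample.
train = [
    ("her eyes met his in the evening light", 1),
    ("the eyes of the heroine filled with tears", 1),
    ("a report on the price of corn and freight", 0),
    ("the committee discussed the annual budget", 0),
]
test = [
    ("his eyes were dark and thoughtful", 1),
    ("the quarterly budget report arrived", 0),
]

def top_words(corpus, label, k=3):
    """Words most over-represented in one class of the *training* set only."""
    pos = Counter(w for t, y in corpus if y == label for w in t.split())
    neg = Counter(w for t, y in corpus if y != label for w in t.split())
    ranked = sorted(pos.items(), key=lambda wc: wc[1] - neg[wc[0]], reverse=True)
    return {w for w, _ in ranked[:k]}

markers = top_words(train, label=1)  # chosen without ever seeing the test data

def predict(text):
    """Classify a text as 'reviewed' if any marker word appears."""
    return 1 if any(w in markers for w in text.split()) else 0

accuracy = sum(predict(t) == y for t, y in test) / len(test)
```

The point of the sketch is the discipline, not the classifier: the word list is fixed from the training data alone, so held-out accuracy is an honest estimate.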

We got a future potential WORD LAB project out of this discussion, so it was a very productive reading and session!

March 1, 2016 WORD LAB – Mark Ravina

We were joined via Skype by Mark Ravina, professor of history at Emory University. Mark researches Japanese political history in the latter half of the 19th century. He told us about his recent text analysis work on 19th-century Japanese history and also, with a student, on the recent student protests involving race at various universities around the United States. We’ll be following up with Mark on his research in the Fall 2016 WORD LAB lineup.

February 23, 2016 Article Discussion: Text Mining Online Reviews

On Tuesday, we discussed “Digging for Gold with a Simple Tool: Validating Text Mining in Studying Electronic Word-of-Mouth (eWOM) Communication,” by Chuanyi Tang and Lin Guo (2013).

This article tackles the problem of text mining from a marketing perspective, testing whether text mining offers useful information in the study of eWOM (electronic word-of-mouth, aka online reviews). Tang and Guo conclude that while the Star Rating of an online review is the best predictor of people’s attitudes in the review, text mining can offer additional nuance.

Much of our conversation centered around the LIWC software used by Tang and Guo for their study. Essentially an amped-up text tagger, LIWC checks each word in a text against its range of dictionaries and produces a statistical breakdown of that text.

LIWC’s main strength seems to be its dictionaries, which are thoroughly researched and allow for somewhat sophisticated tagging of words by a range of features: parts of speech, emotions (positive or negative), and many topical categories including “Body,” “Ingestion,” “Time,” “Money,” and “Religion” (over 400 categories in all). In the case of online reviews, for example, Tang and Guo found that “Negations” and “Money” were both effective predictors. These dictionaries are, however, proprietary, and Christine pointed out the difficulty of accessing the full dictionaries in the latest version of the LIWC software.
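As a rough sketch of what such dictionary-based tagging looks like (the categories and word lists below are invented stand-ins, since LIWC’s real dictionaries are proprietary and far larger):

```python
import re
from collections import Counter

# Invented stand-in dictionaries; LIWC's real ones are much bigger.
dictionaries = {
    "Money":     {"price", "refund", "cheap", "expensive", "paid"},
    "Negations": {"not", "no", "never", "cannot"},
    "PosEmo":    {"great", "love", "excellent"},
}

def tag_text(text):
    """Return each category's share of total words, as a percentage,
    mimicking LIWC-style output."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for cat, vocab in dictionaries.items():
            if w in vocab:
                counts[cat] += 1
    return {cat: 100 * counts[cat] / len(words) for cat in dictionaries}

review = "I paid a high price and I cannot say the refund process was great"
profile = tag_text(review)  # e.g. profile["Money"] is the Money-word percentage
```

A study like Tang and Guo’s would then feed these percentage scores, alongside the star rating, into a regression.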

We tested LIWC on a segmented version of Dickens’s David Copperfield, as a good example of a coming-of-age story, but weren’t able to find strong trends. The whole paper was an interesting counterpoint to previous work we’ve discussed on text mining in the humanities, where it’s not always so easy to validate the results.

Thanks to Christine Chou for suggesting the piece and taking the time to give us a great overview!

January 19, 2016 Article Discussion: Authorship Case Studies

For this week’s WORD LAB discussion, we read David L. Hoover’s article “The Tutor’s Story: A Case Study of Mixed Authorship” (English Studies 93:3, 324–339, 2012). In it, Hoover looks at The Tutor’s Story, a novel begun by the Victorian author Charles Kingsley and finished by his daughter Mary St Leger Kingsley Harrison, writing under the name Lucas Malet. Hoover uses this text as a test case to compare a range of authorship attribution methodologies, including Burrows’s Delta, Craig’s version of Burrows’s Zeta, and t-tests.

Hoover compares his results to an annotated version of the published text, discovered partway through his research, containing Malet’s own markings about which parts of the text are hers and which are Kingsley’s.

The Methods

We spent most of our time today teasing out the specifics of the different methods. There seem to be two stages to the process: (1) selecting the words that will compose the “fingerprint” of the author’s style, and (2) analyzing the statistical similarity of these words. Most of Hoover’s explanation covers variations on the first stage. For example, Delta uses the most frequent words in each text, while Zeta is based “not on the frequencies of words, but rather on how consistently the words appear” (329). Our consensus was that we were interested in reading more about this, and we may move to some of the original articles on the Delta method in future weeks. The R package stylo also has functions for Delta and Zeta that we could explore.

Big Picture Questions

Our discussion also brought up some larger conceptual issues around authorship attribution. How are the results affected by what we choose as the master corpus? How much does an author’s style vary across genres? (Malet’s only children’s book, Little Peter, is mentioned multiple times as disrupting the analysis, perhaps because of the smaller vocabulary of a children’s book.) And how different is the relatively clean choice between two potential authors from the problems we face when there are more possible authors?

Significantly, Hoover’s results sometimes disagree with Malet’s markings, but it is also not entirely clear when Malet made those markings and how reliable they are. How confident do we need to be in the machine’s results before we start trusting the machine over the human?

Further Implications & Links

We touched base on a couple of similar problems, including Andrew Piper’s prediction of the Giller Prize. We also discussed the recent discovery of Dickens’s annotated set of the journal All the Year Round, which settled a ten-year computer textual analysis project trying to determine who wrote what in the anonymous journal.
We also noted a tool for running statistical analyses (and helping you figure out which statistical method is best in your situation).

December 1, 2015 WORD LAB – Beth Seltzer

Beth Seltzer is a library postdoc at Penn Libraries who works on Victorian detective fiction; she is a recent Temple University graduate. You can find her on Twitter: @beth_seltzer. Her talk is titled “Drood Resuscitated”!

The presentation is on her work on Charles Dickens’s last novel, The Mystery of Edwin Drood, which was serialized until, and for a short time after, his death, since he had completed a few more installments before he died. The book leaves many unanswered questions, including about its characters and whether it was even going to be a detective novel in the first place.
