Category Archives: Meeting summaries

Feb 15, 2018 WORD LAB – Laura McGrath (MSU)

Laura McGrath of Michigan State University (@lbmcgrath) joined us this week via Skype to talk about measuring “literary novelty,” a project she is working on with Devin Higgins and Arend Hintze. The former is MSU’s data librarian, and the latter is a professor at MSU who specializes in computational approaches to genetics and biology. They have also collaborated with the HathiTrust Research Center. The presentation focused on introducing us to the collaborative project and then on how to move into its second stage in the near future.

March 1, 2016 WORD LAB – Mark Ravina

On March 1, we were joined via Skype by Mark Ravina, professor of history at Emory University. Mark researches Japanese political history in the latter half of the 19th century. He told us about his recent text analysis work on 19th-century Japanese history, as well as a collaboration with a student analyzing the recent student protests over race at universities around the United States. We’ll be following up with Mark on his research in the Fall 2016 WORD LAB lineup.

December 1, 2015 WORD LAB – Beth Seltzer

Beth Seltzer is a library postdoc at Penn Libraries who works on Victorian detective fiction and is a recent Temple University graduate. You can find her on Twitter: @beth_seltzer. The title of her talk is ‘Drood Resuscitated’!

The presentation covered her work on Charles Dickens’s last novel, The Mystery of Edwin Drood, which was serialized up to and just a little after his death, since he had written a few more installments before he died. The book leaves many unanswered questions, including about the fates of the characters and whether it was even going to be a detective novel in the first place.


January 27, 2015 WORD LAB – Katie Rawson

Katie Rawson, WORD LAB co-organizer, presented on her current research and its background on January 27, the first WORD LAB of 2015. Her work focuses on the Southern Foodways Alliance (SFA) oral histories of food culture in the American South, and specifically on using topic modeling to analyze those narratives.

The SFA’s mission is to “document and celebrate food in the South” and also to engage in racial reconciliation. It does this largely by collecting oral histories and making them (along with transcripts in PDF format) freely available, as well as by hosting films. Interestingly, Katie noted that the filmmaker who documents Southern food culture and workers is male, whereas the majority of the oral history interviewers are white and female. She emphasized that the SFA has particular aesthetics and particular stories it wants to tell, although it is also a question of what the organization can do with what is actually collected: it frames the material, but it cannot control the content.

Katie was interested in analyzing the oral histories, which she went through for her dissertation, to discover themes in them – especially those related to gender roles, work, family, and business – that aren’t immediately apparent to a human reader.

She began her work by downloading all the oral histories and making them into plain text files, with little manual cleanup (they’re mostly easy to OCR and fairly standardized), then used Topic Modeling Tool (a GUI for MALLET) to do her analysis. She made her own list of stop words, which expanded greatly over time in specific ways; when trying to get past what “grouped them together already,” she found herself adding not just personal names and places but all food-related words to the stop list, making it longer and longer. This was an attempt to get at latent discourse that might gather the histories into different groups than a human would “by project” or “by theme,” or might show disparities within projects. (For example, topic modeling without this extensive stop word list just identifies projects like “how people talk about barbeque” – the ways in which SFA itself had already organized the histories.)
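Katie’s actual workflow went through the Topic Modeling Tool GUI rather than code, but the same pipeline can be sketched in Python with gensim. Everything below is illustrative: the directory name, the topic count, and the short food-word stop list stand in for her much larger, hand-built list.

```python
from pathlib import Path
from gensim import corpora, models

# Placeholder path and stop words; Katie's real stop list grew to include
# personal names, places, and all food-related vocabulary.
TEXT_DIR = Path("sfa_oral_histories")          # one plain-text transcript per file
EXTRA_STOPWORDS = {"barbecue", "oyster", "shrimp", "restaurant", "kitchen"}

def tokenize(text):
    return [w for w in text.lower().split() if w.isalpha() and w not in EXTRA_STOPWORDS]

docs = [tokenize(p.read_text(encoding="utf-8")) for p in sorted(TEXT_DIR.glob("*.txt"))]

dictionary = corpora.Dictionary(docs)
dictionary.filter_extremes(no_below=5, no_above=0.5)   # drop very rare and very common words
corpus = [dictionary.doc2bow(d) for d in docs]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=20, passes=10, random_state=1)
for topic_id, words in lda.show_topics(num_topics=20, num_words=10, formatted=False):
    print(topic_id, [w for w, _ in words])
```

Re-running this after adding each new layer of stop words (names, then places, then food) is one way to see, topic by topic, what the removals change.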

While Katie is getting something fascinating at the moment, it’s not the “language of running a business” and the “language of food and family” that she was hoping to uncover, and it’s also still not breaking up SFA’s premade sections. What’s the point of doing topic modeling, Katie wonders, if it just breaks the histories into the groups they came in to begin with? Maddie Wilcox suggested that Katie could look at “what happens when you remove each layer” – first personal names, then places, then food – and also look at the exceptions rather than the obvious project divisions. Elias Saba agreed, suggesting going back to before Katie started taking out stop words and looking at outliers: which histories don’t show up in the groups we know they are “in”? Elias also wondered what other words could be removed – for example, everything that indicates “why they were in the interview in the first place” (as Katie put it) – so that Katie would end up with a list of words that people are and are not using, perhaps interesting in itself as an outline of the language.

Katie also explained a specific analysis she did related to gender roles that used topic modeling, in a subset of histories involving oysters. Specifically, she first ran Topic Modeling Tool on the transcripts, then circled every word within a topic that had to do with a family relationship, striking out some topics or words as she went, and also highlighted words that had to do with business. She looked at word frequency as well as the relationships between words within topics, and how they were distributed. Katie went in with expectations, but things broke down differently: rather than the discourse represented in an SFA film about the industry, the one that emerged from the histories was more about women’s work being empowering and interesting. The women are the ones who run the household and the finances, and they can make something when men can’t because their income is more reliable. Even though there is still a divide, Katie found that the discourse here had a rich story to tell about how work is negotiated in the space of the oyster industry, and that what the divide means isn’t as apparent as it seems.

In the end, Katie still wondered about the efficacy of topic modeling in understanding the SFA’s oral histories, but as Brian Vivier pointed out, in at least one case she had found something new – despite having read almost every single interview already for her dissertation. Katie plans to continue her work, to hone the stop word list, and keep thinking about the applicability of topic modeling and how to make the methodology work for her material – and what that methodology might look like.

As a coda, the WORD LAB group raised the question of oral histories as a genre, and how they could be analyzed computationally as such. In addition, we all advocated making this kind of material – including Katie’s data, such as the plain text files – freely available so that it can be worked with more broadly. This is especially important for oral history, which is less frequently studied as a genre, and for a less-studied geographic area such as the American South. We cheered on the recent release of slave narratives from DocSouth and hope that Katie’s work can contribute to the computational study of narratives of the South.

November 4 WORD LAB – “Retranslating Musical Comedy for Shanghai’s Left-wing Film Movement”

Maddie Wilcox presented about her project “Retranslating Musical Comedy for Shanghai’s Left-wing Film Movement.”  She framed her problem – a question of naming a genre – and told us about how she explored and began solving it.

In the 1930s, a filmmaker named Yuan Muzhi called for a new kind of film to be made in China, using the term yinyue xiju, which translates directly as “musical comedy.”  However, the kind of film he was advocating was not what one would classify as a musical comedy by Western definitions – a genre that was already part of the foreign and homegrown film scene in China at the time, and that is often translated as “sing-song pictures” (gechang jupian).

To begin exploring this question of genre names and translation, Maddie used the Shanghai Library database and the Media History Digital Library, using full-text searching to explore how mentions of different genre names were distributed over time.  This ultimately led her to explore the term “operetta” as a better translation for Yuan Muzhi’s idea of musical comedy.
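The searching itself happens inside the two databases, but once the hits are exported (say, by hand into a spreadsheet), charting the distribution of genre terms over time takes only a few lines of Python. The file name and column names below are placeholders for whatever export the databases allow.

```python
import csv
from collections import Counter

# Hypothetical input: one row per full-text search hit,
# with columns "term" and "date" (e.g. "1935-06-12").
counts = Counter()
with open("genre_term_hits.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row["date"][:4]
        counts[(row["term"], year)] += 1

for (term, year), n in sorted(counts.items()):
    print(f"{term}\t{year}\t{n}")
```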

In the Q&A, we discussed different models for exploring the language around these films, including topic modeling and keywords in context.  Ultimately, her research questions may be best served by moving between methods, which would allow her to surface terms that she does not yet know about and then to understand how and when they are being used.
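A keyword-in-context view is simple enough to sketch directly; the file name and search term below are placeholders.

```python
import re

def kwic(text, keyword, width=30):
    """Print every occurrence of keyword with `width` characters of context on each side."""
    for m in re.finditer(re.escape(keyword), text):
        start = max(m.start() - width, 0)
        print("..." + text[start:m.end() + width].replace("\n", " ") + "...")

# e.g. kwic(open("film_magazine_1935.txt", encoding="utf-8").read(), "operetta")
```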

We ended by exploring how the digital corpora she used were made and OCRed (a prelude to our discussion this past week).  We discussed the lack of digitization of, and metadata for, advertisements.  We examined exactly what was OCRed in the PDFs from the library, using a few methods.

In a lively series of experiments, using different tools on several machines (and the expertise of our Chinese readers), we uncovered what parts of the text had been OCRed, the quality of the OCR, and its arrangement.  It was, not surprisingly, dirty; however, we were surprised at how much of the text appeared not to have been OCRed at all.  While it seemed that her results were still helpfully suggestive, we discussed what a difference having a fully OCRed text would make for keyword search.
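One quick way to check what a scanned PDF’s text layer actually contains is to pull out the embedded text and inspect it, for instance with pdfminer.six; the file name here is a placeholder for one of the library PDFs.

```python
from pdfminer.high_level import extract_text

text = extract_text("shanghai_library_scan.pdf")   # placeholder file name
print(len(text), "characters in the embedded text layer")
print(repr(text[:1000]))   # eyeball the raw layer: large gaps mean pages were never OCRed
```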

November 11 OPEN LAB

This week we discussed Ryan Cordell’s Viral Texts project and paper “Infectious Texts: Modeling Text Reuse in Nineteenth-Century Newspapers” (forthcoming). Ryan visited Penn the previous week and gave a talk and a workshop about this project, so we seized the opportunity to further discuss his work at WORD LAB reading club.

Our discussion focused on how Ryan’s techniques might be applicable to other projects that our members are interested in, after we went over some finer points of the article (including wondering which genres the “fast” and “slow” words appeared in, and how they filtered out long-form advertisements). Brian Vivier wondered if the way they’re generating overlapping n-grams would allow for comparison in classical Chinese texts without whitespace for word divisions: picking a 5-gram, for example, would certainly catch some words. However, we also questioned whether we’d pick up too many word fragments and if this noise would be too much for the analysis.

This technique could possibly be used, we thought, to compare documents in a large corpus for text reuse, just as Ryan did with “viral” newspaper reprints, which would be extremely likely in the case of classical Chinese texts. For example, we could find when precedents are invoked or imperial decisions are cited. Where are the patterns, and would the noise fall out if we are looking for patterns like this?
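As a rough illustration of Brian’s 5-gram idea and the reuse comparison it might support: the sketch below generates overlapping character n-grams from unsegmented text and scores two documents by the Jaccard overlap of their n-gram sets. This is a much cruder stand-in for the Viral Texts pipeline, which aligns and clusters matching passages rather than comparing whole documents.

```python
def char_ngrams(text, n=5):
    """Overlapping character n-grams, suitable for unsegmented classical Chinese."""
    chars = [c for c in text if not c.isspace()]
    return ["".join(chars[i:i + n]) for i in range(len(chars) - n + 1)]

def reuse_score(doc_a, doc_b, n=5):
    """Crude reuse signal: Jaccard overlap of the two documents' n-gram sets."""
    a, b = set(char_ngrams(doc_a, n)), set(char_ngrams(doc_b, n))
    return len(a & b) / len(a | b) if a and b else 0.0

print(char_ngrams("天命之謂性率性之謂道脩道之謂教", n=5)[:3])
print(reuse_score("率性之謂道脩道之謂教", "天命之謂性率性之謂道"))
```

As we suspected in discussion, many of these 5-grams straddle word boundaries and are pure noise; the open question is whether genuine reuse still stands out above that noise.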

We also felt that the paper was hesitant about making concrete conclusions and hard statements, and discussed the difference in rhetoric between the sciences and the humanities: science writing is more experimental, more exploratory, and more willing to report failure (although that’s certainly often not the case), whereas humanities papers tend to make big claims, and only after the author is certain their position is solid. These are generalizations, of course, but given that it was a room full of mostly humanities people, the tone stuck out and surprised most of us.

Finally, we talked about the applicability of Cordell et al.’s ideas and techniques to other languages and time periods; for example, Maddie Wilcox brought up the similarities between antebellum America and Republican China in terms of printing instability, the spread of railroads, and the penetration of networks into rural areas. It would be interesting, too, to look at local republishing across genres, rather than geographically spread-out republishing. And what about networks based on who studied abroad at the same time, who graduated together, and literary societies in Republican China? Endless possibilities.

The practicality of such projects, and the ethics of using available texts, also came up. We talked about improving OCR and the questionable legality of scanning an entire microfilm series of colonial newspapers (obtained via ILL) or working from PDFs distributed on CD. It would be great to compare across languages with Japanese, Chinese, and Korean colonial newspapers, for example, but the quality of the OCR is so poor that perhaps this is only a pipe dream. It’s hard to argue for “intellectually viable OCR improvement” – if only we could think of a project and a grant!

Aside: We also covered more ground on the Python tutorial dealing with the Chinese Biographical Database API. It was anticlimactic: Molly found a bug that made the API’s JSON data invalid, and so she was unable to process it. Still, she explained the way JSON data is accessed and what it looks like in a script – if only it worked!
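For reference, here is a minimal Python 3 sketch of the step that failed: fetch a response, try to parse it as JSON, and fall back gracefully when the payload is invalid. The URL is a placeholder, not the actual CBDB endpoint.

```python
import json
from urllib.request import urlopen

url = "https://example.org/api/person?id=1762&o=json"   # placeholder URL

raw = urlopen(url).read().decode("utf-8")
try:
    data = json.loads(raw)
except json.JSONDecodeError as err:
    print("API returned invalid JSON:", err)   # the situation Molly hit
else:
    print(list(data.keys()))   # top-level fields; drill in with normal dict/list indexing
```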

October 29 OPEN LAB

Our OPEN LAB time on October 29 was split into two parts: discussion of an article on authorship attribution, and step one of reading a CSV file, calling a web API, and then rewriting the file in Python. You can find the article here:

 

Ayaka Uesaka and Masakatsu Murakami, “Verifying the authorship of Saikaku Ihara’s work in early modern Japanese literature: a quantitative approach,” Literary & Linguistic Computing, first published online September 29, 2014, doi:10.1093/llc/fqu049 (9 pages).

Our discussion centered first on the article’s assumptions and methodology, and then on authorship attribution and its role in general. One point of contention was that the article attempted to differentiate one epistolary work from the rest of an assumed body of Ihara Saikaku’s works, but did not take into account the major stylistic differences between epistolary writing and typical fiction of the Edo period (1600-1868) in Japan. Thus, the stylistic differences that led the authors to suspect that Saikaku did not write this particular work could simply be due to the difference in genre. We thought that more work on genre could be a productive and interesting direction for this kind of research.

The Python tutorial covered reading in a CSV file using the unicodecsv library, how to import libraries in general, and how to access items in a list. It also demonstrated how to construct a URL and call the Chinese Biographical Database web API through urlopen(). Stay tuned for reading data from the API at the next OPEN LAB.
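A minimal sketch of the same steps in Python 3 (the tutorial itself used the unicodecsv library under Python 2); the file name, column layout, and URL pattern below are placeholders rather than the real CBDB endpoint.

```python
import csv
from urllib.request import urlopen

# Read the CSV into a list of rows (each row is a list of strings).
with open("people.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]
print(header[0])   # accessing items in a list: the first column name

# Construct a URL for each record and call the API.
for row in data:
    person_id = row[0]
    url = "https://example.org/cbdbapi/person.php?id={}&o=json".format(person_id)
    print(person_id, len(urlopen(url).read()), "bytes returned")
```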

October 14 OPEN LAB

Laura Gibson, from the Annenberg School for Communication, led our discussion of “The Battle for ‘Trayvon Martin’: Mapping a Media Controversy Online and Offline.”  The authors collected a range of media mentioning “Trayvon Martin” (and the common misspelling “Treyvon Martin”) from Twitter, blogs, online media outlets, newspapers, and television in order to understand the media ecosystem around the killing. They used MIT’s Media Cloud to produce much of their evidence. We were especially interested in how they created the data set and how other scholars might use their infrastructure; Laura’s research group is currently working with them. We discussed the promising avenues of sharing mined data – in this case, the identification of certain newspaper articles – and the legal complications of actually gathering the text for subsequent research.  We also examined the various methods and tools the researchers used to analyze their evidence.

In the last part of the session, Molly led a fabulous Python tutorial based on Brian Vivier’s presentation the previous week.  We walked through object types, ways of easily creating paths for an API, and frameworks for querying and organizing imported CSVs.
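A hedged sketch of those two pieces of the tutorial: building query strings for an API with urlencode, and organizing an imported CSV into a dictionary keyed by one column. The base URL, file name, and column name are placeholders.

```python
import csv
from urllib.parse import urlencode

# Building an API query path from parameters.
base = "https://example.org/api/search?"
print(base + urlencode({"q": "Trayvon Martin", "format": "json"}))

# Organizing an imported CSV: group rows by a column value.
by_outlet = {}
with open("articles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        by_outlet.setdefault(row["outlet"], []).append(row)

print({outlet: len(rows) for outlet, rows in by_outlet.items()})
```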

Reading:
Erhardt Graeff, Matt Stempeck, and Ethan Zuckerman, “The battle for ‘Trayvon Martin’: Mapping a media controversy online and off–line,” First Monday, vol. 19, no. 2–3, February 2014.
http://firstmonday.org/ojs/index.php/fm/article/view/4947/3821