Long Presentations

An Analysis of Public Space in Istanbul

Labron Palmer

Many remember the protests that took place throughout Turkey in the summer of 2013. While protesters raised many issues of discontent, the catalyst for the demonstrations was the potential demolition of Gezi Park near Istanbul’s popular Taksim Square. Although the focus of the protests shifted away from the initial issue of open space (green, public, etc.), social issues of urban development persist as Istanbul continues to grow. This project investigates concepts related to urban sustainability (social equity) and gentrification by examining socioeconomic factors that may influence the prevalence of open space, and residential access to those spaces, in Istanbul, Turkey. The project will be conducted through a spatial (GIS) and literary analysis.

Beyond Citations: The Historian’s Altmetrics?

Michelle Moravec

While citation analysis of academic journals has become a common method for visualizing scholarly networks in the digital humanities, Beyond Citations asks: who counts if we count only citations? Just as the field of altmetrics attempts to evaluate the influence of new forms of communicating contemporary scholarship, Beyond Citations reassesses, digitally, the influence of activists in the 1970s who may not have produced much in the way of written work but who were important participants in the development of feminist thought. Focusing on sister/outsiders, the countless women who live on in the footnotes of the “academic colony,” Beyond Citations traces other routes of influence, including participation on panels, attendance at conferences, inclusion in anthologies and movement periodicals, entries in bibliographies, acknowledgments, and mentions in the texts themselves. Beyond Citations not only challenges scholarly modes of counting but also highlights the politics of working digitally. Projects that seek to demonstrate the potential of working digitally often rely on readily available digitized sources, but that reliance may erase certain bodies. Going Beyond Citations combines available corpora and metadata with researcher-driven digitization (and all the concomitant challenges) to raise questions about database construction, the training of named-entity recognition (NER) models, and the parameters of network analysis. The resulting visualizations of overlapping worlds of feminist knowledge producers up the body count of the women who get counted as contributors to feminist theory.

Black Liberation 1969: A Case Study in Risky DH and Activist Histories

Nabil Kashyap

Digital Humanities projects are often framed in terms of risk, whether defying disciplinary boundaries or questioning traditional models of scholarship. There are, of course, other flavors of risk that emerge when faculty, students, records, institutions, and living agents of history come into a conversation afforded by digital tools. I would like to discuss the processes and products of an ambitious, interdisciplinary course led last fall entitled Black Liberation 1969, which looked at the Civil Rights Movement through the lens of black student protest movements, specifically the events that occurred at Swarthmore College from 1968 to 1972. In addition to studying that history, the course explicitly aimed to “correct the narrative,” actively negotiating with former and current faculty and alumni for a nuanced understanding of a contentious, sensitive period in Swarthmore’s recent history. In a sense, the course recapitulated the full information lifecycle of doing history, from finding and describing primary documents, to making new records in the form of oral histories, to interpreting and disseminating information. Connecting these disparate facets was a content management system and public interface: at once an example of virtual unification and a pedagogical tool, a publishing platform and a kind of activism. I would like to explore the roles the site played as it both guided and documented the ways in which students and faculty navigated contested institutional and individual memories.

Bridges: A New Data-Analytic Resource for Digital Humanities 

Nick Nystrom, Rick Costa, and Joel Welling

The Pittsburgh Supercomputing Center (PSC), a joint research institute of Carnegie Mellon University and the University of Pittsburgh, recently received a National Science Foundation (NSF) award to create a uniquely capable supercomputer designed to empower new research communities, bring desktop convenience to supercomputing, expand campus access, and help researchers who need to tackle vast data work more intuitively. Called Bridges, the new supercomputer will consist of three tiered, memory-intensive resources to serve a wide variety of scientists, including those new to supercomputing and without specialized programming skills. Bridges will bring the power of high-performance computing and flexible, sophisticated software environments to Big Data, enabling important research projects throughout Pennsylvania and across the country. While meeting with researchers working in various aspects of the digital humanities (for example, individuals studying rhetorical analysis, art history, and medieval manuscripts), we have been pleased to find substantial interest and opportunities. Using Bridges when it comes online, along with the Data Exacell, a research pilot project that is available now and will lead smoothly into Bridges, researchers anticipate breaking barriers in several ways. First, they anticipate scaling analyses that are currently limited by their laptops’ memory, storage, or computing capacity, allowing them to address much larger corpora. Second, they see benefit in having centrally located, well-curated data repositories for cross-document analyses, collaboration, and dissemination. Third, some plan to build distributed systems of data and tools to, for example, incorporate new documents as they become available. This presentation introduces Bridges and the Data Exacell, which are available at no charge for open research.
Through the Data Exacell and other sources, PSC can also provide expertise to help develop effective approaches to projects in the digital humanities and other fields.

Cell Phones, Databases, and the Ends of Cinematic Narrative 

John Hunter and Justin Eyster

Cell/smart phones pose a threefold challenge to mainstream cinema and television: (1) they were integrated into everyday life quickly and nearly universally and are thus unavoidable in any film narrative aspiring to realism (people under thirty spend up to 20% of their waking hours using their phones); (2) their use is hard to represent in a cinematically interesting way (few directors would dare to devote 20% of the screen time of a film or television episode to people using phones); and (3) the near-universal communication and data access that they offer negates many of the commonplace narrative motives in mainstream cinema (e.g. wondering what a faraway character is thinking or where s/he is). How could the representation of this technology in TV/cinema be studied in all of its quantitative immensity? My current DH project involves constructing a relational database of films and TV episodes that includes the closed caption and/or hearing-impaired subtitle files for each item; each reference (direct or indirect) to cell phones (and, indeed, to any other explicitly articulated idea) will thus have a precise time stamp as well as a textual record of the relevant dialogue or direction. With a modified version of Dublin Core as a metadata standard, this database is searchable by genre, year of production, nation of origin, and many other criteria. It enables users to find how cell phones are used in, say, the first ten minutes of two hundred films made in 2002 and two hundred more made in 2012, or whether cell phones appear more often in romantic comedies or action films. Prior work on screenplay analysis has been narrowly targeted to commercial/financial criteria (Jehoshua Eliashberg’s work), to analyzing elements of the scripts as such (Arnav Jhala’s text-mining approach; Murtagh et al.), or has focused on character interactions and their meaning (Hoyt, Ponto, and Roy’s fascinating ScripThreads essay in DHQ 8.4).
My approach would allow for a much greater diversity of scholarly approaches and would solve the problem of using the highly unreliable screenplay texts that are found on Internet sites (as Murtagh and others do).
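A minimal sketch of the kind of query such a database makes possible (the two-table schema, column names, and sample rows below are invented for illustration; the project’s actual metadata model is a modified Dublin Core):

```python
import sqlite3

# Hypothetical, simplified schema: one row per film, one row per caption cue.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE films (id INTEGER PRIMARY KEY, title TEXT, year INTEGER,
                    genre TEXT, nation TEXT);
CREATE TABLE captions (film_id INTEGER REFERENCES films(id),
                       start_seconds REAL, text TEXT);
""")
conn.executemany("INSERT INTO films VALUES (?,?,?,?,?)", [
    (1, "Film A", 2002, "romantic comedy", "US"),
    (2, "Film B", 2002, "action", "US"),
    (3, "Film C", 2012, "romantic comedy", "US"),
])
conn.executemany("INSERT INTO captions VALUES (?,?,?)", [
    (1, 120.0, "Pick up your cell phone!"),
    (2, 950.0, "My phone is dead."),       # outside the first ten minutes
    (3, 45.0, "Text me when you land."),
])

# Films per year with a phone-related caption in the first ten minutes.
rows = conn.execute("""
    SELECT f.year, COUNT(DISTINCT f.id)
    FROM films f JOIN captions c ON c.film_id = f.id
    WHERE c.start_seconds < 600
      AND (c.text LIKE '%cell phone%' OR c.text LIKE '%text me%')
    GROUP BY f.year ORDER BY f.year
""").fetchall()
print(rows)  # [(2002, 1), (2012, 1)]
```

Real queries would simply add the other Dublin Core-derived facets the abstract mentions (genre, nation of origin, and so on) as further WHERE clauses.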

Chasing Krüger’s Dream: Visualizing the Transmission of Medieval Manuscripts Using Galois Lattice Theory 

John Hessler (Withdrawn)

How accurately have culturally fundamental texts from literature, law, science, geography, and philosophy been handed down from ancient Rome and Greece to the present by way of scribal copying in the Middle Ages? This fundamental question of how the various manuscripts of a textual tradition have been transmitted through space and time has concerned scholars since at least the founding of the great Library of Alexandria in the third century BCE. Early medieval scribes recognized that mistakes were made in the process of copying ancient texts, and that these errors became part of the textual tradition, to be passed on through history. They also realized that this process of copying error had a random or chaotic nature, and so they invented the demon Tutivillus, whom they considered the source of these errors. Throughout the Renaissance, scholars like Erasmus battled this demon in their attempts to reconstruct important Latin and Greek manuscripts descended from antiquity. Later, in the eighteenth and nineteenth centuries, scholars like Karl Lachmann and Paul Krüger tried to systematize a method for determining which parts of medieval manuscripts were errors, and which were the real readings descended from the original authors. This paper will highlight a new computational technique based on the algebraic structure of Galois lattices to visualize and analyze the transmission of medieval manuscripts through space and time. The technique allows the visualization of extremely complex manuscript recensions and the analysis of relationships based on groups of common errors, mathematically constructed as partially ordered sets. Using these methods, this paper will trace the medieval recension of, and relationships among, all fifty-two surviving manuscripts of Justinian’s Codex dated before 1200, one of the most complex traditions to survive, and in doing so visually show the detailed temporal and spatial relations between groups of surviving manuscripts.
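The lattice construction itself can be sketched compactly. In formal concept analysis terms, manuscripts are objects and shared copying errors are attributes; each “formal concept” is a closed pair (a set of manuscripts, the errors all of them share), and these pairs, ordered by inclusion, form the Galois lattice. A toy illustration in Python, with invented sigla and errors rather than readings from the Codex tradition:

```python
from itertools import combinations

# Invented incidence data: which manuscripts (objects) exhibit which
# copying errors (attributes). Real data would come from collation.
errors = {
    "P": {"e1", "e2"},
    "V": {"e1", "e2", "e3"},
    "B": {"e2", "e3"},
    "M": {"e3"},
}
all_errors = set().union(*errors.values())

def extent(attrs):
    """All manuscripts exhibiting every error in attrs."""
    return {m for m, es in errors.items() if attrs <= es}

def intent(mss):
    """All errors shared by every manuscript in mss."""
    return set.intersection(*(errors[m] for m in mss)) if mss else set(all_errors)

# Enumerate formal concepts by closing every subset of manuscripts:
# a concept is a pair (A, B) with intent(A) == B and extent(B) == A.
concepts = set()
for r in range(len(errors) + 1):
    for combo in combinations(errors, r):
        b = intent(set(combo))
        a = extent(b)
        concepts.add((frozenset(a), frozenset(b)))

# Concepts ordered by extent size sketch the levels of the lattice.
for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(a), "share", sorted(b))
```

Groups of manuscripts sharing common errors emerge as the nodes of the lattice; in the paper’s terms, these closed groups are the candidate branches of the recension.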

Collecting and analyzing visual data

Cheryl Klimaszewski

This long presentation will describe one approach to the process of collecting visual data as part of ethnographic field research. It will introduce a data collection/analysis workflow centered on digital images that emerged during a visit to a private, local museum in rural Romania. The object of study - the museum tour narrative - was a kind of show-and-tell during which museum proprietors demonstrated or pointed to museum objects in the course of narrative exposition. These punctuated moments took place within the overall visual field of the museum site (on the property also housing the owner’s residence), foregrounding certain objects or scenes as exemplary from among enumerated visual lists. From the research perspective, the “said” accompanied the “seen” in moments of intentionality and self-reflexivity that guided the gaze of the visitor/researcher. These intersections of the verbal and the visual stood out as a kind of punctum (Barthes, 2010), which in this research context came to represent analytic moments worthy of interpretation. These puncta then became the units of analysis that were mapped spatially and temporally, revealing symbolic classifications represented as zones within, and pathways through, the museum site, situating the property as a site of knowledge production. A synthesis of practical considerations will accompany this theoretical approach to visual data analysis. This presentation aims to expand the role that image-making can play in research, moving images beyond acting “simply” as illustrations of or accompaniments to the text.

Computers on Law & Order 

Jeff Thompson

Detailed accounts have been written of social media and cloud computing, but little has been written about the humbler aspects of technological culture. Screensavers, bubblejet printers, computer desks, and other physical technologies are thrown away or updated. This paper examines how we can find anthropological details about our relationship with technology through popular media, specifically the television program Law & Order. In 2012, I was commissioned to create a project recording every computer on the television program Law & Order. After watching all 319 hours of the show and extracting 11,000 screenshots of computers, it is clear that Law & Order forms a unique database of images and speech, one that reflects the fascinations, fears, and biases of its time. Law & Order’s long run and “ripped from the headlines” content make it a useful lens through which to look at a major cultural shift: the rise and eventual ubiquity of computers and networked technologies over a crucial twenty-year period. Using my Computers on Law & Order project as a case study, this paper focuses on how these details can be unearthed from media. Through a series of categorized screenshots and quotations, I examine pathways through the archive of the show: the physical infrastructure of computers, software interfaces, and peripherals. The paper ends with a discussion of how research projects like this one, created as speculative creative research derived from popular culture and whose main archive is posted entirely online, can form another possible trajectory for digital humanities scholarship. The project can be viewed at: http://www.computersonlawandorder.tumblr.com

Constructing an Imperial Lexiscape: Linguistic Layering and New Methods in Text Analysis 

Molly Des Jardin, Katie Rawson, Madeline Wilcox, Timothy Clifford, and Brian Vivier

We propose to talk about the dawn of the Imperial Lexiscape. The Lexiscape developed from WORD LAB, the Penn Libraries text analysis research group, and is a collaboration between librarians, graduate students, and faculty across institutions. The project examines the movement of information within the occupied territories of the Japanese empire in the first half of the twentieth century. Focusing on the print media circulating in Japan, China, Korea, Taiwan, and Manchuria at the time, the Lexiscape pursues questions of information at scale – both in terms of the size of the corpus and the scale of imperial production of information. In this presentation, we will discuss the Lexiscape’s specific challenges, arising from its analysis of East Asian-language materials, which will drive technological development. Much of our corpus has yet to be digitized and will be difficult to OCR accurately. In addition, most existing digital tools focus on processing European-language text and are not suited, in their current forms, to East Asian orthography. Yet the material opens up opportunities as well. The Lexiscape will exploit the analytical possibilities particular to the lexical layering of East Asian languages (Korean, Chinese, and Japanese). This project aims to advance the practices of text analysis and chart the linguistic landscape of occupation. Our presentation will introduce the initial conception of the project in WORD LAB, our developing methodologies, and future directions.

Cultural Production in the Islamic World (600-1900 CE): mining an Ottoman bibliographical collection from the early 19th century

Maxim Romanov

In the course of the last decade, classical Islamic texts (predominantly in Arabic) have become increasingly available in full-text format through a number of online initiatives and projects, such as www.alwaraq.net, shamela.ws, and www.shiaonlinelibrary.com. The overall number of these texts exceeds 6,000 (over 800 million words), and scholars of Islamic history have already realized the value of the newly available digital libraries. However, it is the new methods of text analysis that are truly capable of changing the state of the field, opening research opportunities that were unthinkable a mere decade ago. Over the course of 14 centuries, Islamic authors compiled a great number of multivolume collections that often include tens of thousands of biographical, bibliographical, and historical records. Modern scholars have traditionally used such sources as references, but text mining techniques allow us to study these collections in their entirety, now truly exploring the longue durée of Islamic history and culture. Composed by the Ottoman officer Isma’il Basha al-Baghdadi (d. 1920), “The Gift to the Knowledgeable” (Hadiyyat al-‘arifin) is one such collection. This book lists over 40,000 titles written by over 8,700 authors, who lived across 13 centuries of Islamic history (c. 600-1900 CE). The succinct records of this book, which can hardly be considered an exciting read, provide a wealth of details for time-series and social network analysis, offering a unique record of cultural production in an Islamic world that spanned from Spain in the west to India in the east, and from Yemen in the south to Turkey in the north. Focusing primarily on this particular source, the paper will also offer glimpses into other similar sources, pondering the implications of novel digital methods for the field of Islamic history.

Developing a Collaborative Pedagogy in the Digital Humanities 

Aaron Brenner, Matt Burton, Alison Langmead, and Aisling Quigley

Pedagogy in the domain of the digital humanities must confront several challenges, including definitional issues surrounding DH, striking a balance between theory and praxis, marshalling the necessary social and technical resources, and fostering collaborative, iterative, and interdisciplinary modes of inquiry. At the same time, and particularly in settings where a formal DH curriculum has not yet been established, these challenges are useful provocations to experiment with non-traditional pedagogical approaches. Presenters from the University of Pittsburgh will report on one such approach, which they developed during the spring of this year. The effort represents an intensive collaboration between faculty, graduate students, librarians, and postdoctoral researchers, and makes use of both an existing departmentally based, DH-focused lab in the Department of the History of Art and Architecture and an emergent digital scholarship program within the University Library System. Presenters, representing each of the academic roles involved, will discuss the goals, design, and implementation of the collaborative pedagogy, and will reflect on the strengths and weaknesses of the approach, as well as its applicability elsewhere.

Digital Humanities as Public Humanities: Digital Oral History Collections and Community-Engaged Undergraduate Education 

Charlotte Nunes

This presentation will theorize the implications of digital audiovisual archives for humanities scholarship and pedagogy. The proliferation of digital oral history projects available online means that they comprise an important category of new digital media, but there are few precedents for analyzing their contents and incorporating them into innovative research and teaching. Increasingly, universities and academic libraries support digital initiatives such as the University of Texas at Austin’s Human Rights Documentation Initiative (HRDI), which provides server space for large collections of audiovisual primary source material pertaining to issues of human rights internationally. However, despite the clear potential of the HRDI for compelling multimedia digital scholarship, there remain few models of student or faculty work incorporating the HRDI or comparable audiovisual archives. And while digital publications such as [in]Transition, a collaboration between MediaCommons and Cinema Journal, offer venues for scholarly works that analyze, argue, and present multimedia, multidisciplinary humanist content, pressing theoretical questions accompany such publication opportunities. What are the rhetorical challenges of incorporating audiovisual digital archives into literary analyses? What ethical concerns are raised by integrating ostensibly unmediated narratives (oral histories) with mediated narratives (film and fiction) in digital scholarship? Drawing on case studies from my Spring 2015 undergraduate English class at Southwestern University, “Freedom and Imprisonment in the American Literary Tradition: A Multidisciplinary Approach,” I will examine both the practical and the theoretical challenges posed by incorporating digital oral history collections including the Texas After Violence Project (TAVP), the StoryCorps Slavery By Another Name Oral History Project, and the Rule of Law Oral History Project. 
I will argue that building, analyzing, and enhancing access to digital oral history collections can transform scholarship and pedagogy in the humanities by providing unique community engagement opportunities for the wired undergraduate classroom.

Digital Mosaic, Intersection of Digital Design and the Humanities: Five Professors, Five Software Platforms, Fourteen Weeks 

Madis Pihlak

The Interdisciplinary Digital Studio (iDS) Freshman Seminar/Studio introduces five iDS professors and five software programs. The class is a form of extreme learning of complex software, made possible by high-end Macintosh computers, at the intersection of computing and humanistic learning. The Mac interface is logical and consistent, allowing each student to concentrate on learning the software at hand. The teaching style is learning by doing, and the course is project-based: there are no formal lectures or exams, and the focus is on each student learning as much as they can. Each professor assigns a three-week digital design or interactive design project that introduces their expertise. The topics vary widely, from natural environment creation in Maya to paper prototypes of iOS games for smartphones and iPads. This seminar/studio is an introduction to the wide-ranging possibilities of Penn State’s Interdisciplinary Digital Studio Program. The objective of the iDS program is to immerse students in the possibilities offered by the application of a powerful and flexible digital studio practice and culture to an array of artistic, design, game, and other digital media areas. Each student is encouraged to focus on their particular area of interest within the project structure proposed by each professor. The class is divided into five three-week sections, each led by a different professor. The fall studio is an introduction to each professor’s area of expertise, and the spring studio is a further elaboration of that area; new students can be admitted to the spring class without prerequisites. The studio model is a hybrid of an architecture studio and an art studio, in which the professor interacts one-on-one with the student through desk critiques (or screen critiques) and with instructor-led interim and final presentations.

Documenting the History of Women in Higher Education 

Michael Tedeschi and Monica Mercado

The Women’s Education History Information Portal (a working title) provides access to digital versions of letters, diaries, scrapbooks, and photographs documenting the first generations of women students attending the northeastern colleges once known as the “Seven Sisters,” the women’s college counterparts to the all-male Ivy League schools. These seven institutions educated many of the most privileged, ambitious, socially-conscious, and intellectually-committed women in the country during the nineteenth and early twentieth centuries, and sent their graduates into path-breaking careers in philanthropy, public service, education, and the arts. Through this project, our project team explored the complexities of building a rich collection of materials and objects from these institutions. Our panel will discuss the process of developing a successful grant; the technical considerations of building this type of project; our concerns and solutions regarding metadata importing; and the process of successfully managing a project across a range of geographically diverse groups. We will demonstrate the newly launched project to the group during this session.

Forums for Lost Innocence: Pickup Artists, Masculinity, and Mapping the Online Archives of Self-Transformation 

Anders Wallace

How are the banal interactions of everyday life turned into an embodied archive of self-transformation? My dissertation research in the anthropology PhD program at the CUNY Graduate Center deals with the remediation of masculine gender through rationalized forms of intimate training in seduction skills with women, practiced among men (so-called “pickup artists”) in groups called “seduction communities.” These groups exhibit a large digital footprint, and they raise the question of differentiating digital from real-world identities in the evaluation of expertise and in the derision cast towards so-called “keyboard jockeys.” This presentation will address my analysis of seductive masculine embodiment in digital seduction manuals, autobiographical diaries, and online forums that are published and circulated in the public domain among participants in seduction communities. As a fellow at the New Media Lab (CUNY Graduate Center), I have been text-mining and topic-modeling these textual archives using the MALLET toolkit. Based on the research conducted so far, I have created thematically organized repositories in order to understand the variability among chance, expressive control, and cognitive absorption in men’s understandings of their social interactions with women (and with other men) in their seduction training courses, as they strive to embody the social norms of pickup artists. Modeling topics and semantic co-occurrences in these textual narratives of becoming a pickup artist has allowed me to explore differences in users’ ideas about morality, vulnerability, intimacy, and identity that may contradict, or show ambivalence towards, the ideologies about masculinity, desire, and ethical behavior that seduction communities generally espouse. At the same time, digital data collection processes have revealed unexpected correlations among “missing” data, including relations of care, deference, and dependency among users of seduction communities.
I would like to discuss some methodological challenges and opportunities I have faced in exploring these men’s digital life-worlds.
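For the co-occurrence side of that analysis (MALLET handles the topic modeling proper), the underlying counting is simple enough to sketch in a few lines of Python; the stub “posts” below are invented placeholders, not material from the forums:

```python
from collections import Counter
from itertools import combinations

# Invented stub posts standing in for forum texts.
posts = [
    "confidence routine approach anxiety",
    "approach anxiety inner game confidence",
    "routine opener inner game",
]

# Count how often each pair of terms appears in the same post.
pairs = Counter()
for post in posts:
    terms = sorted(set(post.split()))  # one canonical order per pair
    pairs.update(combinations(terms, 2))

print(pairs[("anxiety", "approach")])  # 2: these terms co-occur in two posts
print(pairs[("game", "inner")])        # 2
```

At scale, counts like these (suitably normalized) feed the comparisons of semantic co-occurrence across the repositories described above.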

Goin’ North: Content Production in the Collaborative Classroom 

Janneken Smucker and Charles Hardy

The latest digital platforms enable university instructors to engage students in humanities content in new ways, allowing students to move from consumers to producers of content. West Chester University graduate students in Janneken Smucker’s seminar in digital history collaborated with colleague Charles Hardy’s undergraduate Honors College students and history majors on a digital project centered on the First Great Migration to Philadelphia, “Goin’ North.” First, students populated an Omeka repository with over 400 primary sources from regional and national collections including Temple University’s Special Collections and Blockson Collection, the Historical Society of Pennsylvania, Hagley Museum and Library, and the Library Company of Philadelphia. The students then created detailed indexes using OHMS (Oral History Metadata Synchronizer, from the University of Kentucky’s Nunn Center for Oral History) of 22 interviews from the 1980s conducted with African Americans who migrated to Philadelphia, animating them with photographs, newspaper articles, and GPS coordinates. Students used these same materials to create biographical sketches of the narrators, then worked in teams to create digital storytelling projects, utilizing a variety of platforms and media. Projects include a multimedia showcase of primary sources from Temple’s Blockson Collection; a HistoryPin tour guiding visitors on a journey based on the migration experiences of two families; a Creatavist story highlighting the differing experiences of Old Philadelphians and southern newcomers; a custom Google map presenting the paths upon which southern itinerant workers embarked; a Zeega presentation sharing the lives of three Black women who labored as domestic workers in White Philadelphians’ homes; and an exploration of the membership of the Citizens Republican Club, Black Philadelphia’s most important civic and social association in the early 1900s. 
In this presentation, professors and students will share our syllabus and resulting digital projects, evaluate the process, and assess the potential of the tools we used for future projects.

Growing and Nurturing Digital Scholarship through Faculty, Student, and Staff Collaboration 

Janine Glathar, with Amanda Wooden, Amy Wolaver, Kevin Gilmore, Barry Long, and Brian Gockley

At Bucknell, we believe that applying digital scholarship methods broadens and deepens the learning experience for faculty and students alike - and greatly facilitates connections between research, coursework, and scholarly engagement that extends beyond Bucknell. But faculty often struggle to find the extra reserve of time, energy, and resources required to invest in anything beyond the traditional norms of teaching and scholarship in their field. Digital scholarship is increasingly becoming an expectation in many fields. Recognizing that faculty need support to stay ahead of this curve, Bucknell created a digital scholarship team whose mission is to partner with faculty on integrating digital scholarship into teaching and learning. Academic staff with expertise in digital methods can play a critical role in advancing digital scholarship on campus - often tipping the balance in whether faculty pursue digital projects. In this paper, we will look at models for faculty/staff partnerships - focusing in particular on partnerships that include undergraduates. These partnerships can be a boon for all involved, but they often take the individual collaborators far outside their comfort zones. Clear lines of communication, a high degree of trust, and a sense of adventure are essential ingredients in creating digital projects that are meaningful, effective, and transformative. To illustrate how these partnerships can work, we’ll deconstruct three projects:

  • Research on environmental activism that involved collecting, coding and analyzing qualitative/quantitative, spatial/non-spatial data using NVivo, SPSS and ArcGIS - and was later converted into a series of labs for a research methods class.
  • Research on healthcare utilization that involved statistical and spatial analysis/visualization - and was later published as web-based comparison maps and used for teaching purposes in a first-year seminar.
  • A new course co-created by faculty from music, engineering and humanities that was taught using a custom-built web-based map as an interactive textbook.

Inhabiting the Digital Umwelt - New Forms of Being and Playing 

Christopher Loughnane

The growth of digital lifeworlds, and the ICT-hosted khôra that provides the dwelling space in which they exist, will continue to have an increasingly profound effect on our modes of being-in-the-world and our evolving methods of humanistic enquiry, challenging previous methodologies through digital innovations, whether new tools, technologies, or techniques. The Estonian-born ethologist Jakob von Uexküll has had a significant impact not only on biological and ethological fields, but also upon continental phenomenological philosophy. This paper will illustrate von Uexküll’s ideas and the generative offshoots of his ethological ontology so as to create a theoretical base upon which to examine not only how we exist in the analogue and emergent digital lifeworlds of contemporary existence, but also how we play with their associated cultural materials, those both born and made digital. It will discuss the fields of cultural ethology, biosemiotics, phenomenology, and second-order cybernetics in order to explore how we interact and co-exist through sign play and informational exchange, and to assess the impact on how we create and curate digital materials. In doing so, it will lay some of the groundwork for more detailed philosophical investigations into the digital environments created by, and residing upon, ICTs and the Internet.

It’s How You Play the Game: Playtesting and Game Design for the History of Medicine 

Lisa Rosner

What happens when we design humanities-based games? Are we gamifying the content or “educationifying” playful behaviors? This paper seeks to provide some answers (and justify the neologism) by describing the playtesting experiences of students at a range of levels with Pox and the City, a game based on the early history of smallpox vaccination. The paper will also discuss the impact of that playtesting on current design choices as we move forward with development of The Pox Hunter, an updated version of the game. In developing The Pox Hunter, a 3D Unity strategy game, my project partners and I are harnessing the technological power of digital games to present a key issue in the history of medicine: the interaction of disease entity, patient, and healer in the introduction of vaccination as a public health technology. This paper will explore how students “played the game,” focusing on their motivations, gaming styles, and choices. It will analyze the way in which design decisions affected both the presentation and reception of educational content. And it will explore age and gender differences in game play, and their implications for future game development in the history of medicine and science. The Pox Hunter is an innovative collaboration of humanities scholars, public science educators, and game specialists, funded through grants from the National Endowment for the Humanities. It draws on the resources of the College of Physicians of Philadelphia’s extensive historical medical collection as well as archival material from other Philadelphia-area institutions.

Keeping the “Humanity” in Digital Humanities Social Media Accounts 

Eric Ames

Everyone knows it’s a good idea to create social media outlets to showcase digital humanities materials, but does the sheer weight of available options competing for users’ attention make it seem impossible to craft an informative account with a unique identity? How can we use computer-driven outreach tools to convey the essential humanness behind the materials we’re promoting? And is it a bad idea to let your Tumblr account have its own sense of humor? This presentation will explore the ways Baylor University’s Digital Projects Group has created several popular and effective social media accounts, including two specialized Twitter accounts (@GWTruettSermons and @BUDailyHistory), a well-trafficked, in-depth blog (http://blogs.baylor.edu/digitalcollections) and a slightly irreverent, increasingly popular (currently at 4,000 followers) Tumblr microblog (http://baylordigitalcollections.tumblr.com/). Eric S. Ames, Curator of Digital Collections, will discuss crafting unique personalities for social media outlets, tailoring digital humanities resources to specific outreach tools and navigating the perilous waters of presenting accurate information in an engaging manner.

Legomenology: Tracing Aristotle’s Thought Process 

Tiffany N. Tsantsoulas, Christopher P. Long, and James O’Sullivan

Aristotle’s method has long been the source of scholarly interest and controversy. Precisely how he finds his way into the philosophical problems with which he engages, and what role common ways of speaking play in orienting his philosophical approach, are two questions that have shaped the debate. The working hypothesis of the present investigation is that Aristotle gains entry to the deepest questions of being and essence that animate his thinking by attending to the common ways people speak. We call this methodological approach ‘legomenology’, because it traces the logic of common ways of speaking to find its way more deeply into the nature of things. To further flesh out the legomenological nature of Aristotle’s method we have adopted a combination of philosophical and computational analyses. Using word frequency analysis techniques, we measure uses of the word ‘legetai’, which means ‘one says’, across Aristotle’s entire corpus, helping us to confirm our hypothesis about his rhetorical and philosophical method. Our frequency analysis is conducted in Python, with relative frequencies used to account for any discrepancies in text length. Initial results invited us to focus more detailed attention on the Metaphysics, where the question of being is most poignantly addressed in legomenological terms. The resulting graphs create a visual map of Aristotle’s ‘legomenology,’ which is an invaluable resource for deeper philosophical analysis. One such graph plots the frequency of the relevant verbs over the course of the Metaphysics. This short presentation will discuss how, by homing in on books 7-9 of the Metaphysics, we expanded the terms for computational analysis of the relevant cognates of legein to form an intricate map of Aristotle’s thought progression across these works. By correlating the instances of legetai, lego, legein, and legomen, we created an underlying structure for our philosophical reading of Aristotle’s methodology across each particular book.
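The relative-frequency computation described above can be sketched in a few lines of Python. The tokenization and the transliterated sample below are illustrative assumptions, not the project’s actual pipeline, which works over the Greek corpus:

```python
from collections import Counter
import re

def relative_frequency(text, targets):
    """Frequency per 1,000 tokens of each target form, normalizing for text length."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter(tokens)
    return {t: 1000 * counts[t] / len(tokens) for t in targets}

# Hypothetical transliterated snippet standing in for a passage of the Metaphysics.
sample = "legetai to on pollachos legetai gar to on kata pollous tropous"
freqs = relative_frequency(sample, ["legetai", "legein"])
```

Normalizing per 1,000 tokens is what allows frequencies to be compared across books of very different lengths.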

Literary Periodization and the (D)evolution of Distinctive Gender Markers 

Sean G. Weidman and James O’Sullivan

A number of studies conclude that the use of function words provides a reliable basis for gender identification in writing (Argamon et al. 2003; Burrows 2004; Schler et al. 2005; Pennebaker 2011). David L. Hoover remarks that aspects of his own oft-cited gendered wordlist - a list of the most distinctive words used in the works of 26 male and female modern poets - are almost stereotypically gendered. Of course, Hoover acknowledges that his is a small study, and that future analyses of the gender-based differences in literary vocabulary would profit from a larger sample as well as an isolation of other potential variables, including historical period, nationality, and geographical location. While we admit that one can never have too robust a dataset for this type of macro-analysis, our project, building on the foundations set down by Hoover, analyzes gender markers within a selection of longer literary works from male and female authors over specified periods. Despite the numerous studies on the correlation of gender and writing style, none has sufficiently addressed, in quantitative terms, the evolution of literary styles over time. As such, we aim to distinguish how gender differences have changed over canonical literary periods. Focusing on distinct, rather than functional, word frequencies, we identify three consecutive literary periods - the Victorian, Modernist, and Contemporary periods - and gather a selection of typical works from the canonical authors of each of these eras to establish a number of chronological, literary snapshots of the evolution of stylistic gender markers in male and female authorship. Hoover draws his dataset from poets, but we elect to use the longer works of the periods’ authors (54 authors for a combined 212 novels and short stories) in an effort to draw from a more robust dataset. Like Hoover, we use the Zeta method for our analysis (Hoover, 2008; Burrows).
Akin to Hoover’s study, when comparing authors from across all periods our results confirm that stylistic, and sometimes gender-stereotypical, trends emerge regardless of period, although the macro-separation may not be as pronounced as Hoover’s findings suggest. Interestingly, the crossover between the genders declines with each period - contemporary authors hence sharing the greatest similarities - which suggests that gender distinctions between authors are decreasing (see Figs. 1-3). Among other applications, such research provides an avenue from which to explore questions concerning the future of computational methods for contemporary authorship attribution or stylistic comparison, as well as a quantitative starting point for theoretical examinations of (d)evolving gender differences.
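The core of the Zeta method can be sketched as follows. This is a minimal rendering of Craig’s formulation of Burrows’ Zeta; the segment sizes, culling rules, and exact variant used in the study may differ:

```python
def zeta_scores(segments_a, segments_b):
    """Craig's Zeta for each word: the proportion of A-segments containing it
    plus the proportion of B-segments lacking it (range 0-2). High scores mark
    words distinctive of corpus A; low scores, words distinctive of corpus B.
    Each argument is a list of token lists, one per equal-sized text segment."""
    sets_a = [set(seg) for seg in segments_a]
    sets_b = [set(seg) for seg in segments_b]
    vocab = set().union(*sets_a, *sets_b)
    return {w: sum(w in s for s in sets_a) / len(sets_a)
               + 1 - sum(w in s for s in sets_b) / len(sets_b)
            for w in vocab}

# Toy segments standing in for text slices from two author groups.
scores = zeta_scores([["sword", "horse"], ["sword", "king"]],
                     [["heart", "letter"], ["heart", "horse"]])
```

Because Zeta scores document presence rather than raw counts, it favors moderately common content words over the function words used in earlier gender studies, which suits the focus on distinct rather than functional vocabulary.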

The Marenzio Online Digital Edition (MODE): Reading and Performing Music in a Web-Based Dynamic Environment 

Mauro Calcagno and Laurent Pugin

Under development by an international team of scholars from the US and Europe, with the support of the Kislak Center and the Price Humanities Lab at Penn, MODE introduces a new model for generating and disseminating modern editions of Western polyphony. By integrating musical philology and digital technology, and following the recommendations of the Music Encoding Initiative, MODE presents an online edition of the secular music by Luca Marenzio, one of the most important composers of the European Renaissance. New software applications for the optical recognition, superimposition, and collation of early music prints (Aruspix), and a new digital interface for the representation of music notation (Verovio), offer users a web-based dynamic environment for the study and performance of musical repertoires.

The Napoleonic Theater Corpus: Towards a Representative Corpus of Nineteenth-Century French

Angus Grieve-Smith

A corpus of texts needs to be based on a representative sample to justify generalizing any findings beyond the individual text, but there are numerous challenges to this (Biber 1993). For historical corpora we can add the challenges of working with archival texts. One way to deal with some of these challenges is to choose a well-defined sampling frame that lends itself to corpus compilation, while allowing the reader to judge its fitness for the task. One promising sampling frame is Wicks’s exhaustive list of all plays that premiered in public in Paris in the nineteenth century, totaling over 30,000. The first volume (Wicks 1950), covering the years 1800-1815, contains over 3100 plays. I have extracted a random sample of thirty-one plays (one percent) from this first volume, and obtained copies of twenty-four of those. We may well expect a corpus based on a random sample to give us a different picture from previous corpora based on judgments of canonical worth, such as FRANTEXT (2014). This can be seen through brief investigations of well-known variables. For example, in the four theatrical texts for this period in FRANTEXT, 49% of negated sentences on average used ne … pas, 21% ne … point, and 30% ne alone. In four plays chosen at random from the Napoleonic corpus, ne … pas accounts for 87% (p = 0.0002), ne … point for 4% (p = 0.0015), and ne alone for 9% (p = 0.00007). Because these are random samples, the use of p values is justified in this instance.
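A comparison of proportions like this can be checked with a standard two-proportion z-test. The sketch below uses only the Python standard library, and the sentence counts fed to it are illustrative placeholders, since the abstract reports percentages rather than raw counts:

```python
import math

def two_proportion_test(k1, n1, k2, n2):
    """Two-sided two-proportion z-test (normal approximation).
    k = negated sentences using a given form, n = all negated sentences."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value: erfc(|z|/sqrt(2)) equals 2*P(Z > |z|) for standard normal Z
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: 49 of 100 FRANTEXT negations vs. 87 of 100
# Napoleonic-corpus negations using ne ... pas.
z, p = two_proportion_test(49, 100, 87, 100)
```

The normal approximation is adequate here because both samples have ample expected counts in each cell; for very small samples an exact test would be preferable.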

The New Schoenberg Database of Manuscripts Project 

Lynn Ransom and Jeff Chiu

With a growing collection of over 220,000 records representing approximately 90,000 manuscripts, the Schoenberg Database of Manuscripts (SDBM) is the largest freely available repository of data on manuscript books produced before 1600. Compiled from data drawn from over 12,000 auction and sales catalogues, inventories, catalogues from institutional and private collections, and other sources that document sales and locations of manuscript books, it serves a wide range of users: an international body of scholars across the humanities, book collectors and booksellers, students at all levels, and citizen scholars interested in discovering and learning about the history of the book before print.

In its current form, the SDBM is a closed system, administered by a few. The proposed project showcase will demonstrate how the New SDBM, funded by a grant from the National Endowment for the Humanities, will open up access to individuals and institutions, giving them the ability to contribute, refine, and collect SDBM data to build a “metacatalogue” for finding and indexing the world’s manuscripts. The New SDBM aims to overcome the limitations of traditional approaches to union catalogues by providing an open platform for sharing data and expanding access. Data collection and refinement will be simple, flexible, inexpensive, and open to as many users and collaborators as possible, from the citizen scholar to the learned professor. The New SDBM aims to become a model for similar projects beyond the scope of pre-1600 manuscript collections, potentially transforming the ways in which historic documents are catalogued and researched on a global scale. To that end, our showcase will situate the growth of the SDBM within the broader Digital Humanities trends towards fostering and exploring the possibilities of aggregate historical data.

On Creating the Digital Joyce Word Dictionary 

Natasha Chenier

James Joyce invented thousands of words, most of which have yet to be thoroughly analyzed, and many of which still elude understanding. By first looking at the insufficient and flawed coverage of Joyce’s neologisms (words of Joyce’s invention) in the Oxford English Dictionary, and concluding that Joyce’s contributions to literature and to language are too vast for adequate inclusion in a historical dictionary, this talk posits that an online and open-access Joyce Word Dictionary is urgently needed. The Internet offers us new ways of thinking about and constructing reference works. Unlike a costly dictionary such as the OED, which assumes an authoritative role in the process of defining words, the online open-access dictionary will enable multiple word meanings to democratically co-exist, meanings that will constantly be changing and developing with the help of Joyce readers, scholars, and lexicographers around the world. As part of this talk I will present a prototype of the proposed dictionary, as well as questions that arise when undertaking such a project, for instance: do we strictly include words Joyce invented, or also words he used with new meanings? How do we go about identifying, defining, and interpreting the meaning of these words? What are the benefits and challenges of creating an open-access dictionary, as opposed to a conventional dictionary? Such questions will be asked in hopes of inspiring a generative discussion about how best to approach the process of understanding Joyce together.

Online Tool to Teach Ancient and Byzantine Greek Handwriting 

Pablo Alvarez

Students of the ancient and medieval world might find it difficult to engage with primary sources due to the lack of consistent opportunities for training in paleography. Indeed, this training is vital for the reading and understanding of documents and manuscripts. While there are some printed and online resources for the study of Latin and vernacular paleography, only a very few manuals are available for the study of the whole development of Greek handwriting from the ancient world to the Byzantine period. Provided with the necessary skills, students would be in a unique position to engage in original research based on the extraordinary collections held at the University of Michigan Library. In fact, the Papyrology Collection and the Special Collections Library are the repositories of the largest collections of Greek papyri and Greek manuscripts in America. We are currently designing an interactive online platform that will facilitate the teaching of Greek paleography, allowing students and researchers across disciplines to engage with the rich collections of Greek papyri and manuscripts held at the University of Michigan Library. Central to the platform is a database of images of ancient papyri and manuscripts, enhanced with digital tools that will help the user read the hand-written text. A detailed description of each document will give additional information about its content and historical background. The user will then be able to transcribe the text side by side with the image of the original document, with the computer showing which readings are correct and which ones are not. It will also be possible to identify all the subtle variations in how a particular character is written on a given page. In the near future, other libraries and museums could also supply images from their own collections of papyri and manuscripts, encouraging collaboration between scholars, curators, and students.
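One way such correctness feedback could work is character-level alignment of the student’s transcription against a reference reading. The sketch below is a hypothetical illustration of this idea, not the platform’s actual implementation:

```python
import difflib

def check_transcription(reference, attempt):
    """Align a student's attempt against the reference reading and mark
    each reference character as matched (True) or not (False)."""
    matcher = difflib.SequenceMatcher(None, reference, attempt)
    verdict = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        for k in range(i1, i2):
            verdict.append((reference[k], op == "equal"))
    return verdict

# A student types a medial sigma where the reference has a final sigma.
result = check_transcription("εν αρχη ην ο λογος", "εν αρχη ην ο λογοσ")
errors = [ch for ch, ok in result if not ok]
```

Per-character verdicts like these could drive the interface highlighting described above, flagging only the disputed letterforms rather than rejecting a whole line.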

The Politics of Text Mining 

Justin Joque

As the use of text mining and related methodologies, such as topic modeling and distant reading, expands in the humanities, we must think critically about the types of knowledge we are creating. Text mining is being both developed and deployed in a variety of disciplines, whose aims, epistemologies and politics differ drastically. In the digital humanities we hope to raise and answer new questions about corpora and texts. At the same time, corporate computer scientists attempt to digest corpora so as to serve individual texts and advertisements to Internet users at the appropriate time. The security and counter-terrorism industries are simultaneously investing in these methods to identify individuals who pose a threat to the state and deal with an unprecedented amount of surveillance data. While the aims differ, all of these uses require both the availability of digital texts and computational methodologies to consume and analyze them. Thus, digital humanists find themselves utilizing methodologies, algorithms and datasets in close proximity to those used by the innovators of digital capitalism and the post-Cold War security state. This paper seeks not to condemn text mining on these grounds, but to raise a number of questions about its function in these varied disciplines. In utilizing text mining and associated technologies as humanists we find ourselves concerned about issues of network analysis, data collection, metadata, disambiguation and other concerns that resonate with those of the security state and capitalist enterprises. While these similarities are not reason to forgo text mining, it must raise questions about our relation to these types of data and methodologies. By placing our use of text mining in this broader context, we as a community can begin to develop critical apparatuses to address its uses and political implications in both the humanities and other fields.

(Re)orienting TEI in Composition 

Kevin Smith

I propose to present an approach to a first-year writing course that centers on XML/TEI as the primary method of composition. First, I argue that this method of composition will defamiliarize students’ notions of composing by changing their primary compositional tool from the word processor to an XML editor. I aim to leverage this defamiliarization to underscore that all texts are mediated and situated in very specific ways. Second, having students apply markup (which is essentially metadata) to their texts will promote a metacognitive awareness of the (often implicit) rhetorical choices that are made during the composition process. Julia Flanders has called this kind of awareness the “productive unease” that results from formalizing models of humanities data. My argument further draws on Wendell Piez’s concept of “exploratory markup,” which Piez uses to ultimately argue that generic markup languages like TEI function as a kind of rhetoric about rhetoric. Though this course is still in the planning stages, I have built a proof-of-concept: an initial foray into creating a customized TEI schema that showcases the apparatus of display. The proof-of-concept is available on my academic blog (kevingeraldsmith.com), along with a more thorough explanation of the theoretical underpinnings and goals of my project. This presentation incorporates scholarship from the Digital Humanities as well as from Composition and Rhetoric to intervene in both fields. For DH, the project highlights the field’s limited engagement with pedagogy, specifically classroom-based research; for Composition and Rhetoric, my project argues that the field would do well to more thoroughly examine the ways that DH methods can bolster our goals for student writing. With the forthcoming publication of Rhetoric and the Digital Humanities, edited by Jim Ridolfo and William Hart-Davidson, this interdisciplinary conversation is increasingly pertinent.

Scholarship as Public Practice: Cultivating a Digital Scholarly Community 

Chris Long and Mark Fisher

The Public Philosophy Journal (PPJ) is a born-digital ecosystem of accessible and rigorous scholarly discourse on issues of public concern that attempts to practice public philosophy by engaging in public practices of scholarship and publication. One of the main challenges of creating an online scholarly community and publishing platform like the PPJ lies in cultivating and maintaining the necessary conditions for public collaboration and constructive reviewing to take place between academics, activists, public policy makers and citizens more broadly. In this long presentation, we’ll discuss the progress that we’ve made in year 2 of the development of the PPJ. Specifically, we’ll look at design features of the platform that foster collaborative writing and the results of the face-to-face writers workshop we are holding in June 2015 in San Francisco. The PPJ is also leveraging new DH work in rhetorical analysis and computer-aided peer-to-peer review that will enable us to combine ‘what computers do best’ and what humanists do better in facilitating, indexing, and rewarding productive scholarly interactions within the PPJ network. We will discuss in particular the creation of the Collegiality Index designed to establish the conditions for excellent developmental peer review. Participants will be invited to explore the PPJ platform and to help the PPJ team further develop the policies, processes, and design features capable of creating the conditions for public engagement and scholarship to which the PPJ aspires.

The shawu150 Project: Viewing DH from an HBCU 

Desiree Dighton

While some digital humanists are beginning to explore issues of power in the technologies being used, it’s important not to stall the momentum of cultural inquiry on the genealogies of these tools. Focusing only on the cultural history behind tool creation fails to adequately address issues of accessibility and inclusiveness. To actively engage in the cultural critique of DH means exploring the implications and feasibility of digital humanities methodologies in non-dominant environments and by a more inclusive spectrum of participants. Shaw University is the South’s oldest Historically Black College and celebrates its 150th anniversary in 2015. Shaw was founded the year after the end of the Civil War; nearly a hundred years later, its campus was the site of the founding of the Student Nonviolent Coordinating Committee (SNCC), which led sit-ins across the South and was influential in the Voting Rights Act and other key moments of the civil rights era. Presently, many students are unaware of and disconnected from this history and the potential they represent in its legacy. The shawu150 project is about using technology to overcome barriers and provide students and alumni with a platform to connect and to tell the stories of where they came from and where they are now. The project began with several sections of the first-year, one-semester basic writing course and will go campus-wide and public this year as part of the university’s sesquicentennial celebration. Utilizing mobile devices, Instagram, and an open source tool for harvesting and creating a web presence around tagged Instagram photos - relatively accessible technologies - students and alumni were able to represent themselves and share their images with other students, the university community, their communities of origin, and the public.
Within the theoretical framework of Paulo Freire, shawu150 was a way for students to act together upon their environment to learn about their social reality and transform that reality through their own production. As a result, shawu150 succeeded in collecting an array of visual images charged with narratives of Shaw undergrads’ social realities, exposing larger questions of social divide that have plagued our country’s past and continue to flare in the face of its future.

Software Studies Initiative’s On Broadway Project: Data as Art 

Emilee Mathews and Sylvia Page

This presentation focuses on Dr. Lev Manovich’s current initiative, the Software Studies Initiative Lab and its suite of projects, particularly On Broadway. Equally at home in digital humanities, new media theory, and informatics, Manovich’s scholarly enterprise successfully unites disciplines that may not identify with these field-specific, at times politically charged monikers. But what changes when the projects are not considered as scholarship, but as works of art themselves?

To begin to answer this question, the authors will look at the Software Studies Lab oeuvre through the lens of art theory and practice. Manovich’s team curates political and cultural topics, creates tools for their analysis, and ultimately leaves use and interpretation of the data to the viewer. By contextualizing Manovich’s work in the discourse of contemporary art practice, we will interrogate its aesthetic, phenomenological, and experiential qualities. We will further consider it in the increasingly popular discourse surrounding artistic research - a trend whose rise has many parallels with that of digital humanities.

Our presentation will examine the following questions: what happens when the analysis of cultural production is as beautiful as the subject being analyzed? To extend the long debated issue in the digital humanities, must one build to truly “do” DH - and further, does this creative aspect transform traditional humanist methodologies? Through engaging these disparate fields, the authors aim to provide a platform for transdisciplinary discourse that will amplify the evolving potential of digital humanities.

“Still Looking for You”: Place-Based Community Engagement

Julia Maserjian and Annie Johnson

In the spring of 2013, Lehigh University’s 2011-2013 Council on Library and Information Resources (CLIR) Fellow led a team to conceive of and implement an interactive, place-based web site, “Still Looking for You: A Bethlehem Place + Memory Project.” Working with a team of digital experts, Lehigh launched a prototype of the site in the fall of 2013. Since then, three graduate and undergraduate courses have used the site as their class capstone project. Beyond the campus community, we are reaching out to publicize and capitalize on the experience of Bethlehem residents and visitors. In this presentation we will discuss bridging gaps between student scholarship and interactive public humanities, the challenges of developing a site that appeals to a broad audience, and the lessons we learned from contributors. We are hoping that at least a few of the presentation attendees will be willing to post their memories of Bethlehem during the presentation discussion so we can show a live contribution to the site.

A Survey of DH Curricula at the Present Time 

Chris Alen Sula, Phillip Cunningham and Sarah Hackney

In the past few years, nearly two dozen degree and certificate programs have been developed in the digital humanities, with more announced each year. Existing studies have examined course syllabi and assignments (Terras 2006, Spiro 2011) and the development of specific programs (McCarty 2012, Sinclair and Gouglas 2002, Smith 2014). In addition, there are critical discussions in the field as to the role of common standards in digital humanities (Spiro 2012), the proper balance of skills and critical reflection (Clement 2012, Mahony and Pierazzo 2012), and relationships to the work force. To date, however, there are no systematic surveys of existing degree and certificate programs. Such studies would contrast with earlier work, which often seeks the invisible structure of digital humanities as revealed through analysis of its disjoint parts, and offer an empirical perspective on debates about DH pedagogy. This presentation analyzes and visualizes the location, structure, pedagogy, and other features of formal DH programs, with particular emphasis on questions of disciplinarity, methods, courses, and skills. To reflect the broad and varied nature of digital humanities, this study uses the crowdsourced TaDiRAH (Taxonomy of Digital Research Activities in the Humanities) to code the activities, objects, and techniques referenced in the curricula and to present aggregate results about common goals and competencies of DH programs. These results are compared to text analysis results from program descriptions and documents, as well as previous studies of DH courses. This latter analysis explores the differences between “one-off” DH classes and sustained study of the field across successive, intentionally grouped courses. The presentation concludes with critical reflections on these DH programs in light of pedagogical concerns expressed in the literature.
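The coding-and-aggregation step can be sketched as below. The program names and term assignments are hypothetical placeholders, though the labels are drawn from TaDiRAH’s top-level research goals:

```python
from collections import Counter

# Hypothetical curricula, each hand-coded with TaDiRAH activity terms.
coded_programs = {
    "Program A": ["Capture", "Enrichment", "Analysis", "Analysis"],
    "Program B": ["Analysis", "Interpretation", "Dissemination"],
    "Program C": ["Analysis", "Storage", "Dissemination"],
}

def aggregate_activities(coded):
    """Count how many programs reference each activity at least once."""
    tally = Counter()
    for terms in coded.values():
        tally.update(set(terms))  # de-duplicate terms within a single program
    return tally

tally = aggregate_activities(coded_programs)
```

Counting each activity once per program, rather than once per mention, keeps the aggregate comparable across curricula of different lengths.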

Tablets, Mobile Apps, and First Year Experience 

Mary Paul

The First Year Experience (FYE) program is a learning community of full-time freshmen students who are the first in their family to attend college and who need both math and English remediation. The improved learning outcomes and retention rates of FYE students at Fresno State have been well documented since the program’s inception in 2009. By including these FYE students in a tablet-based program which was launched in Fall 2014, our institution has afforded these at-risk students another safety net for retention and success. The use of mobile applications in the classroom has resulted in a more engaged student-to-instructor culture. By utilizing specific mobile applications, instructors are able to model and offer feedback in real time; a passive lecture environment becomes an active opportunity for student-centered productivity. Providing a digital device coupled with a data package creates a segue into more affordable course tools such as free electronic textbooks while allowing all students a voice in the digital classroom. Of particular interest to the composition classroom is the use of mobile apps such as Google Classroom, Google Docs and Explain Everything to facilitate learning as well as manage instructional feedback. Analyzing and reviewing student documents within an audio-recorded screen-capture video during class sessions allows for further explanation and understanding of instructor guidance for all students. The video becomes an available tool for students to review. The intuitive organization of documents using Google Classroom lends itself to a very streamlined and efficient mode of collecting, commenting on and grading student papers, thus advancing a more manageable instructional workload. The use of mobile learning technology levels the academic playing field.
While the “flipped” classroom and digital classroom pedagogy have been introduced and utilized for years, the complete immersion of tablet-based instruction by both educator and student provides an extremely powerful learning environment.

That’s like 100 in Internet Years: Lessons Learned from 10 Years of PhillyHistory.org 

Deborah Boyer

In 2005, the City of Philadelphia Department of Records launched the Photo Archives Website to support public viewing and geographic search of over 1,000 historic photographs taken throughout Philadelphia. The following year, that initial site transformed into PhillyHistory.org (www.phillyhistory.org), and the Department of Records embarked on an extensive project to digitize tens of thousands of additional images and maps from the collection of the Philadelphia City Archives. Ten years later, PhillyHistory.org features nearly 130,000 historic photographs and maps from five organizations, includes almost 5,000 data corrections submitted by members of the public, maintains a regularly updated blog with over eight years of entries, and has received several large grants. We’ve grown, weathered some storms, and adjusted to a whole new world of digital humanities work. What started as a small digital project has developed into an adaptable, collaborative initiative that takes advantage of new technologies, funding opportunities, and methods of public engagement. Maintaining a digital project for a decade is not easy, but PhillyHistory.org demonstrates the potential for digital humanities projects to have a continued impact. In this presentation, we’ll review the lessons we’ve learned throughout the years and how we’ve used grant funding, social media, additional technical features, continued digitization, and partnerships to adapt and maintain relevance. We’ll look at what’s worked and what hasn’t and outline how small and large scale digital humanities projects can plan for a lengthy and rewarding future.

Towards a Taxonomy of DH Project Genres, Applied in Particular to Early Modern Studies 

John Theibault

As the number of digital humanities practitioners and projects has expanded, it has become increasingly difficult to maintain an overview of what kind of work is being done within broad chronological and geographical frameworks. One effort to bring order to the abundance of work has been to create community-of-interest sites based around broad chronological categories, beginning with NINES, moving on to 18th-Connect, and most recently expanding to MESA, all of which have been brought together under the umbrella of the Advanced Research Consortium. These communities combine several tasks of interest to scholars. By federating sites and building an integrated search function, they enable rapid access to several hundred thousand digital objects from sites throughout the world. By building a community of scholars and researchers, they make it possible to vet digital work to ensure that it meets certain standards of scholarship. This latter aspect of the projects is important for establishing standards of peer review in the digital humanities. Probably because of the nature of the search function and the interests of the founding scholars of these sites, the bulk of the sites federated under these banners would be considered “digital archives” in one form or another: either collections from a single institution, such as the Walters Art Gallery, or repositories of a single author or movement, such as the Walt Whitman Archive. Projects that are primarily interpretive, rather than editorial, make up a much smaller proportion of sites included in the ARC groups. Up to now, there has been a notable chronological gap in the MESA, 18th-Connect, NINES sequence: the early modern era from ca. 1400-1700. That gap should shortly be bridged by the Renaissance Knowledge Network (REKn) currently being developed by Ray Siemens at the University of Victoria. 
Supplemented with the already existing ITER community at the University of Toronto, REKn has an opportunity to incorporate standards of peer review for both critical editions and interpretive projects in the digital realm and to explore ways that the integrated search function already in the Collex platform could be used for each. The purpose of this paper is to explore the range of digital projects concerning early modern Europe in particular (though the REKn project need not necessarily be confined to western works) and to develop a system for grouping them according to shared characteristics to see what challenges different genres of digital projects pose for peer evaluation and integrated search. I expect that the taxonomic categories of genres developed will also apply to other time periods and cultural traditions.

The Trees, the Forest, and the Passion for Prints: Network of Dutch Print Production 1500-1750 

Matthew Lincoln

The wealth of Dutch and Flemish engravings and etchings that survive from the sixteenth and seventeenth centuries presents both a blessing and a curse for art historians. On the one hand, this abundance has fueled vital connoisseurial research that has given us crucial insight into one of the prime conduits for images across Europe in this period. However, case studies alone are insufficient for understanding how changing patterns of print production may have affected trends in subject matter, style, and even the artistic and intellectual status of prints. The sheer quantity and variety of prints from this period, and the number of individual printmakers and publishers involved, challenge traditional models of art historical argumentation. But what has presented an obstacle to traditional art historical narrative-writing is a boon for quantitative, computer-assisted research. Drawing on a database of more than 50,000 prints held by the British Museum, I analyze this dynamic network of interactions between designers, plate cutters, and publishers in order to clarify the sweeping changes in the way that printmaking partnerships were formed in the Netherlands between 1500 and 1750. Returning to the scale of the individual, I then examine how certain engravers’ positions within a larger network may have constrained or encouraged their decisions about whether to work independently or with a specific publisher, what subjects to specialize in, and what market to sell to.
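The kind of collaboration network described here can be illustrated with a minimal, purely invented example; the engraver-publisher pairings below are hypothetical and are not drawn from the British Museum data. Counting each figure's distinct partners gives a crude measure of centrality in the production network:

```python
from collections import defaultdict

# Toy collaboration edges: (engraver, publisher). Invented for illustration.
collaborations = [
    ("Goltzius", "Visscher"),
    ("Matham", "Visscher"),
    ("Saenredam", "Visscher"),
    ("Matham", "Hondius"),
]

# Build an undirected bipartite graph: each node's set of partners.
partners = defaultdict(set)
for engraver, publisher in collaborations:
    partners[engraver].add(publisher)
    partners[publisher].add(engraver)

# Degree (number of distinct partners) as a simple centrality measure.
degree = {node: len(ps) for node, ps in partners.items()}
print(max(degree, key=degree.get))  # the best-connected node in the toy data
```

A real analysis would of course use richer measures than degree, but the same graph structure underlies them.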

Using Juxta in Translation Studies: or, Paleography and the digital versioning machine 

Katherine Faull

The use of Juxta (http://juxtacommons.org/) for the preparation of comparisons and collations of scholarly editions is well established (see the “Using Juxta Editions to Create Digital Scholarly Editions” exhibit at the 2015 MLA convention). However, as this paper will show, critical Translation Studies can also leverage Juxta’s powerful software in the comparison of retranslations of a source text (for example, analyzing multiple retranslations of Homer). As Lawrence Venuti has shown, the study of retranslations reveals important cultural-political agendas within societal institutions of education and politics. The paper will also demonstrate how Juxta Commons can help in teaching Translation Studies. In my advanced seminar in Translation Studies this spring, the digital versioning machine demonstrates Antoine Berman’s twelve deforming tendencies of translation; using the side-by-side view of multiple versions of the same target text, student translators can see how word and style choices differ between various translators. The paper will furthermore show how I have used Juxta in my own work as both editor and translator of 18th-century German manuscript materials. Working with papers that pertain to the history of the Moravian church, I have used Juxta to analyze extant English-language documents alongside my translations, both to aid comprehension (where the text was corrupt) and to identify significant editorial variations between the two documents. While editing and translating the strategically important Moravian Mission Diaries from Shamokin, PA (now Sunbury), which date from the mid-18th century, prior to the French and Indian War, Juxta has allowed me as a paleographer to follow a philological trail of clues, misspellings, and mistranslations and to enter authoritatively into the debate on whether bison once roamed in the Buffalo Valley, the location of Bucknell University.
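The side-by-side comparison a versioning machine performs can be approximated at the word level with Python's standard difflib module; the two renderings below are invented stand-ins for retranslations of a Homeric line, not texts from the project:

```python
import difflib

# Two invented renderings of the same source line, compared word by word.
translation_a = "Sing goddess the wrath of Achilles".split()
translation_b = "Sing O goddess the anger of Achilles".split()

matcher = difflib.SequenceMatcher(a=translation_a, b=translation_b)
for op, a0, a1, b0, b1 in matcher.get_opcodes():
    if op != "equal":
        # Show only the divergences between the two translators.
        print(op, translation_a[a0:a1], "->", translation_b[b0:b1])
```

Here the output surfaces exactly the kinds of lexical choices (wrath vs. anger) that Berman's deforming tendencies ask students to notice.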

Visual Exploration of Medieval Textual Histories: the Case of the French of Italy 

Laura Morreale, Abigail Sargent, David Wrisley

Scholars interested in the history of French-language writing in thirteenth- and fourteenth-century Italy have previously approached the topic in one of two ways: either by examining one specific textual tradition, or by tracing French-language production within one Italian region, most often Venice. Neither “close” approach allowed for an understanding of how French was used at different times and places on the peninsula, or of the possible connections between various locales of production. This paper discusses how a spatial humanities approach attempts to close those gaps, providing new ways of seeing this complex literary phenomenon. Members of the French of Italy project, based at Fordham University’s Center for Medieval Studies, have built a digital object using the Omeka/Neatline platform which incorporates textual, geographic, and temporal data about French-language writing in Italy. The French of Italy Time Map aims to provide a geotemporal visualization of this textual phenomenon. By inviting users to interact with the data about medieval literary production across Italy in a form of map-reading (Tally), it reveals a larger picture of this moment in literary history than has previously been afforded by national philological traditions. The intention of this long paper, therefore, is twofold: first, to present some content-specific observations about multi-variant visual representations of textual production applicable to other abstract literary models at scale (Moretti), and second, to discuss the benefits and drawbacks of map/timeline curation as graphical representations of interpretation (Drucker). Its significance to the larger DH community stems from overlapping interests in literary history, visualization, and the Omeka/Neatline platform increasingly used within the scholarly community. The paper will be co-written, in the spirit of collaboration at the Center, by a student, a staff member, and a faculty member.

Whither community? Data curation and public digital humanities 

Lydia Zvyagintseva

This presentation responds to the call for a renewed interest in the notion of public scholarship by examining data curation practices in the digital humanities. Currently, many research institutions are involved in digital humanities research, including the production of digital editions of manuscripts, text-mining tools, database-driven interactive websites, gaming and virtual environments, mobile applications, and other textual, graphic and multimedia digital objects. Data curation is the process that addresses the challenge of managing the data produced as a result of this research through planning, selection, preservation, description, editing, sharing, and reuse of this data over time. Yet how can data curation support the goals of public humanities to foster creation and sharing of knowledge beyond the academy? Drawing on theories of community-engaged scholarship, participatory culture and civic participation, this presentation evaluates several contemporary digital humanities projects and proposes a conceptual model for data curation work in the digital humanities based on criteria developed as a result of an exploratory, grounded theory study. Ultimately, it argues for a new phase in the consciousness of the digital humanities: that of critical awareness of issues around impact and community engagement in digital scholarly practice. Data curation can play a significant role in the area of public digital humanities through its focus on preservation, access and sharing of knowledge, and by embracing the theory and practice of community-engaged digital scholarship.

Who reads the EULA? 

Aaron Mauro

The relatively new genre of legalese called the EULA, or End-User License Agreement, is as ubiquitous as it is unread. While the EULA for iTunes is a relatively small 19,080 words, the EULA for PayPal is currently 27,435 words in length. New online services are made available by large and small corporations every day, and nearly all of them require you to “agree to the terms of service” by clicking a button or checking a box. Many users, myself included, tend to assume that we are forgoing some portion of our online personal privacy in exchange for a service. In many cases, this a priori assumption is at best an unnecessary risk and at worst a subtle attack on our personal security and privacy. This latest incarnation of Franco Moretti’s “great unread” represents, however, an opportunity to offer a text analysis tool using many of the techniques and methods of the digital humanities. This long presentation will share some of the findings of the initial research into the content of EULAs. It will describe some of the techniques used - from topic modelling and tailor-made Python scripts to simple Unix-based programs like grep and awk - to quickly parse and identify suspicious content. The remainder of the presentation will describe future plans for this research and share mockups of potential avenues of deployment, including a simple website and web app or an integrated app on iOS and Android. An important aspect of this tool will be to blend ease of use with clear insights into often intentionally opaque and vague language.
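As a rough illustration of the grep-style parsing described, the following Python sketch flags sentences matching suspect patterns. The keyword list and sample text are invented for illustration and are not drawn from any actual EULA or from the project's own scripts:

```python
import re

# Hypothetical patterns a reader might want flagged; illustrative only.
SUSPECT_PATTERNS = [
    r"perpetual licen[sc]e",
    r"third[- ]party",
    r"arbitration",
    r"collect.{0,40}personal (?:data|information)",
]

def flag_clauses(text):
    """Return (pattern, sentence) pairs for sentences matching any pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    flagged = []
    for sentence in sentences:
        for pattern in SUSPECT_PATTERNS:
            if re.search(pattern, sentence, re.IGNORECASE):
                flagged.append((pattern, sentence.strip()))
                break  # one flag per sentence is enough
    return flagged

sample = ("You grant us a perpetual license to your content. "
          "We may collect some personal data to improve the service. "
          "The software is provided as is.")
for pattern, sentence in flag_clauses(sample):
    print(f"[{pattern}] {sentence}")
```

The same loop could be pointed at a scraped EULA file; topic modeling would then cluster the flagged language across many agreements.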

Panels and Workshops

Copyrights for Digital Humanities: A Primer 

Delphine Khanna, Brian W Boling, Matt Shoemaker

This workshop will provide a practical introduction to understanding copyright issues frequently encountered in digital humanities projects, when attempting to use primary sources created by others, like published materials, manuscripts, photographs, video recordings, social media content, etc. Participants will learn to navigate the Intellectual Property (IP) maze, and become familiar with concepts such as public domain, use-favorable licenses, fair use, and permissions. They will explore the societal context of IP, including risk assessment, the variety of stakeholders, cease and desist orders, and the notion of due diligence. The workshop will provide useful tools such as decision trees, and will include several hands-on exercises that allow participants to apply the concepts to real-life scenarios. The projects examined involve digital humanities methodologies and approaches like textual analysis, network analysis, online monographs, web exhibits using archival materials, and film and video analysis. The presenters have experience advising faculty, students and archive patrons on copyright issues, and/or have handled a variety of IP-related situations in the process of developing digital projects. Participants do not need a computer or any other technology for this workshop.

The Ethics of Data Curation: The Quandary of Access vs. Protection 

Diane Jakacki, Katherine Faull, Dot Porter, and James O’Sullivan

This panel aims to address the important issue of how we as Digital Humanities scholars negotiate and present the sensitive data (textual, archival, geospatial) that constitutes the core of our analysis. The public-facing nature of our work reveals significant challenges that have to do increasingly with access and ethics, and in many cases cause us to reassess how we conduct and disseminate our research. The three presenters work with distinctly different types of data: Dot Porter works primarily with data about medieval manuscripts: descriptions as well as digital images. Although in the past several years the number of manuscripts being digitized and the images made available on the web has increased dramatically, that data is not always (indeed, not usually) easy to access, and until very recently licenses have not enabled reuse of the data; James O’Sullivan applies computational approaches to the critical analysis of literary texts. When dealing with modern and contemporary works, for which duplicate data is not made available, peer-validation of any results is often difficult, and thus typically rare. Even before the analysis phase, researchers are faced with the temptation to acquire texts from sources that infringe upon intellectual properties. At present, there is little transparency as to the origin of datasets utilized by scholars in this field; Katie Faull’s work with cultural-historical spatial data in the Susquehanna river watershed has led to a role as mediator between Native American nations and Federal, State and local agencies, in the process raising ethical questions about how data pertaining to sacred sites should be protected while at the same time presented as part of important negotiations about conservation. In addition to presenting case studies, this panel will incorporate an open dialogue among attendees that addresses these issues across a broader array of research.

“Where are we and where are we going?” The Digital Humanities Initiative (DHi) at Hamilton College at Five Years 

Greg Lord, Lisa McFall, Angel David Nieves, and Janet T. Simons

The Digital Humanities Initiative at Hamilton College is five years and three Andrew W. Mellon grants into developing interdisciplinary digital humanities scholarship in the liberal arts. Our goals prioritize 1) including undergraduates in the research process, 2) developing digital infrastructure and processes for collaboration and the preservation of faculty research, and 3) balancing sustainability with innovation in digital research methods. DHi case studies will illustrate the models we have developed to sustain innovation and promote collaboration, and will provide the foundation for our plans for the future. Discussion of the American Prison Writing Archive, Sacred Centers in India, Soweto HGIS, and Dangerous Embodiments case studies will illustrate the range and types of collaborative activities in which we have engaged, the connections between traditional and innovative research in digital humanities, and the potential for digital approaches to elucidate complex issues in the humanities. We will explore exactly how this work is accomplished in collaboration with multiple experts, how and at what level undergraduate students are integrated into the research process, and the opportunities and challenges we see as we evaluate lessons learned over the past five years. Audience participants will be asked to define with us the challenges and opportunities we foresee within the digital humanities community. We will crowdsource ideas for meeting the challenges defined within this session and start the process of connecting people to take advantage of the opportunities.

Short Presentations

Bird’s Eye Review: Remodeling the Book Review 

Ruby Perlmutter

We generally take for granted the function of the book review, using it as a reflection of its subject. However, the genre has the potential to provide much more information and insight than we generally use it for. I believe that by applying digital methodologies to distant read large collections of reviews of one book, we can learn more about our collective reading and understanding. This new information can create a place for cultural criticism and cultural reflection. As records of cultural reception, book reviews can lend themselves to their own form of literary study.

With “The Bird’s Eye Review,” I gather as many professionally published reviews as possible for one book, and use topic modeling to look at the themes discussed. From here, I will do several things. First, the generated topics give some insight into what each reader (or in this case the review-writer) found important from the book in question. Since the collection shares a subject, we can use the reviewed work to decode these topics. The topics here serve as a new place for literary analysis. They can represent the thematic commonalities between reviews - as well as the differences - and indicate the social pertinence of those thematic commonalities. Further, this type of aggregation makes central the separation between the book and its cultural reception.
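A toy stand-in for this aggregation step (a shared-top-terms comparison rather than full topic modeling, with invented review snippets) makes the idea concrete:

```python
import re
from collections import Counter

# Invented review snippets of one hypothetical book; a real pipeline
# would run topic modeling over many professionally published reviews.
reviews = [
    "A haunting meditation on memory, the novel treats memory as landscape.",
    "The book's treatment of memory and loss is its great strength.",
    "Memory, loss, and the landscape of the past dominate.",
]

STOPWORDS = {"a", "an", "the", "of", "and", "is", "its", "on", "as",
             "novel", "book", "book's", "this", "that"}

def top_terms(review, n=5):
    """The n most frequent non-stopword terms in one review."""
    words = [w for w in re.findall(r"[a-z']+", review.lower())
             if w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(n)}

# Terms every reviewer foregrounds hint at the book's perceived core themes.
shared = set.intersection(*(top_terms(r) for r in reviews))
print(sorted(shared))
```

The intersection here plays the role the generated topics play in the full project: it separates what the reviewing culture collectively noticed from what any one reviewer emphasized.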

Finally, this process provides an opportunity for a new genre of review: a sort of “bird’s eye review” or review of reviews. My presentation will explore the possible methods for reading criticism to better understand audience.

Comic Book Ratios in Design 

Christopher Couch and Alexander Ponomareff

Our project concentrates on the production of comic books, primarily in New York, Connecticut, and Massachusetts, from the beginning of the medium in 1933 to 1941, the commencement of U.S. involvement in World War II. This period is characterized by growth in the number of titles and individual printed copies of comic books, growing employment of a variety of specialists in the industry, and the establishment of standardized sizes for the magazines themselves, zinc printing plates, and display racks, as well as a sorting mechanism for the products into emerging and concretizing genres. During this period, the creation and production of comics were concentrated in linked industrial districts centered on Manhattan (encompassing both the creation of intellectual and artistic material and printing), Connecticut’s printing industry, and distribution networks following rail lines and truck routes originating in Massachusetts, New York, and Delaware. The geographical deconcentration of the production of comic book content has been examined by Norcliffe and Rendace (2003), but analysis of the opening phases of the industry’s creation remains to be done beyond anecdotal and documentary histories. Our project proceeds on two levels. Scholarship on the standardized production of comic books has looked at the creation of content in workshops characterized by a Taylorized division of labor, but no analysis has been done, or commentary produced, on the standardization of image areas and the tectonic structure of image components in comic book pages. We argue that this standardization is one element in a series of ratios based in the industrial production of the printing and papermaking industries of the mid-twentieth century, and we are proceeding with an analysis of the structure of paper, cameras, printing plates, and the rest of the technological matrix in which these magazines were produced. 
Our data include the comic books themselves, technical manuals for printing, blueprints of presses, and other sources. The standardized products led to new forms of display, including racks, and we are analyzing this display through close study of vintage photographs of comic book points of sale, which are frequently reproduced as nostalgic items but never analyzed in comparison with other kinds of product sales displays or the locations of such displays. This data is admittedly limited, but it can be paired with study of the ratios of genres produced in different years, data abstracted from sources such as the Overstreet Comic Book Price Guide. Finally, we hope to argue, in Flusserian terms, that these industrially structured products are part of a series of innovative images: nested and concentrated in the pages of the books; expanded and growing more meaningful in the multiples of pages from the presses awaiting folding and cutting, and in ever-larger displays in specialized sales areas; and ultimately turned on the medium itself in the multiple cover displays frequently created for public viewing and reproduction, through newspaper photographs and newsreels, in the hearing rooms of state assemblies and the United States Congress during the McCarthyite suppression of the medium in the 1950s that led to the Comics Code.

Nature in American Realism and Romanticism and the Problem with Genre 

Martin Groff

This project makes use of corpus linguistics and algorithmic search applications - emergent tools within the digital humanities - in conjunction with traditional methods of research, to discover how digital technology can benefit textual and cultural analysis in literary studies. More specifically, my research explores changes in the role and portrayal of nature in American literature, from the sublimity of Romantic texts to what can be described as a “proto-environmentalist” approach in some Realist texts. My traditional research and analysis of Sarah Orne Jewett’s “A White Heron” reveals an interest in ecology and the preservation of nature. Digital technology, specifically the use of an algorithmic search application, allows for a better understanding of the broader implications of this revelation by analyzing the frequencies of “nature-words” and “civilization-words” in a commonly used anthology of Realist literature. Such a comparison demonstrates that, while nature remains a relevant theme in some Realist texts, its relevance is less consistent than the social themes indicated by “civilization-words.” This data helps us to understand the context in which Jewett was writing - but it also reveals a lack of homogeneity in these texts, calling into question the relevance of broad categories such as “Realism.”
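The frequency comparison described can be sketched in a few lines of Python; the word lists below are illustrative assumptions, not the lexicons actually used in the project:

```python
import re

# Illustrative lexicons; the project's actual word lists are not specified.
NATURE_WORDS = {"heron", "woods", "river", "tree", "bird", "forest", "sky"}
CIVILIZATION_WORDS = {"town", "money", "city", "railroad", "factory", "house"}

def category_frequencies(text):
    """Rate of nature- and civilization-words per token in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    total = len(tokens)
    return {
        "nature": sum(t in NATURE_WORDS for t in tokens) / total,
        "civilization": sum(t in CIVILIZATION_WORDS for t in tokens) / total,
    }

passage = ("The heron rose from the woods beside the river, "
           "far from the town and its money.")
print(category_frequencies(passage))
```

Normalizing by token count lets texts of very different lengths in an anthology be compared on the same scale.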

A Novel Approach to Enhancing Access to Visual Materials 

Diane Biunno

Visual materials support research and teaching, but are limited by issues of discoverability. Recently, the archival community has increased discoverability for many types of content, but enhancements are needed to make visual materials more accessible to patrons. Even discovery of digitized graphic items can be difficult due to the lack of standardized description practices and the inherent challenges associated with describing graphics. The Historical Society of Pennsylvania (HSP) is engaged in a project called “Historic Images, New Technologies” (HINT) that is improving practices and tools for describing graphics by highlighting relationships between images and related content and sharing metadata through linked open data (LOD). The goals of the project include helping humanities-based organizations increase discoverability of digitized graphics and promoting linking and sharing of content among institutions and scholars. HINT will upgrade HSP’s existing open-source image viewer by extending its ability to annotate images with TEI (Text Encoding Initiative) and to output data in RDF (Resource Description Framework), while incorporating these features into Collective Access (an open-source digital asset management system). HSP will demonstrate the enhanced viewer in a digital history project featuring a number of TEI-annotated political cartoons. The proposed short presentation focuses on how HSP brings analog resources - published political cartoons - together with technology to create a digital resource that will enhance learning and discovery. The session explores how HSP is collecting, linking, and sharing metadata; examines the challenges associated with the development of digital tools; offers perspective on the infrastructure necessary for such projects; reviews accomplishments and challenges; and offers recommendations to institutions considering such an endeavor. 
This presentation is valuable to those seeking ways of improving access to materials through creation of new digital tools and novel uses of metadata.

Strategic Planning for Digital Scholarship 

Laurie Allen and Mike Zarafonetis

In the summer of 2012, Haverford College Libraries began working through a three-year strategic plan that called for a newly created digital scholarship team to support and partner with digital humanities and scholarship work on campus. As we near the end of our current plan, the Digital Scholarship group has supported over a dozen courses, partnered with faculty on a variety of scholarly projects and grant proposals, and created a wide range of internship opportunities for students. After a successful first three years, we are now tasked with creating the next three-year strategic plan to build on past successes and incorporate lessons learned. As we move forward, the following questions stand out as important ones to address:

  • We primarily employed a “proof of concept” approach over the past few years to match our work to real demand. Now that we’ve learned more about this work, how can we be more systematic in our growth over the next three years while remaining nimble? How accurately can we anticipate the needs of faculty and students, and what can we continue to learn on the fly?
  • How will we assess the success of projects beyond usage statistics and partner satisfaction? What kinds of metrics should we rely on?
  • What processes can we automate, simplify, or simply let go of? Can we move some of the work done by Digital Scholarship librarians out to all subject librarians? Where can we rely on consortial colleagues, and where can they rely on us?

We will present a short presentation reviewing our successes, failures, and lessons learned from our current strategic plan, and exploring ideas for supporting and engaging in digital scholarship for the next three years.

What DPLA can do for PA Digital Humanities: The Digital Public Library of America as a Portal and Platform for Teaching, Learning, and Research 

Kristen Yarmey

Launched in 2013, the Digital Public Library of America (DPLA) is flourishing as a discovery portal, bringing together diverse library collections (currently nearly 8.5 million items from over 1,300 contributing institutions) and significantly enhancing access to America’s cultural heritage. Thanks to its open and robust infrastructure, DPLA also serves as a platform for engaging with these aggregated collections via their associated metadata. As a result, while DPLA is still growing and developing, digital humanities scholars will already find it valuable not only for content search and retrieval but also as an inspiration and testbed for new applications of emerging visualization, contextualization, and other data analysis techniques. This short presentation by a DPLA Community Rep will introduce DPLA as an organization, discussing its mission, status, and future plans. A brief demonstration will highlight DPLA’s current user interface as well as third party applications that leverage DPLA’s excellent API. Finally, examples of DPLA-based digital humanities projects and resources will be shared, with an eye towards fostering teaching, learning, and research related to Pennsylvania history.
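For readers who want to try the API themselves, a minimal sketch of constructing a query against DPLA's public v2 items endpoint follows; the search terms and placeholder key are hypothetical examples, not part of the presentation:

```python
from urllib.parse import urlencode

# DPLA's public item-search endpoint (v2); a free API key is required.
DPLA_ENDPOINT = "https://api.dp.la/v2/items"

def build_dpla_query(keyword, api_key, page_size=10):
    """Construct a keyword-search URL against the DPLA items endpoint."""
    params = {"q": keyword, "page_size": page_size, "api_key": api_key}
    return f"{DPLA_ENDPOINT}?{urlencode(params)}"

# A hypothetical Pennsylvania-history search ("YOUR_KEY" is a placeholder):
url = build_dpla_query("Pennsylvania anthracite", api_key="YOUR_KEY")
print(url)
# Fetching this URL returns JSON whose "docs" array holds item records
# with sourceResource metadata (title, creator, date, subject) and a
# link back to the contributing institution.
```

Because the response is plain JSON, the same records can feed visualization or text-analysis tools directly.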

Project Showcases

Annotags: an Open-Source, Decentralized Textual Annotation Protocol 

Jonathan Reeve

The major currently available textual annotation systems—CommentPress, Annotation Studio, and Annotator.js, to name a few—suffer from centralized approaches to the storage and retrieval of annotations. The disadvantages of these approaches parallel those of the much-discussed “data silos.” Users of those systems who wish to comment on an electronic text must entrust those comments to the website that hosts the annotation software. Should that website go down, or should the user lose his or her institutional subscription, all of that user’s annotations are lost. What is needed is a rhizomatic annotation protocol whose data storage paradigm doesn’t depend on a single institution, and which can be disseminated across the Internet and across well-established social media infrastructures. Annotags is a protocol that has been designed for this purpose. It is a way to encode bibliographic and textual location information in a string of numbers and letters that is human-readable, easily typeable, and short enough for use as a hashtag on microblogging platforms such as Twitter. The scope of the annotation can be easily controlled by adding human-readable location markers. The annotag for Moretti’s Graphs, Maps, and Trees, for instance, is #iMeASm, and the annotag for page 19 of the same book is #iMeASm:p19. The anatomy of an annotag is simple. Figure 1 illustrates another example, the annotag #iXepzv:p54l15. The first letter here, i, denotes the type of bibliographic code that will follow. In this case, it is an ISBN, shortened by encoding it into base-64 with the help of a web application built for this purpose. In other cases, it could be an OCLC control number, Project Gutenberg etext ID, or Google Books ID. The following letters and numbers of the annotag are the bibliographic code, followed by a colon used as a separator. The final characters of the annotag represent the location of the text being annotated; in this case, it is page 54, line 15.
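A minimal parser for the annotag anatomy just described might look like the following sketch; the grammar is inferred from the examples above and is an assumption, not the official Annotags specification:

```python
import re

# Inferred grammar: optional "#", a type letter ("i" for a base-64-encoded
# ISBN), the bibliographic code, and an optional location after ":".
ANNOTAG_RE = re.compile(
    r"^#?(?P<kind>[a-z])(?P<code>[0-9A-Za-z+/]+)(?::(?P<loc>[a-z0-9]+))?$"
)

def parse_annotag(tag):
    """Split an annotag into its type letter, code, and location segments."""
    match = ANNOTAG_RE.match(tag)
    if not match:
        raise ValueError(f"not a well-formed annotag: {tag!r}")
    parts = match.groupdict()
    if parts["loc"]:
        # Split a location like "p54l15" into labelled segments.
        parts["segments"] = dict(re.findall(r"([a-z])(\d+)", parts["loc"]))
    return parts

parsed = parse_annotag("#iXepzv:p54l15")
print(parsed["kind"], parsed["code"], parsed.get("segments"))
```

Because the whole tag is plain text, any client able to search a hashtag can recover the book and passage being annotated, with no central server involved.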

Bliss-Tyler Correspondence 

Lain Wilson, Sara Taylor, and James Carder

Covering fifty years (1902-1952), the Bliss-Tyler Correspondence is a born-digital publication of approximately one thousand transcribed and annotated letters between the founders of Dumbarton Oaks, Mildred Barnes and Robert Woods Bliss, and their friends Royall and Elisina Tyler. Robert Bliss served as a diplomat to Argentina, France, and Sweden, and Royall Tyler served on several economic reconstruction and development commissions. Mildred Bliss and Elisina Tyler were both intimately involved in wartime charities, and their letters reveal extensive networks of cultural figures, including author Edith Wharton. Readers journey with the correspondents in their travels and glimpse the changing political and economic landscape of Europe before, during, and after the two World Wars. They encounter a cast of secondary characters, including artists, authors, curators, and politicians. Finally, the letters allow readers to trace the development of the art collections at Dumbarton Oaks and the founding of the institution in 1940. The project was designed to provide multiple entry points for users. Historical introductions to each section lay out the context for groups of letters. A faceted search allows more targeted investigation; for example, users may search for all letters that mention certain individuals, themes, places, and artworks. The Bliss-Tyler Correspondence is a unique resource for scholars of Dumbarton Oaks’s areas of study (Byzantine, Pre-Columbian, and Garden and Landscape), of twentieth-century institutions, including the League of Nations and the United Nations, and of elite culture and networks. Moreover, the personal vantage on events that the letters provide is invaluable for members of the public interested in life in Europe and the Americas during the rapidly changing first half of the twentieth century. 
This project is Dumbarton Oaks’s first born-digital publication, and in chronicling the institution’s origins, it provides a model for its continuing contribution to the humanities in the future.

Blue Mountain Springs: Tapping a Reservoir of Humanities Data 

Clifford Wulfman and Natalia Ermolaev

This short presentation will provide an overview of Blue Mountain Springs - an application programming interface (API) to the Blue Mountain Project, a digital resource for the study of historical magazines of the literary, visual, and musical arts from the late 1800s through the middle of the twentieth century (http://bluemountain.princeton.edu). Blue Mountain currently provides access to its resources through a conventional digital-library interface, with a searchable/browsable catalog, a full-text index, and a web-browser-based page reader. Information scientists and digital humanities researchers increasingly want to bypass reader-oriented interfaces and access full-text data directly and programmatically for use with their own analytical tools or with web tools like Voyant. Blue Mountain Springs makes Blue Mountain an abundant source of clean data by providing an API to Blue Mountain’s metadata and machine-readable full-text transcriptions. Blue Mountain Springs is designed as a Resource Oriented Architecture in which Blue Mountain’s contents are abstracted as a collection of web-addressable resources that may be accessed, programmatically, in a variety of formats. Services like Blue Mountain Springs erode the distinctions between digital libraries, data sets, and databases. Blue Mountain Springs helps humanities researchers by making it easier for them to get texts into text-analysis tools. By presenting Blue Mountain as a collection of resources, we allow users to construct URLs that point to specific items or data sets of interest, like a particular article or set of articles, a set of contributors, or a set of places mentioned in a set of texts, and that deliver the data in formats that may be piped into their own analytical tools, or into off-the-shelf tools like Mallet, or into tool suites such as Voyant or NLTK.
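
The resource-oriented idea can be sketched as URL construction. The path scheme, parameter names, and identifiers below are illustrative assumptions in the spirit of the description, not the actual routes of the Blue Mountain Springs API:

```python
from urllib.parse import urljoin, urlencode

# Hypothetical base URL for the service; only the hostname comes from the
# abstract, the "/springs/" path and routes below are assumptions.
BASE = "http://bluemountain.princeton.edu/springs/"


def issue_text_url(magazine_id: str, issue_id: str, fmt: str = "txt") -> str:
    """Address one issue's full text as a web resource, in a chosen format
    (e.g. plain text suitable for piping into Voyant, MALLET, or NLTK)."""
    path = f"magazines/{magazine_id}/issues/{issue_id}/text"
    return urljoin(BASE, path) + "?" + urlencode({"format": fmt})


def contributors_url(magazine_id: str) -> str:
    """Address the set of contributors to a magazine as its own resource."""
    return urljoin(BASE, f"magazines/{magazine_id}/contributors")
```

Because each article, contributor set, or place set is just a URL, the same address works in a browser, a script, or an off-the-shelf analysis tool.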

Cinemablography 

Fabrizio Cilento

From an ordinary stack of ungraded film analysis papers was born an idea: what if, instead of just writing about film production theory, students could collaborate to demonstrate what they had learned by turning papers into films? Cinemablography is the Messiah College Communication Department’s experimental answer to this question. Initially comprising student-produced and -directed interviews, retrospectives, travelogues, and examinations, Cinemablography has developed into a semesterly showcase of student work from various disciplines in the Communication Department. It serves as a digital archive of film exploration through collaborative student efforts under the direction of Fabrizio Cilento, beginning with “Mapping the 2000s.” This issue of the site represents an effort to document and critique cinematic tendencies of the millennium thus far - a potentially infinite project, and an important starting point for greater conversation. Projects focused on dissecting the history of the 2000s while highlighting the major innovations in the biographical, aesthetic, technological, and economic aspects of the industry. Examples include Kathryn Bigelow’s iconic filmmaking style and subsequent Academy Award; Banksy’s rogue cultural commentary in Exit Through the Gift Shop; Christopher Nolan’s cinematic reimagining of Batman’s dark themes; Pixar’s complete overhaul of computer-animated storytelling; and Martin Scorsese’s tribute to George Méliès, the film Hugo, which masterfully blended old and new. Due to the labor-intensive nature of this work, students must sacrifice quantity for quality, limiting major site updates to about once per semester. Because film culture changes so rapidly, however, it became apparent that an ongoing, short-term social media and blog component would enhance the overall goals of Cinemablography. Updated six days a week by student writers, the blog exists to foster discussion of current films and topics relevant to film and digital media. 
It reviews trailers and film scores for their effectiveness and merit; highlights the work of particular directors with in-depth examinations of their careers; and selects films made prior to the year 2000 to review for their importance and relevance to the modern film landscape. It also discusses short films, behind-the-scenes footage, and industry news through a developing presence on social media platforms such as Facebook and Twitter. While the blog serves to publicize and maintain the Film Department’s contributions to the online community, students are preparing to launch Cinemablography’s latest innovation, Next: A Visualization of Science Fiction. In contrast to the previous issue’s broad scope, this project will focus on the narrower field of “science fiction versus science fact.” It will highlight the ascendance of the genre into the mainstream; innovations by filmmakers who have redefined genre themes; and the longstanding (if not always recognized) influence of science fiction on virtually all aspects of modern storytelling. Cinemablography is poised to bridge a gap between popular culture and academia, and the students who feed its ever-expanding catalogue hope that their work contributes to a larger conversation.

Crowdsourcing In Theory and Practice: Lessons from The Boston Bombing Digital Archive 

Jim McGrath and Alicia Peaker

In May 2013, the website for Our Marathon: The Boston Bombing Digital Archive went live (www.northeastern.edu/marathon), inviting visitors to share their experiences during the 2013 Boston Marathon bombings and their aftermath. Inspired by earlier digital humanities projects that created publicly accessible archives of cultural memory out of crowdsourced material (The September 11 Digital Archive, The Japanese Disaster Archive, The Hurricane Katrina Digital Memory Bank, among others), Our Marathon’s project members and collaborators frequently told potential contributors uncertain about the value of their own narratives that “No Story Is Too Small” (a message the project eventually printed on its marketing materials). In this project showcase, Our Marathon’s Project Co-Directors will document and assess in detail the material realities, online and offline, of building, promoting, and preserving a digital archive founded on the potential of crowdsourcing. Areas covered will include candid assessments of the benefits and drawbacks of Our Marathon’s decision to build our site with Omeka, considerations of how best to balance a project’s ambitions with considerations of available financing, labor, and institutional support, and a detailed examination of the mechanics of our collaborations with external partners. Specifically, we will take a granular look at the planning involved in Our Marathon’s “Share Your Story” event series, which spanned several months and entailed the participation of local libraries (including the Boston Public Library) and colleges (including MIT) in and around the Boston area. We will discuss experiences with prototyping and conducting trial runs (on events and the site itself), highlight the lessons learned from helping event attendees navigate the design of our site and its contribution plugin, and suggest ways of attending to the perceived needs of the communities involved in crowdsourced efforts. 
We hope to situate our particular initiatives within the context of contemporary efforts in building archives out of crowdsourced digital material. More broadly, we expect that our discussion of the practical considerations involved in the construction and implementation of a digital humanities project of this nature will appeal to audiences interested in questions of project management and public outreach.

Developing the Elements of “Learning As Play”: an Interactive Digital Project 

Sandra Stelts, Linda Friend and Carlos Rosas

This presentation showcases the essential features of a collaboratively developed scholarly website and research corpus, “Learning As Play.” Originating from a faculty member’s research, it illuminates a distinctive three-dimensional publication format in its historical context. In practice, it has also provided curriculum exploration and a real-world opportunity for visual arts students and faculty in an interdisciplinary studio to explore technological solutions that simulate interactive movement on the screen. The project brings together an archive of selected 17th- to 19th-century movable books by and for children with the theme of transformation (metamorphoses), along with an extensive set of complementary materials including partner libraries’ scans, a map view, and searchable bibliography of extant objects (http://sites.psu.edu/play/). The original versions of metamorphic pictures are generally made on a long sheet of paper that is folded twice horizontally so that the top and bottom meet in the middle and create two flaps, and then are folded and cut vertically to create a number of panels. Lifting the top flap and then lowering the bottom flap creates three different scenes for each panel. Relatively few examples have survived in physical form, and the site provides a fresh means of access to both the content and the mechanics of movement while preserving the fragile originals. Developers employed the Unity 3D gaming engine for educational purposes beyond entertainment, simulating movement as though hands are holding the original. The presentation will explore the challenges and rewards involved in creating and sustaining a complex digital project. 
This type of “gaming approach” could be explored for many other complex projects, including paper dolls, interactive games, architectural toys, anatomical drawings, and other disciplinary content, and we are interested in continuing a dialogue with other developers and curators to explore possibilities for re-use of the technology and to stimulate further work in this area.

Digital archive of personal narratives about mental health 

Elisabeth Muldowney

This project includes the creation of a publicly available digital archive of personal narratives about mental health, focusing on eating disorders. The personal narratives will be available in a variety of formats - text, video, audio - that together provide family members insight into the suffering of loved ones. The archive has the potential to help those family members recognize important triggers that may lead to more advanced problems. Mental health challenges like eating disorders occur frequently in highly result-driven communities, including dance, sports, acting, modeling, and, as recent research has shown, many forms of religion (Spangler and Queiroz 84). These environments can be categorized as “result-driven” because they associate success with aesthetics or outwardly measurable achievements (Sherman and Thompson 23). While American popular culture explores how pressure in some of these settings aggravates mental health concerns, many popular narratives portray the behavioral aspects of eating disorders in a manner that is both limiting and destructive. Behaviorally focused narratives can trigger the behaviors they portray (Lavis 12). The digital library will concentrate on the development of mental health problems, their emotional repercussions, and the healing process. The stories in the digital library will include perspectives from individuals who struggle in result-driven communities, particularly exploring the impact of various religious cultures on mental health. The execution of this project is as follows:

  • First phase: conducting interviews
  • Second phase: transforming interviews into brief narratives, using various media and compiling these narratives into a public digital platform
  • Third phase: administering a brief survey of the public in order to ascertain their reaction to the archive

By using narratives that are accessible to people of all backgrounds, the project’s aim is to help family, friends, and the broader community understand the challenges many silently face.

Digital Harrisburg: Mapping the Pennsylvania State Capital in the Early Twentieth Century 

David Pettegrew and Albert Sarvis

The increase in public access to federal census data and digitized historical maps has created an unprecedented ability to visualize the changing social and physical fabric of America’s cities through time. In this presentation, we showcase a new public history project centered on Harrisburg at the turn of the twentieth century - at the moment when a grassroots campaign of beautification transformed a dirty industrial town into a splendid capital city with extensive green spaces, upgraded sewage systems and pavement, and a booming population. To document this change, faculty and students from Messiah College and Harrisburg University of Science and Technology partnered to create a high-resolution map of the urban social and physical environment. History faculty and students keyed the federal census data for 50,167 individuals living in Harrisburg in 1900, while GIS faculty and students digitized the residence polygons of a contemporary atlas. The resulting product - an interactive map of the city in 1900 - encodes demographic data at the level of individual address and allows the visitor to search for relatives. As the Digital Harrisburg working group continues to input the 1910 population data and plans to add data for successive decennial census years, the project has tremendous potential for public engagement. In this project showcase, David Pettegrew and Albert Sarvis will demo the interactive map and outline current activities and plans for extending the project. Our ultimate goal is to integrate historical records, demographics, and geospatial data to illustrate the tremendous social and physical changes in the capital city at an important moment in its past. We hope that the Keystone Digital Humanities Conference will provide a first round of critical feedback on the “unrealized” public potential of this large dataset.

Early Modern Manuscripts Online (EMMO), Folger Shakespeare Library 

Paul Dingman

This showcase for Early Modern Manuscripts Online (EMMO), funded in its initial phase by a three-year grant from IMLS, will consist of a short overview of the project and a demonstration of the Dromio software program along with its external web page. Our approach to online transcription in EMMO pairs high-resolution images of manuscript pages from the Folger collection (written in archaic scripts) with facilitated XML coding. Other transcribing projects have now partnered with EMMO and are using Dromio as well. Transcribed text appears in a familiar word-processing-type editor in the HTML window of Dromio along with buttons that activate certain effects for encoding (e.g., expanded text appears as italic with specific expansions highlighted in color, cancelled text as struck through, etc.). The XML window in Dromio contains encoding tags in angle brackets. After this introduction, the presentation will shift to a “lab” phase in which anyone present who has a laptop, and is willing, can try the interface out by transcribing/encoding part of an early modern manuscript page. Working individually or in teams, we will all concentrate on the same manuscript page yet produce independent transcriptions. Expertise in early modern English paleography is not a requirement (though it is certainly welcome). After five minutes or so, we will stop and demonstrate the collation/comparison features of the program as well as how project paleographers vet multiple transcriptions to arrive at an approved version. Questions are welcome at the end.
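
The kind of encoding produced in Dromio’s XML window can be illustrated with a small fragment. The element names here (`<expan>`, `<ex>`, `<del>`) follow common TEI conventions and are an assumption; EMMO’s exact tagset may differ:

```python
import xml.etree.ElementTree as ET

# A TEI-style fragment of the sort a transcriber might produce: an expanded
# abbreviation ("Mr" -> "Master", with the supplied letters in <ex>) and a
# cancelled (struck-through) word in <del>.
fragment = """<line>Receiued of <expan>M<ex>aste</ex>r</expan> Smith
<del>tenne</del> twenty shillings</line>"""

root = ET.fromstring(fragment)

# Letters supplied by the transcriber when expanding the abbreviation.
supplied = [ex.text for ex in root.iter("ex")]

# The fully expanded word, reassembled from the markup.
expanded = ["".join(expan.itertext()) for expan in root.iter("expan")]

# Text the scribe cancelled (rendered struck through in the editor view).
cancelled = [d.text for d in root.iter("del")]
```

Because each editorial intervention is explicit in the markup, independent transcriptions of the same page can be collated element by element, which is what makes the vetting workflow described above possible.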

The InstaEssay Archive 

Jonathan Fitzgerald

In 2014, a number of writers and photographers began to use Instagram not merely as an application for creating, stylizing, and sharing personal photographs with a particular group of friends and acquaintances, but as a journalistic tool. In particular, writers like Jeff Sharlet and Neil Shea have paired their photos with short narratives, constrained to 2,200 characters by Instagram’s caption limit. The effect is similar to that of “Flash Fiction” - short, impactful, self-contained stories - except that these stories are true and paired with a photograph of their subject. There are a number of benefits to writing for a social medium like Instagram, among them the ease with which writers can create and then instantly share their work with a wide audience. But there are also associated problems, including the scattered and ephemeral nature of the medium. These issues, among others, are what prompted the creation of the InstaEssay Archive - a place to gather and save these pieces so that the individual works can be read both as unique essays and as part of an emerging genre. I propose to present The InstaEssay Archive in a project showcase at the Keystone Digital Humanities conference in July. At present, the archive is online and includes a catalogue of over 200 essays, built using Omeka, along with map and timeline features and a network visualization illustrating the interaction between some of the more prolific InstaEssay authors to date. It would be my pleasure to showcase this living archive at the conference.

Interactive presentation & project demo for Influence Network of Literary Authors 

Andrea Hansen

As an important aspect of the humanities, literature is intrinsically in flux, taking on new values over time. Authors’ influences on each other are an integral part of its evolutionary progression and a key factor in piecing together the defining characteristics of an author’s work. Influence Network of Literary Authors supplements research on the components of an author’s oeuvre by providing access to the continuum of authors’ influences. Additionally, the network can serve as a second-best alternative to receiving recommendations from a favorite author. Influence Network of Literary Authors is an interactive network visualization derived from curated SPARQL queries of the ‘influenced’ and ‘influenced by’ DBpedia properties for authors in various subdivisions. To structure data for the network, I used linked open data queried from DBpedia’s crowdsourced ontology, which is extracted from the content of Wikipedia. Working within the Resource Description Framework, I stored extracted properties in an Excel spreadsheet before running a Python script from the command line. Using Gephi’s visualization and data analysis tools, I generated and modified the graph to meet the represented specifications. Oxford Institute Network Visualization Example is the host for the network. The curation of the network was limited to literary authors; however, in some cases SPARQL queries returned the influence of songwriters, philosophers, and psychologists. The goal of the network curation was to include a representative portion of influential authors, with the purpose of collecting communities based on their influences of and by each other. While the visualization displays color-based modularity partitions representing only 19 communities housing 3677 authors, the graph reinforces the prominence of influential ties within the literary world.
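
A query of the kind described above can be sketched as follows. The properties `dbo:influenced` and `dbo:influencedBy` are real DBpedia ontology terms, but the exact query shape, the `dbo:Writer` class restriction, and the result limit are illustrative assumptions, not the project’s actual curated queries:

```python
from urllib.parse import urlencode

# A SPARQL query asking DBpedia for author-influence edges; each result row
# is one directed edge suitable for a Gephi edge list.
QUERY = """\
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?author ?influenced WHERE {
  ?author a dbo:Writer ;
          dbo:influenced ?influenced .
}
LIMIT 100
"""


def dbpedia_request_url(query: str) -> str:
    """Build a GET URL for DBpedia's public SPARQL endpoint, requesting
    JSON results that can be flattened into a spreadsheet or edge list."""
    params = urlencode({"query": query,
                        "format": "application/sparql-results+json"})
    return "https://dbpedia.org/sparql?" + params
```

Each `(?author, ?influenced)` pair becomes one row of the edge list that Gephi ingests before modularity partitioning assigns the community colors.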

“Lord, Don’t Forget About Me”: Saving America’s Black Gospel Music Heritage with Baylor University’s Black Gospel Music Restoration Project 

Eric Ames

Baylor University’s Black Gospel Music Restoration Project (BGMRP) was born from frustration, found a home in an unlikely place, and has grown organically through the contributions of major donors, private citizens, and fortuitous garage sale finds. From its genesis in a faculty member’s cri de coeur for a vanishing area of American heritage to its slated inclusion in the Smithsonian Institution’s National Museum of African American History and Culture, the BGMRP (www.baylor.edu/library/gospel) is realizing the incredible power of grassroots collectors working in partnership with an academic library to bring digital humanities resources out of the closet - or basement, or attic, or storage building - into the homes of thousands of users around the world every month. Curator of Digital Collections Eric S. Ames will discuss the origins of the project, its rise to national prominence on the strength of compelling stories and favorable media coverage, and the impact of preserving and presenting incredibly rare audio recordings of America’s black gospel heritage via the Baylor University Libraries Digital Collections. He will also explore ways the BGMRP has impacted academic research, student scholarship, and capital improvements since its formal launch in 2006.

Metadata as an Analytical and Writing Tool 

Rachel Kantrowitz

My paper will discuss my use of metadata to analyze my archival sources and structure the written argument of my dissertation. I used DEVONthink Office Pro to create a text-searchable database of my sources. My database allowed me to go beyond the original organization and taxonomy of the archives I visited. This is an important epistemic move for anyone, but especially those of us who study colonial and postcolonial history. Further, organizing my sources in this way facilitated and structured my writing process using Scrivener. Often my metadata tags became the subject of sub-sections of my chapters. My paper will provide a step-by-step explanation of how to create such a database and transition from the database into structuring your writing. I will conclude with a broader reflection on how this type of technology impacts our work as scholars.

Multicultural, Bilingual, and Interactive Arabic and Hebrew Digital Edutainment: A Digital Project at University of Pennsylvania 

Abeer Aloush

Studies have shown that every fraction of a second of gaming requires the player to learn something, whether hand-eye coordination, virtuoso-like skills related to pressing specific keys, or even game-related information. In other words, learning is not a mere side effect of playing videogames. The envisaged project provides a schematic overview of alternatives to edutainment for learning the Arabic and Hebrew alphabets. First, the project will expand on the concept of building a global community through humanizing digital alphabets in these two languages by giving them new depth; second, it will expand the experience of personal growth by requiring conscious reflection on language and pushing the boundaries of gaming. The educational approach will build on a ludic methodology. The learning process will proceed through rote learning, mechanical training, drill-and-practice tasks, and instilling knowledge into the learner’s mind—practices that reveal a particularly evident reference to the core of a behaviorist theory characterized by the “repetition-reward-reinforcement” pattern. For instance, the learner will learn how to read a Judeo-Arabic manuscript through practice, repetition, and reinforcement. Through reiterated routines and practice, learners are eventually conditioned to respond in a particular way to a certain stimulus. To achieve this goal, I will create a set of picture-based adventure games at different levels for educational purposes, such as platformer games, spaceship shooters, space adventure games, and physics games based on point-and-click mechanics and multimedia software. The software will run on Windows, PSP, iPhone, and iPod touch. Other games will be created to test learning, such as interactive crossword puzzles, jigsaw puzzles, word searches, hangman games, and sliding puzzle games. The software will be accessed through the Digital UPenn Libraries and can be embedded in social media.

The Story of the Stuff: Using Digital Humanities & Interactive Storytelling to Explore Memorialization & Grief in the Information Age 

Ashley Maynor

An investigation into America’s obsession with temporary memorials, The Story of the Stuff is an interactive web documentary and digital humanities project that tracks what happens to more than half a million letters, 65,000 teddy bears, and hundreds of thousands of other packages, donations, and condolence items sent to Newtown, Connecticut, in the wake of the Sandy Hook School shooting. In this project exhibition, filmmaker and digital humanities librarian Ashley Maynor will present a guided tour of the project (scheduled to launch in April 2014), which exists at the intersection of art, humanities scholarship, library and information science, and transmedia storytelling. In the wake of September 11th, the Virginia Tech massacre, and many other recent American tragedies, thousands - even millions - of Americans have expressed their grief and sympathy by sending letters, cards, and other condolence items to the affected communities. Why? To what end? And what happens to all of the stuff? This interactive web essay explores this modern phenomenon by tracing the history of so-called “spontaneous shrines” in America and tracking Newtown, Connecticut’s present-day struggle to cope with an unprecedented outpouring of donations and packages following the Sandy Hook school shooting. Filmmaker, librarian, and former Virginia Tech professor Ashley Maynor investigates how we mourn and grieve in a world where these tragedies are experienced both first-hand and, once removed, through minute-by-minute news coverage. The project website takes the form of a scrolling multimedia essay and also includes supplementary educational and case study materials to facilitate larger conversations about how local tragedies have become global ones and how the tidal wave of “grief materials” may pose an added burden for the recipient community. 
As an interdisciplinary digital humanities project, The Story of the Stuff equally explores the mysterious, often consumption-driven ways we express remembrance and grief in an age of instant information.

The Vault at Pfaff’s 

Edward Whitley and Robert Weidman

The Vault at Pfaff’s is an online collection of primary and secondary source documents about the community of bohemian artists, writers, journalists, actors, and critics who joined with Walt Whitman at Charles Pfaff’s beer cellar in midtown Manhattan during the years 1855-65. While the Pfaff’s bohemians have always held a small place in American literary history thanks to their association with Whitman, there are many unanswered questions about the impact that they had–either as individuals or as a group–on the literary, artistic, and theatrical culture of the antebellum United States. The mission of The Vault at Pfaff’s is twofold: (1) to gather and organize documents about the Pfaff’s bohemians–we currently have some 8,000 records in our database–in order to facilitate scholarly inquiry into this moment of American literary history; and (2) to create interactive maps, timelines, and biographical sketches that will introduce undergraduates and non-scholarly readers to the broader issues surrounding the first appearance of cultural bohemianism in the United States. While our first mission is to facilitate research by scholars and graduate students, our second mission is to create something akin to a public humanities gallery exhibit. The majority of the work that we have completed on The Vault at Pfaff’s since 2004 has focused on our first mission; as we began the next phase of work on this project in 2014, our attention turned more toward the second. As such, we are working to construct sections of the site in ways that will make them similar to the gallery exhibits one might associate with public humanities projects, in that their goal is not to provide an exhaustive repository of texts but rather to introduce scholars, students, and interested readers to somewhat broader topics. We have conceived of these sections of the site as having more of a storytelling function than a research function. 
Our project demonstration would introduce participants at the Keystone Digital Humanities Conference to the ways in which we have conceived of the differing storytelling and research functions of the site.

“Why Should I Care?” - Students and Online Primary Sources 

Margaret Graham and Matt Herbison

The Legacy Center Archives at Drexel University College of Medicine will present and invite discussion on our website Doctor or Doctress? (DoctorDoctress.org). The website, launched in Summer 2014, is phase one of a project to build from the ground up a system that integrates (1) an archival digital collections system with (2) an interpretive layer that supports critical and historical thinking skills for students and educators. Among repositories that offer digital collections with pedagogical goals, the educational component is typically developed as a supplemental resource, and the resulting integration of the two is minimal. Doctor or Doctress addresses this disconnect while encouraging students and educators to encounter the little-known histories and unique perspectives of individual women entering the medical profession in the 19th and 20th centuries.

Using the open source Islandora (islandora.ca) collections management and digital preservation framework, Doctor or Doctress provides users with enhanced metadata and surrogates that give novice researchers the content and context they need to piece together historical stories in authentic ways. Supporting materials like “why this matters” metadata, headshots of document creators, document-specific prompts, and document surrogates available as scans, transcriptions, and audio transcriptions (read by high school students) mitigate stumbling blocks to analyzing and evaluating primary sources while not spoon-feeding students.

Stories and features were vetted by gathering student and teacher input throughout the planning and implementation grant process. Since the release of Doctor or Doctress, our monitored testing and in-the-wild feedback collection have shown that users appreciate the support the website provides and are able to analyze sources in justifiable and meaningful ways. Anecdotal feedback suggests that scholars also welcome the increased exposure and interpretation. Although the site remains a work in progress, we were honored to win the 2015 American Library Association’s ABC-CLIO Online History Award.