Digital Humanities NYUAD Year in Review 2017

“What exactly has happened to the study of the humanities in the digital age? To answer this question one need only review the last thirty years and remember how scholarship used to be carried out. In order to find books and articles, we had to look through various catalogs (card, National Union) as well as printed bibliographies. Fledgling institutional digital catalogs existed, but hardly contained everything we needed. Few journals offered digital access to publications. A researcher’s data was often stored on a desktop computer, or even just in paper copy on a shelf. At conferences, we arranged photographic slides in a carousel to project them on the wall.

In today’s connected world a stunning variety of virtual, networked resources are now available to researchers: electronic books and other platforms for document delivery, digitized archival collections, new environments for scholarly communication and web publishing, open data repositories, even cloud and high performance computing. Not all humanists are using these resources, but increasing numbers are, and as a result, our scholarly work is taking on a diversity, and creativity, of new forms. The transition to an era of “software intensive” humanities (it is, after all, a slow change) is bringing about new possibilities for trans-disciplinary scholarship. But what are the implications of more machines in our profession? Are we ready to confront the challenges and the results of such research? How many of us actually understand how to navigate these new data-rich environments to our benefit? …”

The rest of the NYU Abu Dhabi Digital Humanities Year in Review 2017 document can be downloaded here.

Semi-Automated Alignment of Text Versions with iteal

A half-day tutorial at DH2018, CDMX, Mexico, June 2018


Stefan Jänicke  @vizcovery

David Joseph Wrisley  @djwrisley




Our half-day tutorial for DH2018 concerns the semi-automated alignment of different witnesses in complex textual traditions. It includes demonstrations of specific use cases, a discussion of how the implemented system bears on textual problems relevant to the participants, and a hands-on discovery of the system. Alignment is a relatively simple task for modern languages with orthographic stability and relatively similar texts, but it becomes more difficult where textual transmission is unstable, as in oral literatures, popular music or poetry, or other complex texts with partial repetition. Whereas methods for hand-aligning and visualizing texts exist in TEI, we focus on the possibility of computational alignment for the purpose of exploratory textual visualization. Scholars interested in visualizing scaled forms of reading will find this tutorial of interest.

Our visual analytics environment iteal supports the computational alignment of textual similarities and is not English-specific. It was originally implemented using orally inflected medieval French poetic texts (with test cases of the fabliaux and epic) and so is known to work on texts in Latin alphabets with inconsistent orthography.

This half-day tutorial aims to introduce iteal to the DH community, for which questions of multi-text problems, spelling variance and debates about distant forms of reading are currently quite salient. Many language processing and visualization tools do not work well with languages beyond English. Because our environment is known to work with such languages, it will be of interest to those expanding innovative techniques in the textual humanities across the North/South divide. Participants of the tutorial will be led in a step-by-step, hands-on approach through the full cycle of an iteal-based text alignment workflow, and they will finally have the opportunity to test the tool with their own data. Although it has proven effective for text variants of medieval poetry, we will not focus only on this type of text, since iteal can be used to determine alignments among texts of different kinds, in any language and in multiple genres. Currently, iteal works with plain text in UTF-8.


iteal consists of two major modules:

First, it automatically determines line-to-line alignments pairwise between all given text editions based on user-configurable parameters including:

  • Edit distance: Variant spellings are taken into account by this function. We define two words as spelling variants if they have the same first letter, and if the string similarity of the remaining substrings is higher than a user-configurable threshold.
  • Coverage: In order to ensure that a specific proportion of words of both lines are aligned, the user can configure a minimum coverage value of the line.
  • N-grams: The user can configure the minimum required n-gram size n, that is, the largest number of consecutive word matches shared by both lines.
  • Broken n-grams: Quite often, the only difference between two lines is a single word in the middle of a line that is either inserted, synonymous, or a transposed stopword. From this perspective, large n-grams are not achieved. Thus, we allow the user to consider broken n-grams, defined as the total number of word matches between both lines.
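The matching criteria above can be sketched in a few lines of Python. This is our own illustrative reconstruction, not iteal's implementation: the function names, the default thresholds, the use of difflib's ratio as the string-similarity measure, and the sample verse lines are all assumptions (the strict n-gram check is simplified here to its broken variant).

```python
from difflib import SequenceMatcher

def is_spelling_variant(w1, w2, threshold=0.7):
    """Two words count as spelling variants if they share the same
    first letter and the remaining substrings are similar enough."""
    if not w1 or not w2 or w1[0] != w2[0]:
        return False
    return SequenceMatcher(None, w1[1:], w2[1:]).ratio() >= threshold

def line_alignment(line1, line2, threshold=0.7, min_coverage=0.5, min_matches=2):
    """Decide whether two verse lines should be aligned, combining the
    edit-distance, coverage, and broken n-gram criteria described above."""
    words1, words2 = line1.lower().split(), line2.lower().split()
    # Word-level matches: identical words or spelling variants.
    matches = [(i, j) for i, w1 in enumerate(words1)
               for j, w2 in enumerate(words2)
               if w1 == w2 or is_spelling_variant(w1, w2, threshold)]
    if not matches:
        return False
    # Coverage: the proportion of words in each line that take part in a match.
    cov1 = len({i for i, _ in matches}) / len(words1)
    cov2 = len({j for _, j in matches}) / len(words2)
    # Broken n-gram: total number of matching words regardless of position
    # (a strict n-gram check would require consecutive matches instead).
    broken = len({i for i, _ in matches})
    return cov1 >= min_coverage and cov2 >= min_coverage and broken >= min_matches
```

With invented lines such as “qui donne aumosne fait bien” and “qui donne almosne fet bien”, the sketch aligns the pair despite the variant spellings, because enough words on each side find an exact or fuzzy partner.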

Second, for the purpose of analyzing the determined alignments, we provide interactive visualizations for different text hierarchy levels; examples of all three views can be found in Figures 1, 2 and 3, and a teaser outlining a brief workflow with iteal is also available:

  • Distant Reading: In order to get a rough overview of alignment patterns throughout the observed text versions, we draw a miniature representation for each version in the form of a vertical bar reflecting its number of verse lines in contrast to the other shown versions. For us, this is the most distant form of reading, where the text itself is not visualized, but rather abstract depictions of textual similarity point to patterns worth discovering.
  • Meso Reading: Since multiple texts are displayed in synoptic views, the visualization is able to convey more complex patterns of textual relationship. We call this a meso reading: it might be said to connect multiple close readings while transmitting information that lies beyond the scope of any single close reading. Here, we use the intuitiveness of stream graphs to connect aligned verse lines among different versions. For a more detailed inspection of an individual alignment, clicking on a stream opens a popup window for line-level close reading.
  • Close Reading: Next to the plain text, the close reading view provides word-level alignments for the corresponding verse lines in the form of two Variant Graph visualizations. Within the close reading view, individual alignments can be confirmed by user input, so that they are persistently stored in the backend.

Target audience: Anyone studying variance in the textual digital humanities and its visualization will be interested in our tutorial. It will be offered in English, but can accommodate textual data in a variety of languages. Potential participants in the tutorial are encouraged to be in touch with the presenters in advance of DH2018 to provide some sample data that can be used to prepare a mashup. Required for this step are at least two documents sharing some text in common, each of at least 20 lines.


Schedule #itealDH


Part I (1 hour + break time)

iteal introduction: purpose, functionality, configuration, visualization (Stefan Jänicke)

– Medieval French poetry as an iteal use case (David J. Wrisley)

– Further use cases, future work, questions (Stefan Jänicke & David J. Wrisley)


Part II (2 hours – break time)

– Step-by-step hands-on session with texts brought by tutorial participants
– Wrap-up, feedback and steps forward




Stefan Jänicke

Dr. Stefan Jänicke is a post-doctoral researcher in the Image and Signal Processing Group at Leipzig University, Germany, where he leads a text visualization group focusing on applications in the digital humanities. In recent years, he has gained experience in developing information visualization and visual analytics techniques within a number of digital humanities projects. His PhD thesis investigated the utility of visualization techniques to support the comparative analysis of digital humanities data, and his current research relates to information visualization with a focus on applications for text and geovisualization in the digital humanities.

David Joseph Wrisley

Dr. David Joseph Wrisley is Associate Professor of Digital Humanities at New York University Abu Dhabi. His research interests include the creation of open, inclusive corpora in medieval studies, corpus-based geovisualization as well as visual exploration of variance in poetic traditions. Furthermore, he is interested in the challenges in humanities data stemming from both multilingual environments and social data creation.


Related References 

S. Jänicke, A. Geßner, M. Büchler and G. Scheuermann (2014). Visualizations for Text Re-use. In: Proceedings of the 5th International Conference on Information Visualization Theory and Applications (VISIGRAPP 2014), pp 59–70.

S. Jänicke, A. Geßner, M. Büchler and G. Scheuermann (2014). 5 Design Rules for Visualizing Text Variant Graphs. In: Conference Abstracts of the Digital Humanities 2014.

S. Jänicke, A. Geßner, G. Franzini, M. Terras, S. Mahony and G. Scheuermann (2015). TRAViz: A Visualization for Variant Graphs. In: Digital Scholarship in the Humanities 30, suppl 1, pp i83–i99.

S. Jänicke, G. Franzini, M. F. Cheema and G. Scheuermann (2015). On Close and Distant Reading in Digital Humanities: A Survey and Future Challenges. In: Eurographics Conference on Visualization (EuroVis) – STARs. The Eurographics Association.

S. Jänicke and D. J. Wrisley (2016). Visualizing Mouvance: Towards an Alignment of Medieval Vernacular Text Traditions. In: Conference Abstracts of the Digital Humanities 2016.

S. Jänicke and D. J. Wrisley (2017). Visualizing Mouvance: Towards a Visual Analysis of Variant Medieval Text Traditions. In: Digital Scholarship in the Humanities 32, suppl 2, pp ii106–ii123.

S. Jänicke and D. J. Wrisley (2017). Interactive Visual Alignment of Medieval Text Versions. In: IEEE Conference on Visual Analytics Science and Technology, IEEE VAST 2017.




#myDHis messy, or an Ode to Untidy Bricolage

DHSI 2017 Institute Panel, Perspectives on DH

David Joseph Wrisley 
New York University Abu Dhabi 


messy < mess (n):  Old French mes “portion of food, course at dinner”
early 15c “company of persons eating together”
1530s  “communal eating place” (military)
1738 sense of “mixed food,” especially for animals
1828 “jumble, mixed mass”
1834 “state of confusion”
1851 “condition of untidiness”
1903 “excrement of animals”


Example 1  Between languages: assessing translation variance

The Transmission of an Arabic wisdom text, the Mukhtar al-Hikam in medieval Europe (From Arabic to English, via Spanish, Latin and French) – alignment using LF Aligner

messy issue: few literary problems correspond to available data












Example 2  Multilingual realities: documenting and mapping multi-script polyglossia on the street

messy issue: reality is messy, social creation of data adds new untidy levels













Example 3  Orthographic variance

messy issue:  teaching a computer to recognize a pattern with a language where irregularity is the norm

sample medieval French word (“alms” in English): almosne, aumosne, aumone, haumone, asmone, esmone, aumorne

sample medieval French place (Almeria, Spain):

Aumarie Amarie
Almarie Aumarie
Almerie Ammarie, Aumarie
Amerie Aumerie
Almarie Armerie;Aumerie;Omarie;Aumarie
Almarie Aumarie
Aumarie Ammarie
Almaria Aumarie;Ommeria
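The scale of this irregularity can be made concrete with a plain edit-distance computation over the forms of “alms” listed above. The Levenshtein implementation, the normalization by word length, and the choice of “aumosne” as reference form are our own illustrative assumptions, not part of any project code:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

# The variant spellings of "alms" quoted above, compared to one reference form.
variants = ["almosne", "aumosne", "aumone", "haumone", "asmone", "esmone", "aumorne"]
for v in variants:
    d = levenshtein("aumosne", v) / max(len("aumosne"), len(v))
    print(f"{v:8s} normalized distance {d:.2f}")
```

The distances stay small relative to word length even though no single regular rule maps one form onto another, which is why threshold-based fuzzy matching, rather than rule-based normalization, is attractive for this kind of material.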

Example 4   Aligning orally-influenced texts inside the “same language” (with @vizcovery)

messy issue: pre-modern transmission of texts is messy, sometimes like re-mixing, add orthographic instability










Example 5  Expanding the language of DH to Arabic (with @najlajarkas1).  See post.

messy issue: most computational linguistics on Arabic text is not carried out in Arabic; finding a language for a nascent community to use

Twenty Things to Know about #DHIB2017


DHIB 2017 (@DHIBeirut) is a moment in my career that I will look back on fondly.

I have made a list of twenty things to know about the event that took place in Beirut 10-12 March 2017, ten that I think others will be interested in, and ten personal ones.

  1. It was not the first successful international digital humanities event that has taken place in the Arab region. It was the second.
  2. It represents the convergence of two different Andrew J. Mellon Foundation funded initiatives: the Center for Arts and Humanities at AUB and the AMICAL consortium.
  3. Countries represented at DHIB 2017 were Egypt, UAE, Lebanon, Ghana, Kyrgyzstan, Pakistan, France, Switzerland, Greece, Germany and Italy.  These are the countries of the institutions represented.  There were more nationalities present.
  4. It was the first time, to my knowledge, that instructors working in the Arab world (Balamand in North Lebanon, AUB in Beirut, AUC in Cairo) taught DH topics together in the same venue.
  5. The participants included local universities, research centers and institutes, as well as digital humanities specialists from international organizations: IFPO, OIB and DiXiT, international libraries (Halle) and DH groups (Bard).
  6. Instructors included librarians, full-time and part-time faculty, IT and an English major.
  7. Participants included librarians, full-time faculty, IT, graduate and undergraduates.
  8. The digital humanities conversation has piqued the interest of the Centers for Teaching and Learning in the region and beyond.
  9. The courses on offer represented a spectrum of topics important to our local “big tent”:  Drupal, mapping, 3d, sound, Arabic OCR, Sustainable Text Workflows, Omeka, game design, digital pedagogy, digital editing, etc.
  10. We were able to offer the Institute at no cost to the participants.


Ten reasons that I loved DHIB 2017:

  1. I witnessed my fellow faculty, instructional designers and students make DH their own.
  2. The mother of an undergraduate student of mine took my workshop to find out what he has been talking about all this time.
  3. My keynote was live streamed and notes for several courses are available online.
  4. I listened to our second keynote speaker Ghassan Mourad, author of the first book about DH in Arabic, speak in Arabic about named entity extraction in Arabic.
  5. We have the best (multi-script) logo of any of the DH events I have attended (designed by @kyraneth). Available here with a CC BY-NC-SA 4.0 International license.
  6. One of the participants in my mapping workshop grasped the experimental nature of the DH project very quickly. He went looking for data, dug into my professional website and made a map of my recent professional engagements.
  7. Both the Office of Information Technology and the Library at AUB were actively engaged in the Institute.
  8. The lightning talks were effervescent: bubbling over with practical ideas, obvious cross-institutional partnerships and feasible projects.
  9. Our closing session was held en plein air on the 2nd floor balcony of Fisk Hall, one of the heritage buildings on AUB’s green oval.  I was very pleased with the engagement of the participants.
  10. I learned so much from others.


General information on DHIB and DH at AUB : We began with informal events in 2011 that brought together the departments of English and Computer Science at AUB, with the support of some key people on campus who believed in the endeavor.  In 2015 we hosted the first DHIB (documents about that event are archived here). We became part of the Digital Humanities International Training Network in 2015. Other DH institutes have received participants from our institution: Oxford, Leipzig, Victoria.


Articles written about DHIB 2017:

AUB Faculty of Arts and Sciences
Digital Humanities Institute Beirut 2017 – a Review





Abstracts, American University of Paris, March 2017



David Joseph Wrisley @DJWrisley
American University of Paris
16-17 March 2017


Lecture: “Digital Project-Based Scholarship and Pedagogy in the Liberal Arts Institution”
Thursday, March 16, 2017, 1530-1700, Combes 102    Watch the lecture here.

My talk focuses on the genre of the digital project and its potential for scholarly and pedagogical reflection in the liberal arts institution.  From a general discussion of some exemplary projects carried out in small colleges by teams of faculty, students, librarians and technologists, in what might be called the humanities “laboratory” (Lane), I will chart how digital methods can evolve from course-embedded experiments to larger research projects.  I hope to show that such projects, in both process and product, embody the values of a liberal arts education in the 21st century: a well-rounded education, social and ethical awareness and creative, multidisciplinary synthesis.  I will discuss in detail two course-embedded digital projects that I carried out with my students in Beirut: Linguistic Landscapes of Beirut and Mapping Beirut Print Culture.  As we will see, projects, like the scholars and institutions that embark upon them, grow in stages of increasing digital scholarly complexity (ILiADS).  Finally, I will point to some attempts to build “communities of practice” among liberal arts colleges, and the establishment of lab-like commons and other institutional structures that serve as the loci for such project-based local knowledge production.


Hands-on session: “Toolkit or Toychest?: the Digital in the Classroom”
Friday, March 17, 2017, 11h00 – 14h00, Combes 104

This hands-on session will put into practice some of the ideas laid out in Thursday’s lecture.  It will look at some simple, off-the-shelf tools for digital tasks, and move on to more complex (or even combined) tasks that are useful for collecting, analyzing and disseminating research data. The session aims to make participants aware of some of the emergent categories of tools for research & pedagogy, as well as to discuss the degrees of openness that they embody.  The session argues for the productive tension between the functional (the tool) and the ludic (the toy), suggesting that the digital does not simplify or merely quantify, but rather opens the door to critical play and reflection with tools.  Participants will try out basic functionality of some of the following environments and will discuss together how they might be integrated into critical classroom praxis: Voyant, TypeWright, FromThePage, Prism, Google Fusion Tables, TopoText, Odyssey.js, Palladio, NodeGoat, Sketchup and JSTOR analyze.

Helpful, but not necessary, preparations for the workshop: Make accounts at Google (if you have a gmail it is enough) and Carto.  Download Sketchup and Zotero standalone (with its Chrome plugin).




Additional Reading:

Bilansky, “TypeWright: An Experiment in Participatory Curation”
Doueihi, Pour un humanisme numérique
Dumouchel, “Les Humanités Numériques: une nouvelle discipline universitaire?”
dwhly, “Annotation is Now a Web Standard”
El Khatib et al., “TopoText: Interactive Digital Mapping of Literary Text”
Ferrari Nieto, Resistencias con lo digital
Gefen et al., “Qu’est-ce que les humanités numériques?”
Jannidis et al., Digital Humanities: Eine Einführung
Lane, The Big Humanities
Liu, DH ToyChest
Mounier, ed., Read/Write Book 2: Une introduction aux humanités numériques
Nowviskie, “How to Play with Maps”
Numerico et al., L’umanista digitale (Eng. tr. The Digital Humanist: A Critical Inquiry)
Burdick et al., Digital_Humanities
Zotero: A Guide for Librarians, Researchers, and Educators
Rockwell and Sinclair, Hermeneutica: Computer-Assisted Interpretation in the Humanities
Sketchfab, “Around the World in 80 Models”
Svensson, Big Digital Humanities
Unsworth, “Scholarly Primitives”