With all kinds of digital technologies becoming available, the uptake of digital research methods by the humanities might have been inevitable. How the humanities can incorporate digital tools, and how they can contribute to the development of technology aimed at the humanities, were central questions at the DHBenelux conference (12–13 June 2014, The Hague, the Netherlands). Around 180 attendees met to discuss research projects presented in 50 presentations, 16 posters and 10 demos.
This blog post is not intended to provide a complete overview of the conference, but rather to present the discussion from my perspective. The main theme I will follow is that of the humanities grasping technology, in three senses: 1) taking technology, 2) embracing technology and 3) coming to an understanding of technology.
The conference started with a keynote by Melissa Terras, director of the UCL Centre for Digital Humanities, who focused on what sparked her and other scholars to enter the Digital Humanities. Aside from being perhaps a bit apologetic in its DH-is-a-real-thing undertone, what was interesting was how Terras described DH as inevitable. With the growth in computing power, the drop in computer prices, and the growing connectedness as people are increasingly online, the uptake of computers was not a matter of “if”, but of “when”. The result of this adoption lies not in changed methods and questions, as the methods of DH fit in a long history of humanities scholarship since the 16th century, but in the scale at which the humanities can investigate research questions.
What also sparked my interest is the diversity of Terras’ research projects. In each of these projects, she grasped available technology to see how it could stimulate the humanities or cultural heritage, experimenting with crowdsourcing, tablets and image processing. In this sense, I think an interesting attitude can be distinguished in DH: seeing what technology is out there, and grasping it to see what can be done with it. Such experiments lead to further insights into what is possible, and spark new research questions. However, to fully implement technology in the humanities’ practices, a deeper integration is needed.
When confronted with the limitations of available information sources, technology provides new means of overcoming them. DH starts at the data to be investigated: Mike Kestemont explained that without proper text extraction, there is no search, no stylometry, no other NLP processing of (historical) texts (Kestemont et al., 2014). Yet text extraction of historical texts can be difficult; since spelling keeps changing, it is hard to train computers to recognize words (Hendrickx & Reynaert, 2014), a problem Kestemont tries to solve in his research.
A limitation of humanities data is that it is often siloed in separate collections. The best approach to this limitation is what I also described in a previous blog post: connecting collections as linked data. In the panel session, Saskia Scheltjens claimed that libraries are not about collections, but about linking things (hear, hear). There seems to be an increasing awareness of linked data in the DH community, as there were several applications of it: a catalogue of Dutch books was linked with a catalogue of forbidden books (Beek et al., 2014), and another project connected all kinds of databases to investigate how certain plants spread as medicines across the Western world (Klein et al., 2014). In our own presentation, linked data is used to facilitate large-scale and longitudinal analyses of political debates (Kemman & Van Aggelen, 2014). In our Talk of Europe project, the proceedings of the European Parliament are published as RDF; scholars can extend these proceedings with additional information and build tools for analysis. Since every scholar has a different research question requiring different tools and linked collections, no portal is developed. Instead, the data is provided so that scholars can create tools themselves. Encountering the limitations of what is available, technology is grasped to empower scholars. However, one question we received was whether humanities scholars are actually able to interact with linked data through SPARQL endpoints. Do scholars understand how to work with this technology?
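To give a flavour of what such querying involves: linked data boils down to (subject, predicate, object) triples, and a SPARQL endpoint matches graph patterns against them. Below is a minimal, purely illustrative Python sketch of this pattern matching; the predicate names (`toe:spokeIn`, `toe:memberOf`) and the data are invented for illustration and do not reflect the actual Talk of Europe vocabulary.

```python
# Toy in-memory triple store: linked data as (subject, predicate, object).
# All identifiers below are invented for illustration.
triples = {
    ("ep:debate42", "toe:spokeIn", "ep:speaker_A"),
    ("ep:debate42", "toe:spokeIn", "ep:speaker_B"),
    ("ep:speaker_A", "toe:memberOf", "party:Green"),
    ("ep:speaker_B", "toe:memberOf", "party:EPP"),
}

def match(pattern):
    """Match one triple pattern against the store.

    Strings starting with '?' are variables to bind; anything else must
    match exactly. Returns a list of variable bindings, one per match.
    """
    results = []
    for triple in triples:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# "Who spoke in debate 42?" -- analogous to the SPARQL query
#   SELECT ?speaker WHERE { ep:debate42 toe:spokeIn ?speaker }
speakers = {b["?speaker"] for b in match(("ep:debate42", "toe:spokeIn", "?speaker"))}
print(sorted(speakers))  # -> ['ep:speaker_A', 'ep:speaker_B']
```

A real SPARQL engine does the same kind of pattern matching, but over millions of triples and with joins across several patterns, which is what makes questions spanning multiple linked collections possible.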
While Bas de Haas introduced the term ‘simplexity’ to describe a tool for the general public with a very simple interface on top of a very complex system (De Haas et al., 2014), the tools aimed at scholars focused more on transparency. In the panel session, Lars Wieneke voiced the concern that digital tools bring not merely positivism to the humanities, but obscurism, when the tools give no insight into how information is processed. Several approaches to address this obscurism were presented. During the panel, Stef Scagliola advocated the incorporation of digital methods in university curricula, also for the humanities. In his presentation, Marijn Koolen presented the project Coding the Humanities (Hoogstad & Koolen, 2014), in which humanities students and scholars are taught to code. Building the tools needed to perform the research should be part of the research; the tools should be part of the argument. Moreover, learning to code need not be difficult: it is just another language, like the Latin or Greek that humanities scholars usually do not worry about as much. However, during the panel, I brought in Alexis Lloyd’s argument (Lloyd, 2014):
[W]hen I have a conversation with a friend, I don’t know his whole psychological history or every factor that goes into his responses, let alone what’s happening at a neurological or chemical level, but I understand something about who he is and how he operates. I have enough signals to participate and give feedback — and more importantly, I trust that he will share information that is necessary and relevant to our conversation.
I agree that in order to allow digital tools to facilitate research, scholars need to grasp how they work. However, this does not necessarily mean scholars need to be able to code and engineer the tools themselves. They do need to have an understanding of the tools’ assumptions and limitations, and have an idea of how data is processed into results.
Beek, W., Hoekstra, R., Maas, F., Meroño-Peñuela, A., & Leemans, I. (2014). Linking the STCN and Performing Big Data Queries in the Humanities. Presentation at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
De Haas, B., Pedro Magalhães, F., ten Heggeler, D., Bekenkamp, G., & Ruizendaal, T. (2014). Chordify: Chord transcription for the masses. Presentation at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
Hendrickx, I., & Reynaert, M. (2014). Language adaptability and performance evaluation of historical text normalization tools VARD2 and TICCL. Presentation at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
Hoogstad, J.H., & Koolen, M. (2014). Coding the Humanities. Presentation at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
Kestemont, M., de Pauw, G., van Nie, R., & Daelemans, W. (2014). Towards a General Purpose Tagger-Lemmatizer for Pre-Modern Dutch. Presentation at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
Klein, W., van den Hooff, P., Wiering, F., & Pieters, T. (2014). TIME CAPSULE: making digital cultural heritage data accessible and applicable for humanities research. Poster at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
Lloyd, A. (2014). In the Loop: Designing Conversations With Algorithms. Superflux blog. Retrieved June 16, 2014.
Van Aggelen, A., & Kemman, M. (2014). Talk of Europe – Linking European Parliament Proceedings. Presentation at the DHBenelux conference, 12–13 June, The Hague, The Netherlands.
Slides @ SlideShare | Poster @ figshare