Recently a student approached me with extensive feedback on last year's edition of my course Doing Digital History. The short summary was that he liked me as a teacher and liked the structure of the course, but that he disagreed with its learning objectives. We eventually had a discussion about what teaching digital humanities (DH) should be about, and the upsides and downsides of different approaches. In the end, his disagreements came down to three assumptions that lie at the core of my course. As many universities are developing courses in digital history or digital humanities, I thought it would be interesting to lay out my assumptions and his objections as a student. If you have any feedback on my assumptions, please put it in the comments!
This year marks the fourth annual DHBenelux conference, which cycles through the Netherlands, Belgium, and Luxembourg. This fourth instalment will be held in Utrecht (the Netherlands); last week the review process was completed and authors were notified of the decisions. This gives me the opportunity to write up an analysis of the submissions to DHBenelux 2017. For previous years, see my blogposts on 2016 and on the period 2014-2016. Below I will look at the submissions, authors, and keywords.
Next week I will be visiting Rome to join the Associazione per l'Informatica Umanistica e le Culture Digitali (AIUCD) conference, which will be held from 26-28 January at Sapienza University. See the entire programme here. The topic of the conference is "Il telescopio inverso: big data e distant reading nelle discipline umanistiche" ("The inverted telescope: big data and distant reading in the humanities"), and in response Mark Hill and I have formed a panel on big data, distant reading, concept drift, and digital history. In this blogpost I'll post the abstract of the panel and my own abstract; once the full proceedings, including the abstracts of the other panel members, are online, I will add them to the presentations page. We are excited to have brought together scholars working on concept detection, ambiguity, and the methodology of history, so we hope to get a lively discussion going.
Today I received an email from the university library: as of today, we have to register with the library before we can download academic literature. The reason is that Consortium Luxembourg wants to track usage statistics to determine the financial contribution of each Consortium member. The university librarians offered two solutions: either use the university search engine, or manually change the URL to include the proxy information. Neither solution is particularly user friendly, but as luck would have it, the latter makes it possible to create a bookmarklet that gives you one-click access.
Using the bookmarklet
- Drag the text "A-Z Access" below to your bookmarks bar.
- Look up a paper that isn’t open access (even though it should be), such as this one of mine: http://dx.doi.org/10.1007/978-3-642-40501-3_46
- Click the A-Z Access bookmarklet in your bookmarks
- Log in to your A-Z.lu online account (once you're logged in, this step will be skipped automatically)
- You will be taken to the page where you can download the paper (if A-Z has access to it of course)
The bookmarklet is very simple: it takes the hostname of the current window and appends the required proxy URL segment. Many thanks to Redditor Untgradd, who suggested adding the proxy after the TLD (the .com bit) rather than at the end of the entire URL.
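As a sketch, the core logic looks like this. Note that the proxy suffix below is my assumption for illustration; use the exact proxy domain from your library's instructions:

```javascript
// Hypothetical proxy suffix — replace with the one your library prescribes.
const PROXY_SUFFIX = ".proxy.bnl.lu";

// Insert the proxy suffix directly after the hostname (after the TLD),
// rather than appending it to the end of the full URL.
function addProxy(url) {
  const u = new URL(url);
  u.hostname = u.hostname + PROXY_SUFFIX;
  return u.toString();
}

// As a one-line bookmarklet, the same idea becomes (assigning to
// location.hostname makes the browser navigate to the proxied page):
// javascript:(function(){location.hostname += '.proxy.bnl.lu';})();
```

For example, `addProxy("http://dx.doi.org/10.1007/978-3-642-40501-3_46")` yields `http://dx.doi.org.proxy.bnl.lu/10.1007/978-3-642-40501-3_46`, which routes the request through the library proxy.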
In his recently published book The Big Humanities: Digital Humanities/Digital Laboratories (2017, Routledge), Richard Lane promises to discuss the digital humanities (dh) by looking at three things specifically: 1) an analysis of dh collaborations as labs, 2) arguing for a hacker culture in dh, and 3) discussing the transformed practices of literary studies specifically. Especially the first point made me curious to read the book, as it is closely related to my own PhD research in which dh labs are one type of collaboration I am looking into. However, the book provides little news for either of the three topics. In this blog post I will look a bit at what Lane promises to do and what he ends up doing.
For my PhD research I will be using Galison's concept of the "trading zone" to describe digital history projects where historians collaborate with people from other backgrounds. In his book Image & Logic, Galison developed this concept to describe the development of the field of physics from the 1880s to the 1970s, when physicists of the "image" tradition (taking photos to discover new particles) and physicists of the "logic" tradition (using statistics to discover new particles) ended up working together. What is of interest to me, besides his development of the "trading zone" concept, is that the automation of work plays a key role in this development: from the 1940s onwards the computer starts playing a prominent role, shaping the field of physics. What becomes apparent from reading this book is that the integration of the computer in physics was by no means a natural inclusion, but a process of debate and negotiation about what it meant to "do" physics and what kind of knowledge can be acquired using computers. In this blogpost I'll briefly touch upon this debate[1], as described in Galison's work, and consider parallels with the debates in digital humanities (dh). Assuming dh describes a transition to include computers in humanities work[2], maybe we can describe this transition of physics as "digital physics"[3].
References

1. Since Image & Logic is an 850-page book, I can in no way summarise it satisfactorily in a blogpost, but I will do my best.
2. Zaagsma, G. (2013). On Digital History. BMGN – Low Countries Historical Review, 128(4), 3. http://doi.org/10.18352/bmgn-lchr.9344
3. Not to be confused with the field of physics that describes the universe in terms of information: https://en.wikipedia.org/wiki/Digital_physics
In a previous blogpost, I introduced the project A Republic of Emails, where we created a dataset of the 30k Hillary Clinton Emails by scraping Wikileaks. Now that we have the data, we can start exploring with what I like to call the W-questions: What is the collection about? Where do described events take place? When did these events occur? Who are the actors involved? In this second blogpost, we will look at what the emails from the Hillary Clinton corpus are about. I will describe how we prepared the data to analyse a) the raw text, b) normalised text, and c) entities in the text (named entity recognition). Finally, we will look at a small subset of the emails using Voyant Tools. For all the steps I will point to the respective scripts on our GitHub so you can reproduce the project.
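To give a flavour of step (b), normalisation can be as simple as lowercasing, stripping punctuation, and counting word frequencies. The sketch below is my own minimal illustration, not the project's actual code; the real preprocessing scripts are on our GitHub:

```javascript
// Minimal normalisation sketch: lowercase, strip punctuation, tokenise.
// (Assumption for illustration — the project's actual scripts handle far more.)
function normalise(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, " ") // replace punctuation with spaces
    .split(/\s+/)
    .filter(Boolean);             // drop empty tokens
}

// Word frequencies: the "what is the collection about" question
// in its most basic form.
function wordCounts(tokens) {
  const counts = new Map();
  for (const t of tokens) counts.set(t, (counts.get(t) || 0) + 1);
  return counts;
}
```

Running `normalise("Hello, Secretary Clinton! Hello.")` gives `["hello", "secretary", "clinton", "hello"]`, and `wordCounts` then tallies `hello` twice; applied over 30k emails, such counts are the raw material for tools like Voyant.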
This year I will teach, for the second time, the Doing Digital History course for the History master at the University of Luxembourg. Just like last year, students will ask several W-questions. What is the collection about? Where do described events take place? When did these events occur? Who are the actors involved? In contrast with last year, when students worked with a different collection each week, this year students will experiment with a single collection throughout the course. In a series of blogposts I will describe the collection that the students will be exploring and the methods and tools that will be used for close and distant reading. If you have feedback to further improve our ideas, please comment. If you wish to reproduce the project for your own courses, the blogposts should allow just that. As a reference to the historical Republic of Letters, I like to call this project A Republic of Emails.
For the past six months I have been on parental leave to enjoy our son Felix (born 13 December 2015), and today I am finally back at the university. In these months I have watched a baby grow from being able to do nothing except reflexes, to understanding the objects around him, interacting with them, and manipulating them to do what he wants (although not always successfully yet). Watching him go through these stages of learning reminded me of the above gif, captioned as how software developers see end users. When I first saw that gif it gave me a laugh, but then I noticed that my son had taken my bottle of water, and what he was doing was quite similar: licking the bottom, sucking on the side, holding it with his feet.
At some point he figured out what the top part was and put it in his mouth, which left me wondering how he had figured that out. I had left the cap on, so it was not simple trial and reward, since he still could not drink the water. Instead, I think there are two aspects to this learning process: visual feedback (seeing which side is supposed to be up) and learning by playing.
This week I'm at DHBenelux 2016, right here at the University of Luxembourg. I am part of the local organisation of the conference and will give a tour of the DH Lab, which launched its website www.dhlab.lu this week. Moreover, I will present my PhD research in a short paper; the abstract of my presentation is below. To learn more about DHBenelux, see my previous posts on DHBenelux 2016 submissions and DHBenelux submissions 2014-2016.