Next week I will be in Montreal for the ADHO DH conference, where I will present a poster with some results from my PhD research. Below you can find the abstract, and below that the poster itself, designed by my wife Lindi. For those not able to come, follow the Twitter hashtag #dh2017, and if you’re able to come I hope to see you somewhere during a coffee break or at my poster presentation!
Next week I will be in Utrecht for the fourth DHBenelux conference. This year the conference will include pre-conference workshops, and I signed up for the workshop on tool criticism, a follow-up to the excellent workshop that was held in 2015 (see PDF report here). At the conference I will present a paper showcasing some results of my PhD research into digital history collaborations. Below you can find the abstract of the paper. For those not able to come, follow the hashtag #dhbenelux. And if you are able to come, see you next week!
Like all great debates in DH, the return of the “what is DH” debate started off with a tweet:
It’s 2017 and still nobody knows what Digital Humanities is.
— Ian Bogost (@ibogost) June 21, 2017
This is a recurrent question, and one might ask whether in 2017 it is still a fair one. Indeed, debating definitions of DH is not so popular anymore. As I wrote in my previous blogpost, I agree the definition is not always important; I don't think it is a pressing question when educating students about DH. On the other hand, one might ask whether this isn't just evasive: we can't define DH, so we deny the importance of a definition. In this blogpost I will not provide a definitive answer to what DH is, but I will argue that it remains an important question for two reasons, one practical and one epistemological.
Recently a student approached me with extensive feedback on my course Doing Digital History of last year. The short summary was that he liked me as a teacher and he liked the structure of the course, but that he disagreed with its learning objectives. We eventually had a discussion about what teaching digital humanities (DH) should be about, and the up- and downsides of different approaches. In the end, his disagreements came down to three assumptions that lie at the core of my course. As many universities are developing courses in digital history or digital humanities, I thought it would be interesting to lay out my assumptions and his objections as a student. If you have any feedback on my assumptions, please put them in the comments!
This year marks the fourth annual DHBenelux conference, which cycles through the Netherlands, Belgium, and Luxembourg. This fourth instalment will be held in Utrecht (the Netherlands), and last week the review process was finished and authors were contacted about the decisions. This provides me the opportunity to write down an analysis of submissions to DHBenelux 2017. For previous years, see blogposts related to 2016 and the period 2014-2016. Below I will look at the submissions, authors, and keywords.
Next week I will be visiting Rome to join the Associazione per l’Informatica Umanistica e le Culture Digitali (AIUCD) conference, which will be held from 26-28 January at Sapienza University. See the entire programme here. The topic of the conference is “Il telescopio inverso: big data e distant reading nelle discipline umanistiche” (“The inverted telescope: big data and distant reading in the humanities”), and to fit this topic Mark Hill and I have formed a panel on big data, distant reading, concept drift, and digital history. In this blogpost I’ll post the abstract of the panel and my own abstract; if the full proceedings, including the abstracts of the other panel members, are put online, I’ll add a link to the presentations page. We are excited to have brought together scholars working on concept detection, ambiguity, and the methodology of history, so we hope we will get a very nice discussion going.
Today I received an email from the university library saying that, as of today, we have to register with the library before we can download academic literature. The reason is that the Consortium Luxembourg wants to track usage statistics to determine the financial contributions from each Consortium member. The university librarians offered two solutions: either use the university search engine, or manually change the URL to include the proxy information. Neither solution is particularly user friendly, but as luck would have it, the latter makes it possible to create a bookmarklet that gives you one-click access.
Using the bookmarklet
- Drag the “A-Z Access” link below to your bookmarks bar.
- Look up a paper that isn’t open access (even though it should be), such as this one of mine: http://dx.doi.org/10.1007/978-3-642-40501-3_46
- Click the A-Z Access bookmarklet in your bookmarks
- Log in to your A-Z.lu online account (once you’re logged in, this step will be skipped automatically)
- You will be taken to the page where you can download the paper (if A-Z has access to it of course)
The bookmarklet is very simple: it looks at the hostname of the current window and inserts the required proxy URL fragment. Many thanks to Redditor Untgradd, who suggested adding the proxy after the TLD (the .com bit) rather than at the end of the entire URL.
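The idea can be sketched in a few lines of JavaScript. Note that the proxy suffix below is a placeholder I made up for illustration, not the actual Consortium Luxembourg proxy domain; substitute the suffix from your library's instructions.

```javascript
// Placeholder proxy suffix -- replace with your library's actual proxy domain.
const PROXY_SUFFIX = ".proxy.example.lu";

// Insert the proxy suffix directly after the hostname (the .com bit),
// leaving the path and query string of the URL untouched.
function addProxy(url) {
  const u = new URL(url);
  u.hostname = u.hostname + PROXY_SUFFIX;
  return u.toString();
}

// Saved as a bookmark, the same idea becomes the one-click bookmarklet:
// javascript:window.location.hostname+=".proxy.example.lu";
```

Running `addProxy("https://link.springer.com/article/1")` rewrites the hostname to `link.springer.com.proxy.example.lu` while keeping the rest of the URL intact, which is exactly the "proxy after the TLD" trick described above.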
In his recently published book The Big Humanities: Digital Humanities/Digital Laboratories (2017, Routledge), Richard Lane promises to discuss the digital humanities (dh) by looking at three things specifically: 1) an analysis of dh collaborations as labs, 2) an argument for a hacker culture in dh, and 3) a discussion of the transformed practices of literary studies in particular. The first point especially made me curious to read the book, as it is closely related to my own PhD research, in which dh labs are one type of collaboration I am looking into. However, the book offers little that is new on any of the three topics. In this blog post I will look at what Lane promises to do and what he ends up doing.
For my PhD research I will be using Galison’s concept of the “trading zone” to describe digital history projects in which historians collaborate with people from other backgrounds. Galison developed this concept in his book Image & Logic to describe the development of the field of physics from the 1880s to the 1970s, when physicists of the “image” tradition (taking photos to discover new elements) and physicists of the “logic” tradition (using statistics to discover new elements) ended up working together. What interests me, besides the development of the “trading zone” concept, is that the automation of work plays a key role in this history: from the 1940s on, the computer starts playing a prominent role, shaping the field of physics. What becomes apparent from reading the book is that the integration of the computer into physics was by no means a natural inclusion, but a process of debate and negotiation over what it meant to “do” physics and what kind of knowledge can be acquired using computers. In this blogpost I’ll briefly touch upon this debate[1], as described in Galison’s work, and consider parallels with the debates in digital humanities (dh). Assuming dh describes a transition to include computers in humanities work[2], maybe we can describe this transition of physics as “digital physics”[3].
References
1. Since Image & Logic is an 850-page book, I can in no way summarise it satisfactorily in a blogpost, but I will do my best.
2. Zaagsma, G. (2013). On Digital History. BMGN – Low Countries Historical Review, 128(4), 3. http://doi.org/10.18352/bmgn-lchr.9344
3. Not to be confused with the field of physics that describes the universe in terms of information: https://en.wikipedia.org/wiki/Digital_physics
In a previous blogpost, I introduced the project A Republic of Emails, where we created a dataset of the 30k Hillary Clinton emails by scraping WikiLeaks. Now that we have the data, we can start exploring it using what I like to call the W-questions: What is the collection about? Where do described events take place? When did these events occur? Who are the actors involved? In this second blogpost, we will look at what the emails from the Hillary Clinton corpus are about. I will describe how we prepared the data to analyse a) the raw text, b) the normalised text, and c) entities in the text (named entity recognition). Finally, we will look at a small subset of the emails using Voyant Tools. For all the steps I will point to the respective scripts on our GitHub so you can reproduce the project.