DH Failures vs Findings

Recent discussions in digital humanities have drawn attention to “failure”. Projects can fail to deliver a tool or fail to innovate practices. But what practices are emphasised by speaking of “failure”, and for whom is a certain result a failure? In this post, I argue that recent discussions of failure seem to treat DH as software development rather than research, which shapes the discussion of what DH should achieve and whether other results thereby count as failures.

Image: “Failure here may mean death below. Safety first.” Poster by Allan Nase (1936). http://www.loc.gov/pictures/item/98518429/

According to Jasmine Kirby, we do not talk enough about failure in digital humanities. In her recent article, she explores the history of the Sophie 2.0 project from the Institute for the Future of the Book and discusses how this project ultimately failed.[1] She shows that while the project members point to “too much ambition and lack of funding” (despite >$1M in funding), more realistic reasons were the lack of a coherent idea of who the target user group was, of how to reach and serve users, and of basic skills in software development. I definitely recommend the article. However, the conclusions got me thinking, leading to this blog post, because in my view Kirby associates DH too much with software development and too little with research.

DH as software development

I was especially struck by the conclusion of the article, where Kirby argues that

Unfortunately, they did not create a product that worked in its present time. It is not the job of librarians and digital humanists to use software we hope will work because it aligns with values we find important, it is our job to recommend and contribute to digital tools that won’t eat our users’ homework.

Without debating the definition of what precisely DH is and what “our job” is, my problem with this quote is that it conflates research with software development. The increased attention to failure can similarly be traced to Silicon Valley with its mantra of “fail fast”, but failing and iterating is not the same as research.[2]

DH as research

This also reminded me of several case studies in my research that failed to develop the tool they set out to create. Yet consider the following quote from the PI of one such project:

Ultimately, a production version does not have to come out of [the project]; that is not the point. This is more a technology project in which know-how is developed, also by the companies that continue to work towards a production system, so that they can use parts of it in a new product.

History Professor (personal interview, translated from Dutch)

When a DH project as research does not produce a workable tool, is that a failure, or is it a finding that intellectually explores the limitations of the initial ideas, produces more knowledge about the research problem, and yields know-how for further investigations? “Failure” implies a mistake, something that could or should have been prevented, something that should have been otherwise. But if DH is research, or even experimentation, then the tool itself is not even that interesting, and the lack of a usable tool is in itself insufficient to constitute “failure”.

This is not to deny that DH projects can fail: research can be conducted in the wrong way, so there can be a failure of process. Especially with respect to basic software development strategies, many projects do fail, as Kirby shows, since these issues hardly produce new knowledge (although it is a caveat of interdisciplinary work that knowledge common to another area is new to one’s own). But if a project generates knowledge but not a tool, that in itself is not a failure; it is research producing findings.

What is failure?

I applaud the increasing attention towards learning from mistakes and unsuccessful pursuits. Quinn Dombrowski in particular has produced very interesting discussions in this area.[3]

However, fully understanding failure in digital humanities requires a discussion of the purpose of digital humanities. In this post I have contrasted DH as software development with DH as research, and it is probably a bit of both.[4] Yet this interdisciplinary mingling of practices especially necessitates reflection on what DH projects should produce. While scholars might promise a tool to a funder, this does not necessarily mean their primary objective is the development of a tool, as I have shown in my research. Whether a project has failed depends very much on who you ask and when. In conclusion, the central questions are: what counts as failure, and to whom? And by elevating failure as a means of learning, what practices are we emphasising within digital humanities?

References

1. Kirby, J. S. (2019). How NOT to create a digital media scholarship platform: the history of the Sophie 2.0 project. IASSIST Quarterly, 42(4). https://doi.org/10.29173/iq926
2. Hall, E. (2013). How the ‘Failure’ Culture is Killing Innovation. Wired. https://www.wired.com/2013/09/why-do-research-when-you-can-fail-fast-pivot-and-act-out-other-popular-startup-cliches/
3. Dombrowski, Q. (2014). What Ever Happened to Project Bamboo? Literary and Linguistic Computing. https://doi.org/10.1093/llc/fqu026
   Dombrowski, Q. (2019). Towards a Taxonomy of Failure. http://quinndombrowski.com/blog/2019/01/30/towards-taxonomy-failure
4. Galey, A., & Ruecker, S. (2010). How a prototype argues. Literary and Linguistic Computing, 25(4), 405–424. https://doi.org/10.1093/llc/fqq021

3 thoughts on “DH Failures vs Findings”

  1. Thanks for writing up this post, Max. You make an excellent point (and one I was thinking about when I read Jasmine Kirby’s thoughtful piece). It is a complicated question, in part because funders and researchers don’t always agree on whether a project is “research” or software development. In some cases, a project that might be a fruitful candidate for experimental research can only be funded if couched as “software development” (with an identified audience of prospective users). I wish this weren’t the case, but in reality, researchers sometimes have to twist their projects to meet funder expectations.

    As you know, we fund this kind of work at my agency. In our most recent grant guidelines, we’ve really been emphasizing the word “experimentation” as an example of a kind of project we would fund. My intention is to encourage projects that are truly experimenting with new research methods and techniques (and of course, documenting those experiments so that others can learn). At the same time, we can also fund projects that are truly “software development” where the intention is to build user-friendly tools that a wider audience can use. Such a project would need to provide user support, training, and have some kind of long-term sustainability plan. (That’s a tall order, I know.) PIs need to tell us (and our peer reviewers) which of these two camps they are in at the time of application.

    When the Mellon RIT program was funding Sophie, I’m not sure whether their perspective was that they were funding experimental research or software development. (Based on some conversations I had with the RIT team back in the day, I suspect it was the latter, but I can’t say for sure.) So I suspect that one’s notion of “failure” depends somewhat on the original intention of the project. In either case, though, I agree that research is a learning process and projects build on their predecessors. So writing about past projects, as Jasmine Kirby has done, is a good and useful thing for the rest of the field.

    1. Thanks for the elaborate comment Brett! Your comment shows that in analysing the success or failure of a project, the role of the funder deserves scrutiny. In my own research, I have spoken with several researchers who explained how they reshaped their original ideas to fit with existing funding schemes, and reshaped them further during the project as they experienced the feasibility of the promises made. It would be very interesting to consider that process of continuous reshaping from both the funders’ and the researchers’ perspectives in discussing whether a project is a “failure” or not, and how that affects future grants. I definitely agree with your conclusion that learning about past experiences is good for the field, and look forward to more of such discussions.
