
Workshop “Digital Humanities and Machine Learning”

Last week, from the 19th to the 23rd of July 2021, we hosted a Digital Humanities and Machine Learning workshop at TU Berlin. Yes, it was AT the uni, in-person. It was originally planned to take place last year in April but we had to postpone it due to COVID. 

The workshop was the last one in a series of workshops organized in the MALT3 project (BMBF). Since the project had ended before the workshop could take place, the Berlin Institute for the Foundations of Learning and Data (BIFOLD), which I had also joined in the meantime, generously offered to host the workshop.

Many of the workshops organized in this context adopted the format “Machine Learning applied to $something”, where $something is a specific field of science. For example, one workshop was dedicated to Machine Learning for Quantum Chemistry, another to Machine Learning for Medical Imaging. The main challenge was always to find common ground for participants from different fields (computer science students, researchers from those fields such as physicists and radiologists, and even participants from industry). For the final workshop, I suggested that we challenge ourselves even more by organising a workshop on ML in the Humanities. Physicists and medical scientists are usually much more comfortable using statistical methods than humanists are. Since we invited students from the humanities, students from computer science and researchers in the field of Digital Humanities, we expected that finding common ground would be even harder than in the other workshops.

We accepted eleven participants to the workshop and were very lucky that six of them had at least some background in programming or machine learning, meaning that neither the ‘humanities’ group nor the ‘CS’ group was too small a minority.

Our main goals were (1) that participants from each group would learn something specific to the other group and (2) that participants gain a general understanding of how interdisciplinary work can be fruitful in a DH project.

On the first day, the bridge between DH and ML was built by an “Intro to DH” talk by Anne Baillot and an “Intro to ML” talk by Klaus-Robert Müller. Both talks were synchronized to discuss the intersection of DH and ML but from each speaker’s individual perspective.

Klaus-Robert Müller strongly encouraged students to have the confidence to ask basic questions when working with scholars from other disciplines. Scholars don’t always know which concepts one is not familiar with; in particular, they can’t be expected to realise that one doesn’t know the basics they themselves learned as undergrads. Also, terms often have different meanings in different disciplines, which was also a central point in the talk by Anne Baillot: she stressed that even within the DH community not everybody seems to have the same definition of what a model is, and obviously this term has yet another meaning within machine learning. Anne Baillot also pointed out that the understanding of what ‘a large data set’ is differs a lot between ML and DH.

On the second day, there was an introductory session on the mathematical concepts of machine learning by Thomas Schnake. This is typically the kind of session where participants get intimidated or zone out, but the slow and careful way in which Thomas presented and interacted with the participants resulted in nothing of the sort. I was simply blown away by the number of questions, the discussion and all sorts of interaction coming from the audience, and not only from those with a background in CS. By that point at the latest, I realised that we especially benefited from the participants being so happy to be at an in-person event again. It really seemed like we had all missed such an in situ experience.

Apart from the introductory sessions, the main focus of the workshop was on projects. Some invited speakers presented their DH projects, and the participants worked on DH group projects during the week. Usually, the days were structured in such a way that the invited talks took place in the morning and the groups worked on their projects in the afternoon.

The projects presented by the invited speakers ranged from very large and advanced projects, such as the heraldry project presented by Torsten Hiltmann or the project on image similarity of woodblock illustrations in early prints by Matteo Valleriani, Jochen Büttner and Oliver Eberle (the Sphaera project), to early-stage projects, such as Named Entity Recognition on historical Persian astronomy books by Zahra Salmani, Razieh-Sadat Mousavi and me (a project that had started only about two months earlier), or the project by Hassan el-Hajj on Node Embeddings in Early Modern Edition Networks. By also presenting projects at an early stage, we wanted to, again, reduce intimidation and showcase what the speakers’ projects and the projects the participants worked on during the workshop have in common.

So what were the participants’ projects, after all?

The idea of the projects was that students from both groups could work together and help each other with their respective domain knowledge. They were also encouraged to think about all steps in the life cycle of a DH project: from corpus construction, digitization and annotation, to model training, interpretation and explorative application, and finally to different publication strategies, such as blog posts, web applications, Jupyter notebooks or presentations.

Stylometry

The stylometry project was mainly based on the data that is also used in the authorship attribution tutorial on the Programming Historian. However, I decided to provide a slightly different skeleton for the code, one with an interface similar to that of sklearn’s estimators:

class BurrowsDelta:

    def __init__(self, num_words=500):
        # num_words: how many of the most frequent words to use as features
        pass

    def fit(self, X, y):
        # X: list of training documents, y: list of author labels
        pass

    def predict(self, X):
        # return the predicted author for each document in X
        pass

The group experimented with different ways of chunking the documents and different vectorizing approaches for representing strings as input for Burrows’ Delta. It was eye-opening again for me to see that students with a fresh view on the projects would come up with very different solutions. When we discussed string representations for Burrows’ Delta or other NLP models, what I had in mind were things like Bag-of-Words or BPE, but one student suggested creating images of the text, as if “printing” it, and using the flattened, grayscaled pixel-space representation as the vector representation.
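For readers who would like a more concrete picture of what could go behind this interface, here is a minimal sketch of Burrows’ Delta in this style. It is my own simplified illustration rather than any of the group solutions: whitespace tokenisation, the exact z-scoring details and the author-profile averaging are all simplifying choices.

import numpy as np
from collections import Counter

class BurrowsDelta:

    def __init__(self, num_words=500):
        self.num_words = num_words

    def _rel_freqs(self, doc):
        # relative frequencies of the reference words in one document
        tokens = doc.lower().split()
        counts = Counter(tokens)
        return np.array([counts[w] / max(len(tokens), 1) for w in self.vocab_])

    def fit(self, X, y):
        # pick the num_words most frequent words over the whole corpus ...
        corpus_counts = Counter(w for doc in X for w in doc.lower().split())
        self.vocab_ = [w for w, _ in corpus_counts.most_common(self.num_words)]
        freqs = np.array([self._rel_freqs(doc) for doc in X])
        # ... and z-score their relative frequencies over the training documents
        self.mean_, self.std_ = freqs.mean(axis=0), freqs.std(axis=0) + 1e-9
        z_scores = (freqs - self.mean_) / self.std_
        # one author profile = mean z-score vector of that author's documents
        self.authors_ = sorted(set(y))
        y = np.array(y)
        self.profiles_ = np.array([z_scores[y == a].mean(axis=0) for a in self.authors_])
        return self

    def predict(self, X):
        # Delta = mean absolute difference between a document's z-scores and
        # each author profile; the closest author wins
        predictions = []
        for doc in X:
            z = (self._rel_freqs(doc) - self.mean_) / self.std_
            deltas = np.abs(self.profiles_ - z).mean(axis=1)
            predictions.append(self.authors_[int(np.argmin(deltas))])
        return predictions

With this, the class is used like any sklearn estimator: instantiate it, call fit on the training texts and author labels, and call predict on unseen texts.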

Image similarity

 

Figure 1: Creating a fingerprint of an image with a simple hashing function

In the image similarity project, the participants worked on a smaller subset of illustrations from the Sphaera project to explore differences in similarity measures. They compared two traditional computer vision methods (image hashing and keypoint extraction) with a machine learning approach. The latter was based on clustering hidden-layer representations of the input images: each image was passed forward through a pre-trained VGG16 network up to a certain layer, the representation of that layer was flattened and used as a vectorized representation of the input image, and all images were then clustered with the k-means algorithm. Two of the three participants in this group were students with a background in neuroscience and they were particularly interested in comparing image processing in the network with image processing in human brains. They experimented a lot with choosing different hidden layers and comparing the results, with the hypothesis that earlier layers would show stronger similarities of keypoints or local features whereas later layers would detect similarities in broader and more abstract ways.
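To give an impression of what this looks like in code, here is a minimal sketch using torchvision and scikit-learn. It is an illustration rather than the group’s notebook: the cut-off layer, the number of clusters and the variable illustrations (assumed to be a list of PIL images loaded elsewhere) are all placeholder choices.

import numpy as np
import torch
from torchvision import models, transforms
from sklearn.cluster import KMeans

def average_hash(img, size=8):
    # traditional baseline (cf. Figure 1): one bit per pixel of the downscaled
    # grayscale image, set if that pixel is brighter than the mean
    small = np.asarray(img.convert("L").resize((size, size)), dtype=float)
    return (small > small.mean()).flatten()

# the machine learning approach: keep only the convolutional part of a
# pre-trained VGG16 up to some layer (the cut-off index is an arbitrary choice;
# newer torchvision versions take a weights= argument instead of pretrained=True)
vgg = models.vgg16(pretrained=True).features[:17].eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(images):
    # flattened hidden-layer representation, one vector per image
    with torch.no_grad():
        batch = torch.stack([preprocess(img) for img in images])
        features = vgg(batch)
    return features.flatten(start_dim=1).numpy()

# illustrations: a list of PIL images loaded elsewhere (hypothetical variable);
# visually similar illustrations should end up in the same cluster
cluster_ids = KMeans(n_clusters=10).fit_predict(embed(illustrations))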

DH abstracts

 

Figure 2: Relative frequency of most common keywords as a multivariate time series.

The DH abstracts group worked on the recently released Index of Digital Humanities Conferences, which consists of metadata of the works (papers, posters, panels, etc.) presented at various DH conferences since the 1960s (such as the title, the authors, the year and the venue, but also keywords and topics of the work). Initially, I suggested mainly looking at the counts of keywords or topics of the works per year as a multivariate time series and using them to predict what might be trending in the upcoming years. The students, however, worked with Anne Baillot on the idea of predicting the topics of a work given its title. The main challenges here were that some labels (such as English or Visualization) occur much more often than others and that each work might have multiple topics assigned.
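A rough sketch of how such a multi-label setup could be wired together with scikit-learn might look as follows. Again, this is an illustration rather than the group’s actual code; titles and topics are hypothetical variables holding the works’ titles and their topic lists from the Index.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# topics is a list of topic lists, e.g. ["English", "Visualization"] per work
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(topics)

# one binary classifier per topic; class_weight="balanced" is one simple way
# to counter the fact that some labels occur much more often than others
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    OneVsRestClassifier(LogisticRegression(class_weight="balanced", max_iter=1000)),
)
clf.fit(titles, Y)

# predict the set of topics for a new (made-up) title
predicted_topics = mlb.inverse_transform(clf.predict(["Some new work title"]))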

One other playful idea arose during the discussion: generating the title of a work, given a set of keywords. We are currently experimenting with whether we would like to use something similar for the DH 2023 conference here.
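Just to illustrate the idea, a toy sketch with an off-the-shelf language model could look like this; without fine-tuning on the Index data, a generic model such as gpt2 will of course not produce convincing titles, so treat the keywords and the prompt format purely as placeholders.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
keywords = ["stylometry", "correspondence", "19th century"]
prompt = "Keywords: " + ", ".join(keywords) + "\nTitle:"
# sample a continuation of the prompt as a (very rough) title suggestion
print(generator(prompt, max_new_tokens=20, num_return_sequences=1)[0]["generated_text"])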

Finally, when we discussed the results of the group on the last day of the workshop, I asked whether the project had helped the team to better understand what DH is, after spending a week looking at so many different DH projects from different areas, and whether there might have been situations where they looked at a work title and realised “oh wow, that would also be a DH project?”. The team answered that no, they didn’t have such moments of enlightenment when looking at work titles, but they noted that they often had to look up the meaning of certain words in the titles. Those would either be very specific terms from the humanities or very specific technical terms. Therefore, the next step they would have taken would have been to categorise topics and keywords into these two categories (humanities or technical) and look at the statistics of combinations: which technical term and humanities term make a very rare and surprising combination?

 

One topic that I didn’t expect to come up so quickly during the discussions was the ethical considerations of machine learning and the biases that arise at different stages of a DH project. From the first day, this came up in almost every discussion, so that I often had to ask participants to hold their thoughts until Thursday, the day we had dedicated to bias in ML for DH. Stephanie Brandl, Anne Baillot and I had already given a workshop on this topic at DHd 2020, where it somewhat became the unofficial conference theme: it was discussed (besides our workshop) in the keynote by Julia Flanders and in another panel discussion, and was also picked up in the closing talk by Christof Schöch. I think it is a very positive development that, while in earlier workshops of the MALT project the participants weren’t really interested in ethical considerations of ML, it then became a hot topic amongst researchers (as at DHd 2020) and is now something that students actively request to discuss in their courses.

On Thursday, we gave an updated version of that DHd workshop, with Stephanie Brandl talking about bias in NLP and bias mitigation in word embeddings, and me talking about bias in DH more generally.

At the end of the workshop, it didn’t really feel like an end: the teams all wanted to continue working on their projects, there were students planning to apply for research assistant positions at the labs of the invited speakers, and, of course, bachelor’s and master’s theses in the making.

The workshop website can be found here: https://workshop-dh.ml.tu-berlin.de/

The course material resides in a GitHub organisation: https://github.com/workshop-dh-ml

With the presentation slides in an individual repository: https://github.com/workshop-dh-ml/talks

And the projects, including the students’ contributions, in individual repositories.



