
Mid-Term

I will finish writing my mid-term report for the German Research Foundation this afternoon. This, together with last Monday's presentation, has led me to formulate our achievements and our goals more clearly. The good thing is: I was not overwhelmed and depressed by the to-do list when I saw what the already-done list of the last two years contains.

I spent last winter and spring running from one meeting to another, trying to figure out how to organize cooperation with different institutions so as to present an edition that does not pretend to re-invent things that already exist. I have the feeling that this is finally taking real shape, especially with the Boeckh corpus. Finding a satisfying presentation of our Boeckh metadata collection, combined with the edition of interesting Boeckh documents, is the next big thing on the to-do list. I have been looking forward to reaching that point for over a year and am totally excited about working on it. Plus, it is made easier by the fact that we don't depend on anyone else to move on: we have the data, we just have to think about how to organize and present them.

There will be further meetings, of course, on the three other works in progress: synchronizing with the Personendatenrepositorium, connecting text and image more finely, and comparing our manuscript transcriptions to the first prints hosted by the Deutsches Textarchiv (see the toy sketch after this paragraph). What will make those meetings easier than the first ones last winter is that I now have a more precise idea of what the profile of our edition has to be. It has to perform well in terms of both quantity and quality. Quantity, because we want, in the end, to analyse intellectual networks on a representative scale and need at least a mid-sized corpus to do so (not a small one, as is currently the case); this aspect was clear from the beginning. The harder question to answer was that of quality in detail, especially since it can conflict with quantity: if you encode every other word, you need months for a single document and never reach the point where the whole convolute is encoded.
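To give an idea of the kind of comparison meant with the Deutsches Textarchiv prints, here is a minimal sketch using Python's standard difflib. This is not our actual workflow: the two sample witnesses and the word-level alignment are purely illustrative assumptions.

```python
# Toy sketch: align a manuscript transcription with the first print
# word by word and report where the two witnesses differ.
import difflib

manuscript = "Ich habe den Brief gestern Abend erhalten".split()
first_print = "Ich habe den Brief gestern erhalten".split()

matcher = difflib.SequenceMatcher(a=manuscript, b=first_print)
for op, a1, a2, b1, b2 in matcher.get_opcodes():
    if op != "equal":
        print(op, manuscript[a1:a2], "->", first_print[b1:b2])
# prints: delete ['Abend'] -> []
```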

Still: the text-genetic part of our encoding is more than a scholarly requirement shared by everyone in the group, where nobody considers it an option not to render a correction, a missing letter, or a hole in the page. Our two views of the transcription (a diplomatic version and a reading version) are a real originality of our edition and should be one of its strengths. There is still some work to be done on the definition of text genesis and its implementation in the encoding and the visualization (encoding at the level of a single letter makes it difficult to generate a clean visualization). But these are interesting questions as well, and I think we can define a clear framework within this winter.
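To make the two-views idea more concrete, here is a minimal sketch assuming TEI-style del/add/supplied markup: one encoding, two renderings. The sample line, the bracket conventions and the render helper are illustrative assumptions, not our actual pipeline.

```python
# Toy sketch: derive a diplomatic view and a reading view
# from one TEI-style encoded line.
import xml.etree.ElementTree as ET

SAMPLE = ('<line>Er schrieb den <del>Brif</del><add>Brief</add>'
          ' am M<supplied>o</supplied>rgen.</line>')

def render(elem, mode):
    """Flatten one encoded line into plain text.

    mode='diplomatic' keeps deletions (in square brackets) and marks
    supplied letters; mode='reading' keeps only the corrected text."""
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == "del" and mode == "diplomatic":
            parts.append("[" + (child.text or "") + "]")
        elif child.tag == "add":
            parts.append(child.text or "")
        elif child.tag == "supplied":
            letter = child.text or ""
            parts.append("(" + letter + ")" if mode == "diplomatic" else letter)
        parts.append(child.tail or "")
    return "".join(parts)

line = ET.fromstring(SAMPLE)
print(render(line, "diplomatic"))  # Er schrieb den [Brif]Brief am M(o)rgen.
print(render(line, "reading"))     # Er schrieb den Brief am Morgen.
```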

When we have more text (hopefully in the spring), we will be able to think about what can be automated in the encoding. And then we will all relax and encode with our toes, letting the computers do all the work…
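Just to give an idea of what "automatable" could mean, here is a toy example of pre-tagging known person names from an authority list before the human encoding pass. The name list, the identifiers and the tag choice are assumptions for illustration, not project decisions.

```python
# Toy sketch: wrap known names in TEI-style persName elements
# so the human encoder only has to check, not type, them.
import re

known_persons = {"Boeckh": "p1234", "Chamisso": "p5678"}  # hypothetical IDs

def pretag(text):
    """Wrap each known name in a persName element pointing to its ID."""
    for name, pid in known_persons.items():
        text = re.sub(rf"\b{re.escape(name)}\b",
                      f'<persName ref="#{pid}">{name}</persName>', text)
    return text

print(pretag("Boeckh schrieb an Chamisso."))
# <persName ref="#p1234">Boeckh</persName> schrieb an
# <persName ref="#p5678">Chamisso</persName>.
```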

 




