Of ghosts and metadata

While writing my paper for the Biography Conference, writing my paper for the Frankoromanistentag, trying to finish up the grant application for the Boeckh Encyklopädie, and settling the OCR issue – thus turning my attention to very different fields and objects – I have been wondering which of the research objects I am manipulating are real and alive, and which are more speculative and less substantial: constructions of a ghostly nature, if you want.

It was Markus Schnöpf who gave me the first hunch when we discussed the composition of the Berlin Digital Humanities Group and he underlined how important the museology faction is. I was not really able to process all the implications of that consideration at that moment. I only thought: the hegemony of the text sciences is a very narrow point of view, come to think of it. I have been so infatuated with texts for so long that I forgot that other items can be placed on the same level as texts. Especially items that have just the same archival function: the function of keeping things alive through memory. But are archives alive, or are they only ghostly versions of what they used to be, fading away, insubstantial, flickering flames? Are stones or papyri more alive than texts? Do we need the original materiality, the manuscript itself, almost as an object of adoration, to keep things like texts alive – knowing perfectly well that they decompose anyway, never stay the same, are never the same as they were just a second before?

Then the ghosts turned up again in another, completely different discussion. While Laurent was (yet again) trying to convince me to have the OCRs done (don’t worry, Laurent, I am convinced now), he said to me that images like scans without searchable (even badly OCRed) text are invisible, nonexistent as digital research items. Here are my ghosts!, I thought. Insubstantial images that are connected to nothing. What makes digital ghosts is their lack of connection, of traceability, their absence of metadata. Metadata are existence factors.

I understand and accept the crucial role metadata play in the digital world. Do I really have a choice about that? Whoever refuses to admit that metadata play such a central role inevitably gets lost in the feeling that the web is unstructured, that you are as likely to find everything as to find nothing, that it is all one big fascinating whirl you cannot get any kind of grip on. A dance of shapeless ghosts. In that sense, I don’t think that any form of digital science can do without really deep epistemological thinking about metadata and their structure. In other words: it is essential that we, the researchers, take part in the shaping of the metadata that are relevant for our field. And that we do it now, because in twenty years, we will have only our eyes left to cry with, because others will have done it for us in a way that does not fit our research objects.

On the other hand, it cannot and should not all be about the metadata. If I consider things not from a structural point of view, but from the point of view of the inevitably super-specialized research that makes me want to go to work every day, metadata are dead. Real life is elsewhere. Real life is when meaning emerges, and meaning can emerge from anything, but most likely it emerges from something. From a manuscript, from a discussion, from a fight for a scholarship that is able to see ghosts and keep their trace – whether alive or not.


Word of the Day: Workflow

I started noticing the recurrence of the word “workflow” about a year ago. That was also around the time I began to understand what metadata are (I come from a galaxy far, far away, as you can see). At first, I imagined a nice little boat gently floating on a river: me and my team along the workflow. It took me a while to accept that workflow problems involve thinking in terms of non-linearity and arrhythmia.

In the traditional way my discipline works, the most you have to coordinate is the topics of papers when you plan a conference, or the exchange of corrections between author and editor for a book. The idea of working in direct connection with institutions, and not just shaking hands and putting their names down as “cooperation partners”, completely changed the way I do my job. Half of it is, in fact, another job.

For the time being, we are working out workflows with five institutions:

– the easiest workflow to implement so far – and the only institution with which we already have a signed Memorandum of Understanding – was with the Stiftung Stadtmuseum Berlin. We received their digitizations and used them in our edition; anyone wanting to re-use them has to ask for their permission; it is all settled and implemented on the edition page (which, I promise, you will get to see at the beginning of April);

– the Staatsbibliothek zu Berlin-PK: the seemingly simple idea of having our digital edition link to the digitizations hosted in the StaBi’s digital library involves a series of work steps before the linking can actually be realized. We are now working on that specific workflow, which presupposes a mutual effort in coordinating the granularity of the information we put in the metadata (which both the StaBi and we want to use). Slowly but surely, I am getting acquainted with librarian data formats (a first sketch of what such a link could look like follows after this list);

– some of the texts we edit are major works of German Romanticism, so the comparison between the manuscript and the first print is a major scholarly issue. To realize it, we are working on a workflow with the DTA (Deutsches Textarchiv, at the Berlin-Brandenburgische Akademie der Wissenschaften). The ball is in our court right now: we have to produce a document that can be compared with theirs. That is not so easy, since it does not just concern local amendments; there are, for instance, whole paragraphs missing in one version – and the ending of the text is completely different, too (see the second sketch after this list);

– for the letters, I have emphasized often enough how important the index of persons is to me, especially with the aim of reconstructing intellectual networks. We are working out – guess what: yes, a workflow! – with the PDR (Personendatenrepositorium, also at the Berlin-Brandenburgische Akademie der Wissenschaften) in order to retrieve information from their database, and for them to be able to use some of the specific information we gain from the unedited manuscripts (see the third sketch after this list);

– within the Boeckh project, the cooperation with the University Library is a key element. As with the StaBi, we have to find a way to mutually enrich our metadata. Unlike with the Staatsbibliothek, though, the common ground is not a limited number of folios but thousands of books to be digitized. The difference in scale involves – well, a different kind of workflow. I am a little overwhelmed by the figures on that part of the project, I must admit. But we still have some time to figure it all out properly.
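
Here is, first, a minimal sketch of what the StaBi linking could look like in TEI. The URL and all identifiers are invented placeholders, not actual StaBi references:

<!-- Invented example: the page break in the transcription points to a
     surface whose graphic URL lives in the library's digital collections. -->
<facsimile>
  <surface xml:id="fol1r">
    <graphic url="http://digital.staatsbibliothek-berlin.de/dms/PPN000000000/0001"/>
  </surface>
</facsimile>
<text>
  <body>
    <div type="letter">
      <pb facs="#fol1r"/>
      <p>Text of the first manuscript page…</p>
    </div>
  </body>
</text>

The coordination work consists precisely in agreeing on what the identifier in that URL refers to – a page, a folio, a whole convolute – and keeping it stable on both sides.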
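
Second, the manuscript/print comparison. TEI’s critical apparatus can record even a passage missing from one version; here is a sketch with invented witnesses and an invented reading:

<listWit>
  <witness xml:id="ms">manuscript</witness>
  <witness xml:id="print">first print (DTA version)</witness>
</listWit>
<!-- Inside the transcription: -->
<p>Der gemeinsame Text,
  <app>
    <rdg wit="#ms">und eine lange Passage, die nur die Handschrift enthält,</rdg>
    <rdg wit="#print"/>
  </app>
geht dann weiter.</p>

The empty rdg element is what encodes the absence – exactly the kind of difference that goes beyond local amendments.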
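
And third, the person index. The whole point is that a name in a letter should not remain a mere string, but point to a person record that both the PDR and we can enrich; the reference below is an invented placeholder, not an actual PDR identifier:

<p>Gestern sprach ich mit <persName ref="http://pdr.bbaw.de/person/0000042">Boeckh</persName> in der Akademie.</p>

Whatever the exact identifier scheme turns out to be, that ref attribute is the hook on which the mutual data exchange will hang.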

I hope to arrive at concrete cooperations like these also with the Archive of the Berlin-Brandenburgische Akademie der Wissenschaften and the University of Chicago Library / Special Collections. And hopefully with others too. At some point.

One thing I can say for sure is that I am very happy about and proud of the way we use XML. The fact that we, the editorial team – the people who transcribe, comment, and write papers on the literary value of those texts – encode them directly in XML/TEI makes it easier to set up all those cooperations. In fact, it is what makes direct workflows possible in the first place. That is one decision I will never be able to thank Laurent enough for.

My biggest worry is still, of course, long-term archiving. On that point too, I think working in XML/TEI will be a major strength of the project. Still, I am asked more and more often by the cooperation partners: “and how much longer does the project funding last?”. June 2015 is coming closer, especially considering the time it will take to implement those workflows. But never would I have dreamed of getting this far in these five years, and I keep my fingers crossed that I will be given the opportunity to pursue this work after this funding phase.

What’s in a Boeckh?

I have already mentioned August Boeckh several times in the past (here, here and here, for instance), but I have never explained the importance of the Boeckh component in the big picture of my project. Since we will have a team meeting in a couple of days, this seems like a good time for a recapitulation of the workflow and the objectives.

First, let me tell you how I came to work on August Boeckh! When I developed my project on Berlin intellectuals around 1800, I thought that there was nothing new to find on the “big” intellectuals in the archives, nothing unknown to edit, and possibly not so many new, clever ways of working on printed material without repeating what had already been said. I would still argue the point when it comes to Fichte, Hegel or Humboldt, personae I avoid tackling head-on. But at some point, I had to ask myself again what a “big” and a “small” intellectual actually are, since there are obviously some in-between categories, some “mediumly important” intellectuals whose lives and works were not as over-explored as I had first thought. August Boeckh and his papers turned out to be an almost completely unexplored territory.

August Boeckh belongs to the generation of young scholars who built up the Berlin University. Wilhelm von Humboldt had by no means done all the work necessary for the University to be a stable academic environment when he left his ministry. It was in the years between 1811 and 1820, the so-called “Gründerjahre”, that the relations between the faculties were clarified (the first statutes had envisaged an explicit hierarchy that put the Philosophical Faculty in last place), that ways of reacting to structural aporias like a conflict between University Senate and University Rector were settled, that a University Library was organized, etc., etc. August Boeckh came to Berlin in 1811 and stayed there until his death in the 1860s. He served the University and the Academy of Sciences, was in contact with all the major Academies in Europe, and started the first Academy Project (still running, by the way), on Greek inscriptions. He was Professor of Philology, but his activities inside the University extended to being officially appointed to give all official speeches (first in Latin, then in German) and to leading the Philological Seminary, which he founded and directed until his death. He was a central piece of the intellectual puzzle I am trying to reconstruct, at the intersection of politics (see the contents of his speeches), science policy (see his relationship with Alexander von Humboldt), University structure (he was dean and rector several times) and the training of the elite that made 19th-century Germany.

Needless to say, he was a major figure in Classical Philology, so his printed work is solidly known; it was printed and re-printed. But there remains much to learn about his work at the University and at the Academy of Sciences. This is the dissertation topic I suggested to Sabine Seifert when I hired her to work in the group, and which she has been exploring for a year and a half now. Before I had that funding, I had been working with three other people from the Humboldt University, trying to figure out where to begin and how to proceed: Thomas Poiss, a Hellenist who had already published on the topic, Colin King, a philosopher working for the August-Boeckh-Antikezentrum, and Christiane Hackel, a historian who had worked on Droysen’s reception of Boeckh in her Master’s thesis.

We are now working in three directions:

1) August Boeckh’s papers: in cooperation with the Berlin archives that hold manuscripts by Boeckh, we are putting together an overview of what is left. The decision to begin with the Berlin manuscripts is a pragmatic one; of course, the overview should extend to all manuscripts in a further step. As much as I think this will be doable on a national scale, I am afraid it will become more complicated once we include all the lecture notes from his classes, since he gave a lot of them, to large audiences, and this kind of manuscript is mostly to be found in the USA. The core of the workflow on this part of the project is really the coordination with the institutions. The idea is to work in XML in order to make the data exportable to the Kalliope database and open to enrichment in both directions (us getting information from Kalliope, Kalliope getting information from us); a sketch of such a record follows after this list. We have a whole bunch of datasets now (thanks to the funding of the De Gruyter Stiftung, which allowed us to hire Julia Doborosky to do most of that work – I paid another big part of it with my Caroline von Humboldt award money, it seemed only fair…).

2) August Boeckh’s book collection: I explained here how the University Library of the Berlin University was born and how long it took before it became part of the University’s overall concept. Boeckh knew very well how important the University Library was for the institution. He bequeathed all his books to the University Library. Those books (very few of them were kept by the family) were then added to the existing stock without being systematically recorded. Some have an ex-libris, some do not. Some were listed by Boeckh in the handwritten catalogue he also donated, half of them were not. Boeckh’s list contains about 6,000 items. Until now, we have moved on two fronts in parallel: one is an XML/TEI record for each entry of Boeckh’s catalogue, identifying the book, the edition and the volume, and then trying to find the corresponding volume in the University Library (a sketch of such an entry follows after this list); the other is a less exhaustive list of books that can be identified on the basis of the ex-libris catalogue. We will have to merge the two at some point. The cooperation with the University Library that Laurent and I worked out a couple of weeks ago consists, as in point 1), in setting up a workflow that allows mutual data enrichment. The final aim is a digitization program that will make it possible to virtually reconstruct Boeckh’s library (with the big advantage that his own books contain his personal remarks and notes in the margins…). One of the things I expect from this part of the project is insight into mechanisms of reception and genesis: Which works of Classical literature did Boeckh use? How differently could he refer to a source depending on the output he had in mind? Combined with the lecture notes I evoked in point 1), one question that fascinates me is to what extent he used his sources differently in his lectures and in his published works, where the censorship mechanisms and the audience are not the same.

Which takes us to:

3) the edition of Boeckh’s Encyklopädie und Methodologie der philologischen Wissenschaften: this is a class Boeckh gave 26 times over the course of 56 years, always based on the same handwritten set of notes. The first draft dates back to 1809, and Boeckh kept enriching it afterwards. The only existing edition was realized in the late 19th century by a student of his who wanted to render the totality of Boeckh’s thought, so that no trace of the genesis of the work is to be found there. The original manuscript of that very set of notes is held by the Archive of the Berlin Academy of Sciences, and a first transcription reconstructing the different moments of the genesis of the work was realized in the 1980s, without being finished or published. What we want to do is start from there to realize a hybrid edition (digital in XML/TEI + paper) offering the possibility to collate (is that the right word for “kollationieren”?) the different text versions; one possible encoding of those layers is sketched after this list. We are working on the funding application right now, which is why we will be meeting on Tuesday…
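
For point 1), here is a sketch of what a manuscript overview record could look like in TEI. All the content is invented, but the fields mirror the kind of information a Kalliope dataset needs:

<msDesc>
  <msIdentifier>
    <settlement>Berlin</settlement>
    <repository>(holding archive)</repository>
    <idno>Nachlass Boeckh, Kasten 1, Mappe 2 (invented)</idno>
  </msIdentifier>
  <head>Lecture notes (invented example)</head>
  <physDesc>
    <p>34 folios, German old script.</p>
  </physDesc>
</msDesc>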
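
For point 2), one entry of Boeckh’s handwritten catalogue could become a record like this (the shelfmark is invented; the title is Boeckh’s own Pindar edition, just to have a plausible example):

<bibl xml:id="cat-entry-0042">
  <author>Pindarus</author>
  <title>Pindari opera quae supersunt</title>
  <pubPlace>Leipzig</pubPlace>
  <date>1811</date>
  <!-- the identified copy in the University Library; invented shelfmark -->
  <note type="exemplar">copy with ex-libris, shelfmark Xa 1234</note>
</bibl>

Merging the two fronts will then amount to matching such records against the ex-libris list.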
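
And for point 3), TEI allows us to declare the revision campaigns of the manuscript once and to attach every addition or deletion to one of them, which is one way of encoding the genesis we want to make collatable. Dates and wording are invented for illustration:

<profileDesc>
  <creation>
    <listChange>
      <change xml:id="draft1809">first draft, 1809</change>
      <change xml:id="rev-later">one of the later revision campaigns (invented)</change>
    </listChange>
  </creation>
</profileDesc>
<!-- Inside the transcription: -->
<p>die <del change="#rev-later">Kenntniß</del>
   <add change="#rev-later">Erkenntniß</add> des Erkannten</p>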

And as for myself, one of the very interesting questions I am working on for this summer is in what form to include all this material in the eXist database and how to present it together with the digital edition.

If you want to read the most recent publications on this:

* Romy Werther published an edition of the correspondence between August Boeckh and Alexander von Humboldt.

* Thomas Poiss published a paper dealing, among other things, with Boeckh’s speeches in my volume Netzwerke des Wissens. Das intellektuelle Berlin um 1800.

… and contact me if you want more!


Patching Part Two: Script Types

Here are two other posts that belong to the prehistory of my blogging:

https://upload.wikimedia.org/wikipedia/commons/6/65/Lessing_Kleist-Brief.jpg

#1: Latin Salsa

Dated Sept. 26th, 2011

As you can see in this letter written by Lessing around 1750, it takes particular training just to be able to read the kind of script used in Germany in the 18th and 19th centuries (this letter is actually a pretty nice and clean example – it usually gets far worse).

The texts we publish in our digital edition are mostly – but not exclusively – written in this kind of script, called German old script (“alte deutsche Schrift” in German, also known as “Kurrentschrift”). The problem we were confronted with was that the script changes very often, especially between Latin script and German old script. The importance of these script changes is obvious in terms of the materiality of the document, but they also help describe the literary practice of the different authors. In traditional German editions of texts from that period, it is standard to render the script differences visually, most of the time by changing the font or the font size.
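
The encoding mechanism itself is not the problem: TEI lets us declare the hands and scripts once and mark each shift in the text, roughly like this (a sketch, not necessarily our exact markup):

<handNotes>
  <handNote xml:id="kurrent" script="Kurrent">German old script</handNote>
  <handNote xml:id="latin" script="Latn">Latin script</handNote>
</handNotes>
<!-- Inside the transcription: -->
<p>und dann plötzlich <handShift new="#latin"/>subito latine<handShift new="#kurrent"/> und wieder deutsch.</p>

The trouble begins when the value of that script attribute is supposed to come from a standardized list.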

But we could not find German old script in the ISO standards (which we use in our encoding). So we (that is, in this case, Laurent) bravely sent a request to have an ISO number attributed to it.

Here is the answer we received from the ISO committee:

“After submitting your email to the experts, it seems to me that Kurrent is just old German handwriting which uses the Latin script.”

Of course, the idea that this would be “just another” Latin script is irritating for those of us who spend hours trying to figure out what these characters are. But the real problem is that it is precisely the difference from Latin script that we want to make noticeable.

So the little salsa with the ISO committee might last a while, since we will certainly try to have Kurrent considered, if not a non-Latin script, then at least a defined sub-class of Latin script.


#2: Paso Doble

Dated Sept. 30th, 2011

I might have to revise my judgement on Wikipedia. I have always considered it helpful only when you already have a substantial amount of background knowledge on the topic you are looking into. In the follow-up to my ISO-related worries, I had to admit that I gained some interesting insights through Wikipedia.

Laurent told me about the kind of Paso Doble that ISO and Unicode danced for some years before starting to work hand in hand. After he explained the historical meaning of the ISO Latin script to me, I was somewhat more inclined to consider German old script/Kurrent as indeed related to it, Latin being a very wide family of scripts. So much for the soothing of my hermeneutic irritation.

This didn’t help much, though, as far as our encoding was concerned: we still had to differentiate between two types of handwriting. As Laurent explained to me, ISO standards were primarily conceived for printed characters, not for handwritten ones. So it only seemed logical to use the ISO standard for the Fraktur script – in which texts written in German old script were actually printed in the 18th and 19th centuries – to mark the difference from the non-German Latin script.
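
In practice, that could look like this (a sketch: “de-Latf” combines the ISO 639 language subtag with the ISO 15924 script code Latf for Fraktur):

<p xml:lang="de-Latf">Er hat die Stelle <seg xml:lang="la-Latn">summa cum diligentia</seg> verglichen.</p>

The Latin phrase is marked as Latin language in Latin script; the surrounding German as German in Fraktur, standing in, by convention, for the handwritten Kurrent.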

But the story doesn’t end there. The first problem is that Fraktur was used not only as a printed counterpart of German old script, but also as a printed counterpart of Sütterlin, a much simplified version of German old script used mostly in the 20th century. In ISO 15924 terms, both would simply be: Latf, 217. But it is pretty unsatisfactory to stay at such a general level, where German old script/Kurrent and Sütterlin would basically be considered the same. The deeper problem is probably that working with handwritten material is not really compatible with a system based on the differentiation between scripts and fonts (what is a script, and what is “only” a font?).

And why (according to Wikipedia) does Fraktur have a Unicode number attributed to it, while neither Kurrent nor Sütterlin does?

For those who are deeply bored by these considerations and would like to actually get more than far-fetched metaphors, I recommend watching Strictly Ballroom.


Good news, everyone!

On November 17th, something great happened. And by that, I do not mean my paper at the Boeckh conference, which left mostly some unease in the room – due, I guess, to the fundamentals it contained: interdisciplinary work, an edition that selects its elements according to precise scholarly questions and not according to the origin of the documents, and digital on top of it all. Once I got past the surprise of my flop, I found it an interesting (non-)reaction.

No, the really brilliant moment of that day was the meeting between Jutta Weber, Laurent and me in the morning. It was a peculiar situation to sit between two great librarian minds talking preservation structures – they lost me in the technicalities at some point, I must admit. I was fascinated by the vision they developed within that hour and a half.

To me, it was a dream coming a little closer to being true. Those of you who know me (and especially those of you who experienced me at the DFG meeting for Emmy Noether junior research group leaders in July 2011 or within the circle of Berlin der Begegnung in February 2011, the two occasions on which I talked about it as a project particularly dear to my heart) will probably remember that my ideal aim as a scholar is to bring people closer to archives. To give people (more people than currently know about it) a feeling for what an archive is: the place, the reality of the documents, the carnal relationship to the piece of paper. And I am deeply convinced, as paradoxical as it may seem at first, that online editions displaying digitizations are THE way to reach that aim. People who see manuscripts online are more likely to want to see them for real. To wonder where they are, how they landed there, what other documents surround them, who is interested in them. To get in touch with our paper history.

So the big plan we started to work on on Thursday morning is a structure in which we would synchronize our metadata with those of the Staatsbibliothek (and Kalliope at large), in both directions. The Library would be the guarantor of the long-term archiving of a mass of information, with a permanently valid identification number making it possible to give each document a totally stable reference. Users would benefit from the most up-to-date input given by the researchers. Isn’t it just the best starting point ever??
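
Concretely – and with an invented identifier, since the scheme is exactly what still needs to be settled – each of our documents would carry the Library’s persistent number in its metadata:

<msIdentifier>
  <repository>Staatsbibliothek zu Berlin-PK</repository>
  <idno type="kalliope">DE-611-HS-0000000</idno>
</msIdentifier>

Everything else – our scholarly enrichments, the Library’s preservation copies – can then hang off that one stable number.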

I will keep you posted on the developments. Actually, there is probably more to say about it already, but, hey, I’m already in my pajamas!