Writing an ODD for the EHRI Online Editions — Preparatory Work
A little context
The source: EHRI Online Editions
The European Holocaust Research Infrastructure (EHRI) is a transnational organization which promotes, among other things, collaboration on Holocaust research and easy access to scattered sources. In 2018, they published their first digital edition: BeGrenzte Flucht. Since then, they have worked on several other editions and created the EHRI Online Editions as an online service.
An EHRI digital edition consists of a collection of documents intended to shed light on a particular aspect of the Holocaust. As of May 2023, four digital editions had been published online:
- BeGrenzte Flucht (“Limited Escape”) (2018): After the annexation of Austria in 1938, Czechoslovakia became one of the most important countries of exile for Austrian Jews. It applied an increasingly restrictive refugee policy, which resulted in the closing of the Austrian-Czechoslovakian border to Austrian Jews. The edition is composed of official documents, press articles, personal accounts, and correspondence relating to these events.
- Early Holocaust Testimony (2020): Accounts of the persecution of Jews from the Nazi takeover of power in Germany (1933) to the Eichmann Trial (1961).
- Diplomatic Reports (2021): Reports on the persecution and murder of European Jews during the Second World War, written by the diplomatic staff of Denmark, Italy, Japan, and the United States. Reports from other countries will eventually be added.
- Von Wien ins Nirgendwo: Die Nisko-Deportationen 1939 (“From Vienna to Nowhere: the 1939 Nisko Deportations”) (2023): From October 1939, the Central Office for Jewish Emigration deported thousands of Jews to Nisko am San (Poland). The majority of them ended up in the Soviet Union, where their traces were lost. The documents come from various countries and are testimonies of both victims and witnesses.
The goal: One Document Does-it-all
The Text Encoding Initiative (TEI) is now widely implemented as a standard to encode textual documents in very fine detail. However, as the TEI Guidelines are very extensive, the encoding of a TEI-XML digital edition project needs to be thoroughly documented.
The ODD—which stands for “One Document Does-it-all”—is a TEI-XML file that documents and formalizes encoding recommendations by establishing specifications. The ODD can then be transformed into a RelaxNG (.rng) schema, which can be applied to the corpus to ensure that the encoding specifications are met. The author of the ODD is responsible for its content and use of TEI elements.
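Once the schema has been generated from the ODD, checking the corpus against it can be automated. Below is a minimal sketch using lxml, assuming a schema file named ehri-editions.rng and a corpus/ folder (both names are my own placeholders, not the project's actual layout):

```python
from pathlib import Path
from lxml import etree

# Load the RelaxNG schema generated from the ODD (file name is an assumption).
relaxng = etree.RelaxNG(etree.parse("ehri-editions.rng"))

# Validate every TEI-XML file in the corpus and report the failures.
for tei_file in Path("corpus").glob("**/*.xml"):
    doc = etree.parse(str(tei_file))
    if not relaxng.validate(doc):
        print(f"{tei_file} does not match the schema:")
        for error in relaxng.error_log:
            print(f"  line {error.line}: {error.message}")
```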
There are three main ways of creating an ODD:
- Creating a model with the TEI Consortium’s web application ROMA.
- Using the “ODDbyexample” XSLT stylesheet.
- Writing it from scratch.
For interoperability purposes, the ODD and schema for the project can be shared online, in a GitHub repository for example. This documentation can then be reused for another edition, or even for a different project. In the case of EHRI Online Editions, the goal is to write an exhaustive ODD with schema specifications that can be applied to new collections in order to ensure coherence and consistency throughout the digital editions.
Step-by-step: preparatory work
Parsing the corpus
When I started my internship, four editions had already been published online, so I had to adapt to a fairly large pre-existing corpus. The first step to writing the “perfect” ODD was therefore to get acquainted with the encoding practices of the various editors so far. Because the corpus currently comprises over 300 files, parsing them manually was not an option, so I decided to write Python scripts.
Retrieving data from the EHRI files: the extraction script
To methodically analyze the corpus, I first wrote a script to extract the elements used in the EHRI files (a sketch of such a script follows the list below). I expected the script to:
- Find all the element tags in the files.
- Get the attributes associated with each element, as well as the attributes’ values.
- Count every occurrence for each element and attribute.
- Order the information in a CSV output file.
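Here is a minimal sketch of what such an extraction script could look like. It is not the actual EHRI script: the corpus/ folder and the CSV layout (one row per element, plus one row per element/attribute/value combination) are my own assumptions.

```python
import csv
from collections import Counter
from pathlib import Path
from lxml import etree

element_counts = Counter()
attribute_counts = Counter()

# Walk the corpus (directory name is hypothetical) and tally every element,
# every attribute, and every attribute value encountered.
for tei_file in Path("corpus").glob("**/*.xml"):
    tree = etree.parse(str(tei_file))
    for element in tree.iter():
        if not isinstance(element.tag, str):  # skip comments and processing instructions
            continue
        tag = etree.QName(element).localname
        element_counts[tag] += 1
        for name, value in element.attrib.items():
            attribute_counts[(tag, etree.QName(name).localname, value)] += 1

# Write the tallies to a CSV file, ordered by element name.
with open("elements_overview.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["element", "attribute", "value", "count"])
    for tag, count in sorted(element_counts.items()):
        writer.writerow([tag, "", "", count])
    for (tag, attr, value), count in sorted(attribute_counts.items()):
        writer.writerow([tag, attr, value, count])
```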
However, I encountered some problems with the results: even though the script retrieved everything I expected, it returned the information in a form that was hard to exploit.
With the help of my supervisor Floriane, I rearranged the results in a more readable way, and they became the basis of the overview table I created.
Verifying information: the searching scripts
For the overview table to be complete, I wrote several scripts to locate specific information in the files, because in some cases I needed to know exactly which files had to be checked.
For example, the <ab> element—which stands for “anonymous block”—was only present in the Nisko collection, and only occurred six times, associated with a @type="dinkus" attribute (fun fact: a “dinkus” is a typographic symbol used to divide the text into sections). In order to understand why the <ab> element was chosen and figure out whether or not it should be included in the final ODD, I needed to know exactly which files contained this specific element.
Another example that required further consideration was the <distinct> (linguistically distinct) element. It appeared a few times in only two collections (BeGrenzte Flucht and Early Holocaust Testimony), with two possible attributes: @type and @xml:lang. The script returned a list of ten files to investigate, which allowed me to realize that the editors had used <distinct xml:lang=".."> instead of <foreign xml:lang="..">. This led to recommending the <foreign> (foreign language) element in the ODD.
I wrote five searching scripts in total that parse the corpus and, for a given element or attribute, return a list of:
- Files in which the element appears (1), is missing (2), or appears more than once (3);
- Files in which the attribute appears (4);
- Attributes associated with the given element (5).
My goal now is to merge all the searching scripts into a single one, with a function for each previous script, as in the sketch below.
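The merged script could look something like the following sketch; the function names, the corpus path, and the ElementPath queries are my own assumptions rather than the actual EHRI code.

```python
from pathlib import Path
from lxml import etree

CORPUS = Path("corpus")                    # hypothetical corpus location
TEI = "{http://www.tei-c.org/ns/1.0}"      # TEI namespace in Clark notation

def _parse_all():
    """Yield (path, tree) pairs for every TEI file in the corpus."""
    for tei_file in CORPUS.glob("**/*.xml"):
        yield tei_file, etree.parse(str(tei_file))

def files_with_element(tag):
    """(1) Files in which the given element appears."""
    return [path for path, tree in _parse_all() if tree.findall(f".//{TEI}{tag}")]

def files_without_element(tag):
    """(2) Files in which the given element is missing."""
    return [path for path, tree in _parse_all() if not tree.findall(f".//{TEI}{tag}")]

def files_with_repeated_element(tag):
    """(3) Files in which the given element appears more than once."""
    return [path for path, tree in _parse_all() if len(tree.findall(f".//{TEI}{tag}")) > 1]

def files_with_attribute(attr):
    """(4) Files in which the given attribute appears.
    Namespaced attributes (e.g. xml:lang) must be passed in Clark notation."""
    return [path for path, tree in _parse_all() if tree.findall(f".//*[@{attr}]")]

def attributes_of_element(tag):
    """(5) Attributes associated with the given element, across the corpus."""
    found = set()
    for _, tree in _parse_all():
        for element in tree.findall(f".//{TEI}{tag}"):
            found.update(element.attrib)
    return sorted(found)

# Example: list the files that contain an <ab> element.
print(files_with_element("ab"))
```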
Creating the overview table
As I have previously mentioned, the results of the extraction script allowed me to create a thorough overview table, with the aim of making the ODD easier to write and helping the EHRI editors homogenize their collections afterwards.
The script’s results were sorted into four files—one for each collection—and into two separate lists, depending on whether the element appeared in the <teiHeader> (metadata) or the <body> (text). In the overview table, the elements are ordered alphabetically by tag name, and I added check-boxes to mark the part of the file in which each element occurs (<teiHeader> or <body>).
For each collection, the number of files is indicated between parentheses—BeGrenzte Flucht: 104; Early Holocaust Testimony: 119; Diplomatic Reports: 72; Nisko: 40—and for each element, I filled in the total number of occurrences per collection.
When I started thinking about the specifications to include in the ODD, I added the “list of attributes” column. First, it allowed me to better understand the use of attributes in the editions, but most importantly it helped me determine in which cases the values of those attributes should be restricted with a closed or semi-closed list of values—as opposed to an open list. Opting for a semi-closed list of values is an efficient way to avoid spelling mistakes such as "suject" or "subeject" instead of "subject".
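The restriction itself belongs in the ODD (typically expressed with a <valList type="semi">), but the principle can be illustrated with a quick corpus check; the <term> element and the allowed values below are purely hypothetical.

```python
from pathlib import Path
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"

# Hypothetical semi-closed list of values for @type on <term>; in the ODD
# itself this restriction would be expressed with a <valList type="semi">.
ALLOWED_TERM_TYPES = {"subject", "person", "place", "camp", "ghetto"}

# Flag every @type value on <term> that is not in the allowed list, which
# catches typos such as "suject" (corpus path and element are assumptions).
for tei_file in Path("corpus").glob("**/*.xml"):
    tree = etree.parse(str(tei_file))
    for term in tree.findall(f".//{TEI}term[@type]"):
        if term.get("type") not in ALLOWED_TERM_TYPES:
            print(f'{tei_file}: unexpected @type="{term.get("type")}"')
```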
With the columns previously mentioned, I had a good idea of the editors’ encoding practices, and the number of occurrences made it very easy to see which elements were abandoned throughout the project—for example, the <emph> (emphasized for linguistic or rhetorical effect) element only appears in the first edition (BeGrenzte Flucht) and was rapidly replaced by the <hi> (highlighted, graphically distinct) element. The “ODD EHRI” column consists of boxes that are checked only if it is relevant to include the element in the final ODD.
As I have previously explained, some decisions required more thinking than others, and further explanation. The “comment(s)” column allowed me to formulate recommendations for the editors, to explain why an element is not relevant, and to provide a solution adapted to their needs. In the case of bibliographic citations, four elements are available in the TEI: <bibl> (bibliographic citation), <biblFull> (fully-structured bibliographic citation), <biblScope> (scope of bibliographic reference), and <biblStruct> (structured bibliographic citation). At first, the editors tried using <biblScope>, but the <bibl> element is better suited to their needs, as it is more lenient when some pieces of information are missing. Thanks to the searching script, the names of the files containing <biblScope> elements are listed in the “files involved” column.
Overall, the overview table serves three purposes:
- It allows me to keep track of what needs to be developed in the ODD, which is quite a long process—and it is normal not to remember everything at every moment.
- It allows the editors at EHRI to know what needs to be changed in the encoding of the already existing digital editions.
- As the table can be modified, the editors will be able to add a column for every new collection and check that they use the correct elements.
To conclude, writing an exhaustive ODD is a long and meticulous process in which some important preparatory work is necessary. The actual writing of the documentation and the specifications is also a long and meticulous process in itself, one that requires a good grasp of the tools provided by the TEI-XML standard, which I will cover in a future post.