
Storing and sharing the project

I have been working on the DAHN Project for more than a year now: over that time, I have carried out many tasks and developed the project extensively. All of this work has produced a large number of files, whether programs, transcriptions or documentation. It is therefore essential to store them properly and intelligently, so that they can be found as soon as they are needed and made easily available to developers, digital humanists and other interested people.

I – Storing private and limited access files from our project with Sharedocs

A – What is Sharedocs?

Sharedocs is a file manager developed by Huma-Num, a French research infrastructure dedicated to the humanities and digital humanities, created by the French Ministry of Higher Education, Research and Innovation. Huma-Num offers multiple digital services for research programs and projects; Sharedocs is suited to research projects that want to store, exchange, share and work on file-based data (images, texts, etc.).

B – What can be found in Sharedocs?

As a tool accessible via a Huma-Num account, i.e. with access to and an agreement to use a dedicated space, Sharedocs is a restricted-access space, shared between selected people (usually the members of a project). Accordingly, the files stored there are mainly files that need to stay private and can only be shared within the project. In our case, this concerns, for instance, the images used for the segmentation and the transcription of the corpus. The pictures we used were taken by a member of the project for another purpose and, since these images were not commissioned by the Departmental Archive of Sarthe, which keeps the original documents, we, as scholars, do not have the right to distribute them freely. Also stored there are all the files that composed the Berlin Intellectuals website, retrieved from its previous host, the Trier Center for Digital Humanities. This set contains all the elements that run the website: XML files, images, HTML, PDF, scripts, transformation scenarios, etc. These files are also useful because they served as examples and reference points when developing the web application for the digital edition of the ego documents.

II – Sharing the content of the project while progressing with GitHub

A – What is GitHub?

GitHub is “a provider of Internet hosting for software development and version control using Git”1, a software tool that enables collaborative work between programmers by tracking changes in any kind of file. It is mostly used to host open-source projects, but it is also possible to create a private repository, available only to selected people. GitHub offers many features, like bug tracking or continuous integration, and it is also practical thanks to its collaborative tools.

A repository belongs to the person who created it, but it can gain many collaborators. First of all, GitHub offers the possibility to report issues in a project, whether to ask about a specific part of it or to point out an error that has been spotted. After reporting an error, visitors can also propose a correction themselves, by forking the repository (i.e., obtaining their own copy), correcting it and sending the changes back to the original repository with a pull request. This allows a GitHub member to make a correction directly to the project, but under the authorization of the repository’s owner, who has to accept the changes and merge them into the repository. All these modifications are recorded through commits, labelled and numbered, thereby tracking every version of the project and making development easier, especially when it is carried out by several people on a single repository.
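For readers unfamiliar with this workflow, here is a minimal sketch of reporting an issue programmatically through the GitHub REST API with Python. The repository path is the project's real one, but the issue content is purely illustrative, and the sketch assumes a personal access token stored in a GITHUB_TOKEN environment variable.

```python
# Minimal sketch: open an issue on a GitHub repository via the REST API.
# The issue title and body below are illustrative examples only.
import os
import requests

token = os.environ["GITHUB_TOKEN"]  # assumed personal access token
response = requests.post(
    "https://api.github.com/repos/FloChiff/DAHNProject/issues",
    headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "title": "Typo in a letter transcription",  # hypothetical
        "body": "Letter 42, line 12: 'teh' should read 'the'.",
    },
    timeout=30,
)
response.raise_for_status()
print("Issue created:", response.json()["html_url"])
```

Forking, committing and opening the pull request itself are usually done through the Git command line or the GitHub web interface.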

B – What can be found in the repository?

The GitHub repository of the DAHN project is divided into two big folders: “Correspondence” and “Project development”. The first contains the results, the concrete part of the project, while the second contains all the elements necessary to obtain those results. Like every good repository, it also comes with a README file, which documents the content of the repository. In our case, it presents the project, as well as the content of the folders, in more or less detail.

[Figure: GitHub repository tree]

1) Observing the results obtained in the project: the “Correspondence”

The core of the project: the corpus

a. Paul d’Estournelles de Constant’s correspondence

At first, this folder contained only one corpus: Paul d’Estournelles de Constant’s correspondence. The letters are stored there once I have finished transcribing them, usually in batches of ten. This correspondence needs to be kept in a GitHub repository because it is in constant evolution. Firstly, I am still in the “transcription” phase of the project and have not yet started working on the annotation of named entities, which will require many modifications to the encoding. Secondly, it is also possible that, when proofreading the transcription, I find errors in the encoding that I had not previously noticed, which will again call for changes. That is where GitHub is most useful because, as I explained, it tracks all of those changes, and the collaborators of the project can see the modifications that have been made, comment on them or ask for them to be changed if they object to them.

b. Letters and texts from the Berlin Intellectuals

More recently, I started to add to the GitHub repository the transcriptions from the Berlin Intellectuals corpus that I already mentioned in a previous post. This corpus was supervised by one of the heads of the DAHN project and needs to be added to our digital edition, but not before some changes and updates regarding the TEI Guidelines. Indeed, the corpus was encoded a few years ago and, since then, the TEI Guidelines have evolved: some parts of the corpus’s XML tree no longer comply with what is expected, whether in the header and its metadata or in the content of the body. It is essential that the tree be compliant with the TEI, but also with the encoding guidelines we established for ego documents, of which this corpus is now part.
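Compliance with the TEI can at least partly be checked mechanically. As a minimal sketch (the file names are illustrative, and a local copy of the tei_all RelaxNG schema must first be downloaded from the TEI Consortium), lxml can validate a letter against the schema and list the offending lines:

```python
# Minimal sketch: validate a TEI file against the tei_all RelaxNG schema.
# "tei_all.rng" and "letter_0001.xml" are illustrative file names.
from lxml import etree

schema = etree.RelaxNG(etree.parse("tei_all.rng"))
doc = etree.parse("letter_0001.xml")

if schema.validate(doc):
    print("letter_0001.xml is valid TEI")
else:
    for error in schema.error_log:
        print(error.line, error.message)
```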

 

In addition to the corpus: the indexes

A corpus is usually also accompanied by indexes, used to reference all the named entities present in it. Just like the corpora, the two groups of indexes have not reached the same level of completion. For Paul d’Estournelles de Constant’s corpus, I have barely gathered any information for the indexes that were set up, i.e., people, places and organizations. Each of those indexes contains only two to five entries because, for the moment, the annotations have not been made. They are only there to serve as examples of what we will have on a larger scale later in the project. By contrast, the indexes of the Berlin Intellectuals are, logically, already fully developed: they have been accessible and consulted via the website for the last nine years. However, just like the files of the corpus, they also need some updates and additional information. For every index in the group, new data has to be added to comply with the encoding guidelines and with the framework of the website we are developing for the digital edition of those corpora.
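Spotting incomplete entries in such indexes lends itself well to a small script. The sketch below is only an illustration, not part of the project's code: it assumes a TEI index file named index_persons.xml and treats a missing <birth> element as the (hypothetical) guideline requirement.

```python
# Illustrative sketch: flag index entries that lack required information.
# "index_persons.xml" and the <birth> requirement are hypothetical.
from lxml import etree

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

tree = etree.parse("index_persons.xml")
for person in tree.xpath("//tei:listPerson/tei:person", namespaces=NS):
    if person.find("tei:birth", namespaces=NS) is None:
        print("Incomplete entry:", person.get(XML_ID))
```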

 

Rules for encoding the project: the guidelines

Finally, the last important part of this “Correspondence” folder is the guidelines, which document the encoding rules for ego documents, at this point the core documents edited in the project.

I mentioned in three previous posts the reasoning behind my XML tree, the difficulties I sometimes encountered and the steps I had to take to encode my corpus. This is something that many people must do before starting a project, and I thought it would be helpful to document the choices I made in terms of XML tree creation and content, so that the next person to encode a corpus of ego documents won’t have to go through the exact same trouble from scratch.

These guidelines can be seen as a sort of continuation of the TEI Guidelines, but explicitly for ego documents and including some editorial choices made while encoding the d’Estournelles corpus. They are still evolving: while working with the two corpora, it is always possible to find errors or potential improvements, which are first corrected or applied in the corpus and then added to the guidelines.

These guidelines are long and intended to be thorough: they describe each and every aspect, from the title in the <teiHeader> to the values proposed for the @rend attribute of a deletion tag (<del>), covering the constitution of the tree, the chosen tags, the ways to use them and, generally, an example taken from one of the corpora. They are written hierarchically, from the start of the tree to its end, one element after the other, which makes it easier to look up a precise element of the tree. They do not stop at the rules for the corpus: they also establish the content of the indexes, whether for people, places, organizations, contributors or bibliographies.
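To give an idea of how such guidelines can interact with the corpus in practice, here is a sketch under made-up assumptions (the corpus folder and the set of allowed values are hypothetical placeholders, not the project's actual rules) that inventories every @rend value used on <del> elements and flags those the guidelines would not list:

```python
# Illustrative sketch: inventory @rend values on <del> across a corpus
# and flag values outside an allowed set. The folder layout and the
# ALLOWED set are hypothetical placeholders.
import glob
from collections import Counter
from lxml import etree

TEI_NS = "http://www.tei-c.org/ns/1.0"
ALLOWED = {"strikethrough", "overwritten"}  # hypothetical values

counts = Counter()
for path in glob.glob("corpus/*.xml"):
    tree = etree.parse(path)
    for del_el in tree.iter(f"{{{TEI_NS}}}del"):
        counts[del_el.get("rend", "(none)")] += 1

for value, number in counts.most_common():
    marker = "" if value in ALLOWED else "  <-- not in the guidelines"
    print(f"{value}: {number}{marker}")
```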

Lastly, this documentation is written in XML, but the HTML version, as well as a PDF, can also be found in the GitHub repository.
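The post does not describe how those versions are derived; one common way to obtain the HTML from the XML source is an XSLT transformation, as in the sketch below, where the stylesheet name is a hypothetical placeholder and not the project's actual toolchain.

```python
# Illustrative sketch: derive an HTML version of XML documentation with
# an XSLT stylesheet. "guidelines.xsl" is a hypothetical placeholder.
from lxml import etree

stylesheet = etree.XSLT(etree.parse("guidelines.xsl"))
result = stylesheet(etree.parse("guidelines.xml"))

with open("guidelines.html", "w", encoding="utf-8") as out:
    out.write(str(result))
```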

2) Sharing the mechanism responsible for the results: the “Project development”

Fast-tracking the realization of the project: the scripts

The DAHN project is developing a pipeline for the scholarly digital edition of ego documents, divided into six steps: digitization, segmentation, transcription, post-process correction, encoding and publication. To achieve this goal, we are using a corpus of almost 500 letters of varying length. If I had to process them all manually, it would take forever. In order to speed up parts of the process, I have created over the year multiple scripts with different purposes: transcription, encoding and correction. The script contained in the “transcription” folder is not used in the project, since I am working in the eScriptorium interface, but it is usable if needed. The two other folders are the most used, one for the “post-process correction” part of the pipeline and the other for the “encoding”. These scripts have already been mentioned in previous posts, when presenting the transcription and encoding steps of the corpus. Just like the corpus, these files are still evolving, whether to improve the writing of the scripts or to adapt them to new developments in the guidelines.
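To illustrate the spirit of the post-process correction scripts (this is not the project's actual code), a minimal sketch could apply a small, made-up table of common OCR confusions to every plain-text transcription in a folder:

```python
# Illustrative sketch: batch-correct recurring OCR confusions in plain
# text files. The folder and the correction table are made-up examples.
import glob
import re

CORRECTIONS = {
    r"\bcle\b": "de",  # 'd' misread as 'cl' (hypothetical example)
    r"1'": "l'",       # digit one misread for the letter 'l'
}

for path in glob.glob("transcriptions/*.txt"):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for pattern, replacement in CORRECTIONS.items():
        text = re.sub(pattern, replacement, text)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
```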

 

Creating suitable models for the files: the training

An important part of our pipeline is the segmentation and transcription, done with the help of eScriptorium, an interface for the OCR software Kraken. To complete those tasks, I had to develop models for segmentation and for text recognition, by creating ground truth and training models, as I explained in this series of posts. In order to be as transparent as possible about the process I went through to obtain the transcriptions, and to give examples of the results it can produce, I uploaded the models, the training logs and the ground truth used, even though these will not be modified once created. However, this part can still be expanded: while the transcription progresses, new ground truth is created and opportunities for a better model can arise, which will then have to be added to the repository.
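For readers who want to reproduce this kind of training outside eScriptorium, Kraken ships a command-line tool, ketos, which the sketch below launches from Python on PAGE XML ground truth. The folder layout and model name are hypothetical, and ketos options vary between Kraken versions, so check the documentation of your installed release.

```python
# Sketch of launching a kraken recognition training run from Python.
# "ground_truth/" and "dahn_model" are hypothetical; consult the kraken
# documentation for the flags of your installed version.
import glob
import subprocess

gt_files = sorted(glob.glob("ground_truth/*.xml"))  # PAGE XML ground truth
subprocess.run(
    ["ketos", "train", "-f", "page", "-o", "dahn_model", *gt_files],
    check=True,  # raise if the training process exits with an error
)
```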

 

Helping others reuse the files: the documentation

Like the guidelines, which are conceived to help people encode their corpus of ego documents as we did ours, this documentation is made to help people develop their corpus, but it concentrates on other parts of the process. It consists not of one file but of several, written in Markdown or as a Jupyter notebook, the latter being more hands-on than the other files. The goal of this documentation is to encourage people to reuse the scripts I created and to change and adapt them to their needs, even if they don’t really know how to work with Python and scripts.

The Markdown files are presented as step-by-step processes, referring to some of the scripts, for producing a transcription, correcting a text after OCR or encoding a corpus.

The Jupyter notebook is more detailed: it is an exhaustive walkthrough of the “text tagging” script, presenting almost line by line how to operate it and using a letter from our corpus as an example.
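The notebook itself remains the authoritative guide; purely to illustrate what “text tagging” means here, a toy version of the idea (not the actual script) could wrap the plain-text lines of a letter in minimal TEI elements:

```python
# Toy illustration of "text tagging": wrap plain-text lines in minimal
# TEI markup. This is not the project's script, only a sketch.
from lxml import etree

TEI_NS = "http://www.tei-c.org/ns/1.0"

def tag_letter(lines):
    """Wrap plain-text lines in a minimal TEI <div>/<p> structure."""
    div = etree.Element(f"{{{TEI_NS}}}div", type="letter")
    p = etree.SubElement(div, f"{{{TEI_NS}}}p")
    for number, line in enumerate(lines, start=1):
        lb = etree.SubElement(p, f"{{{TEI_NS}}}lb", n=str(number))
        lb.tail = line  # the text of the line follows the line break
    return div

sample = ["Mon cher ami,", "Je vous remercie de votre lettre."]
print(etree.tostring(tag_letter(sample), pretty_print=True).decode())
```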

C – What to expect from the GitHub repository?

Advantages and disadvantages

Whether it is the content of the corpus or the files used in the project development, everything in the repository can be reused by a visitor if needed. Some documents can be modified and adapted as required, and the open access to the files makes this easier, because it is not necessary to ask permission before downloading a file. Moreover, GitHub makes collaboration possible, even for people not working on the project, through issues and pull requests. We can also share our data in other repositories, as is the case for DAHN: for example, the ground truth can also be found in HTR-United, a GitHub organization created to centralize ground truth for HTR and OCR.

The only negative point in all of this concerns the ground truth of the corpus (in our repository as much as in HTR-United): everything in our project is under a free license, except for the images I used to produce the ground truth. As I explained previously, we cannot give free access to those images. So, if someone wants to use the ground truth for their own training, they will have to contact us to obtain the images, under certain copyright conditions.

 

Future development

This post describes the content of the GitHub repository as I have developed it over the past year. However, just like the files it contains, the repository is in constant evolution: new kinds of files can be added if they seem relevant to the project’s progress. Among the future additions planned for the repository, I can mention the missing transcriptions, absent because the images have not yet been processed or because I do not have them yet. The indexes are still severely lacking information, whether those of the d’Estournelles corpus, which need to be enriched, or those of the Berlin Intellectuals, which need to be corrected. The d’Estournelles letters will also be modified to add the annotation of named entities (which will be paired with the indexes). I am still waiting for the digitization of the d’Estournelles corpus, but once this is done, the repository will probably store the IIIF manifests. Lastly, we are still developing the web application with TEI Publisher mentioned here, and we will probably add some parts of the application to the repository, in a new big folder. Other types of documents, not planned for now, could also be added if we decide that they are relevant to making the development of the project transparent.

Additional resources

Sharedocs documentation: https://documentation.huma-num.fr/sharedocs-stockage/

GitHub: https://github.com/

GitHub documentation: https://docs.github.com/en

Repository of the project: https://github.com/FloChiff/DAHNProject

Repository for ground truth from the project: https://github.com/HTR-United/dahncorpus

1. https://en.wikipedia.org/wiki/GitHub#Scope

Cite this post as:
Floriane Chiffoleau (April 19, 2021). Storing and sharing the project. Digital Intellectuals. https://doi.org/10.58079/nmz0


