How to produce a model for the transcription
My work on the DAHN project can be divided into two major parts: transcribing and encoding. I already discussed the encoding in two previous posts (here and here), and now it is time to say a little more about the other part of my work.
In order to encode and share Paul d’Estournelles de Constant’s letters on the platform that we will create later on, I need to make them available in a format suitable for encoding. Right now, except for some of the letters that were transcribed and edited in an abridged version for En guerre pour la paix1, the only format I have at hand is digital photographs of the letters. They were taken at the Departmental Archives of the Sarthe, where the collection is held. Although these photographs cannot serve as a facsimile version to be displayed on the platform, they allow me to work on the documents in order to produce a text format that can be the basis of the encoding.
There are several steps to take to get there – some dependent on the results of others – which ultimately provide us with a transcription. I will not describe the whole process in this post, only its early portion; some of these parts are not strictly essential to the process, but in our case they proved quite useful.
Choosing an OCR system
The first and most important part of a transcription project is finding the right software to execute it. There is plenty of OCR (Optical Character Recognition) software on the market today: some proprietary, some free, and each has specific features that make it better suited to one job than another. For our project, we use Kraken, an OCR system derived from OCRopus, an older OCR system.
There are two significant reasons behind this choice. First of all, it is well-developed free software with extensive documentation (on a website and in a GitHub repository). It is fairly easy to install, directly on a computer or in a virtual environment. It handles the preprocessing steps of binarizing and segmenting the images before performing the transcription, using either the pretrained models provided or customized models trained by the user2. The software also offers many options for tailoring the transcription process or the training.
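For instance, retrieving the pretrained models does not even require leaving the terminal. The two commands below are the ones the Kraken documentation gives for browsing and downloading models from its public repository; the DOI shown is the one the documentation listed at the time for its default English recognition model:

```bash
# list the pretrained models available in Kraken's public repository
kraken list

# download a model by its identifier (a Zenodo DOI)
kraken get 10.5281/zenodo.2577813
```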
Secondly, Inria’s ALMAnaCH team previously hosted Kraken’s main developer. On the one hand, having kept a good relationship with the software’s developer makes it easier to raise and resolve issues, whether technical or methodological. On the other hand, the team has gained experience with the software and can ensure it is well adapted to its projects.
Learning to work with Kraken
Once the software was chosen, I was able to begin. After setting up a virtual environment on my computer (as explained on the website) and retrieving several models, I tried a simple command in my terminal to binarize, segment and OCR one image of a letter, using one of the provided pretrained models, in order to obtain a text.
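Concretely, that first attempt follows the all-in-one invocation documented by Kraken; the file names here are mine, and the model file is whichever pretrained model was previously downloaded:

```bash
# binarize, segment and recognize a single letter image in one call,
# writing the recognized text to letter_001.txt
kraken -i letter_001.jpg letter_001.txt binarize segment ocr -m en_best.mlmodel
```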
Before starting, I had formed two hypotheses:
- Hypothesis 1: I test my photos with the models provided with the software to judge the quality of the transcription. One of them gives a satisfactory result and I can use it to transcribe our corpus.
- Hypothesis 2: After testing the models, none gives a conclusive result and I have to train my own model, with training data that I will also need to create, in order to have a model suited to our corpus of images.
After running the initial command, only one model gave a result close to what was expected. While it identified most of the characters, the output was not good enough to be used as is on the rest of the corpus. This model was trained on English text: it recognizes the Latin alphabet but ignores most of the diacritics used in French.
The conclusion of this first test: I had to follow my second hypothesis, collecting the letters’ transcriptions and associating them with the corresponding images.
I had two options, which framed the future training experiments: either fine-tune the best pretrained model (to the point where it recognizes diacritics and, possibly, French words) or train a model from scratch. In both cases, I needed to create a large number of training files.
Identifying our training data
The next step towards the transcription is the creation of training files. Their content depends on what we are trying to do: with Kraken, it is possible to train for segmentation or for transcription, which require different types of data. In our case, we are trying to train our model to recognize new characters and the specificities of the letters and their author. The data must therefore concern the content of the text, which demands a set of two types of documents: images and texts. To produce it, we pair the lines of a document with their corresponding transcriptions, and the model trains on these pairs to learn to match a character to its text-format equivalent.
There are two approaches to this:
- An old-school version, the “legacy” approach, which literally consists of training files made of pairs of “one image = one line of transcription”
- A newer and better developed version that uses the ALTO XML or PAGE XML format: one file is linked to one image and records the geometric coordinates of all of its content and its layout structure, in order to allow the text to be overlaid on the image
The second approach is preferred today because it reduces the number of files and yields better accuracy. A simplified sketch of such a file is shown below.
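To give a concrete idea of the second approach, here is a heavily simplified sketch of such an ALTO file; the element names follow the ALTO schema, while the coordinates and the text are invented for the example:

```xml
<alto xmlns="http://www.loc.gov/standards/alto/ns-v4#">
  <Layout>
    <Page WIDTH="2000" HEIGHT="3000">
      <PrintSpace>
        <TextBlock ID="block_1">
          <!-- one TextLine per line detected on the image, located by its coordinates -->
          <TextLine ID="line_1" HPOS="210" VPOS="340" WIDTH="1550" HEIGHT="60">
            <String CONTENT="Mon cher ami," HPOS="210" VPOS="340" WIDTH="520" HEIGHT="60"/>
          </TextLine>
        </TextBlock>
      </PrintSpace>
    </Page>
  </Layout>
</alto>
```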
Whether we choose the first or the second approach, the training files must contain a great number of lines in order to be usable, because recurrence is necessary to discern a pattern: training really only works with a minimum of about 1,000 lines, which is roughly 40 pages of text (at approximately 25 lines per page).
Creating those training files can be genuinely arduous, because transcribing 1,000 lines of text is a long task. However, it becomes much easier if we are lucky enough to already possess transcriptions of part of our images. Since, as mentioned earlier, some of the letters were transcribed for a printed edition, we are among the lucky ones: we possess both the photographs of the letters (the images) and the transcriptions (the texts), which greatly facilitates our work. At this stage, producing training files mostly consists of copying and pasting the text into the appropriate interface in order to align images and transcriptions, and then exporting the result in the format expected by Kraken (ALTO XML files).
Learning to work with eScriptorium
Now that we have identified what we will use as training data, we need a way to create it. From here, two processes are available to us: one that starts from the command-line interface (CLI) (the “legacy” approach) and one that is fully executed in a graphical user interface (GUI) (the “eScriptorium” approach).
In the “legacy” approach, we use Kraken on the command line to create HTML files that display our image on the left side and, on the right side, a form in which each line of text detected in the image can be transcribed. The document can be saved, so the transcription can be done in several sessions. In the interface, a red rectangle shows which line of the image is currently being filled in, and the transcription appears in a different colour once a line is completed and entered.
Once the transcription form is filled in, Kraken can process the HTML files and “extract” PNG and TXT files. These PNG-TXT pairs are the so-called training data.
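For the record, in the Kraken versions current when this workflow was in use, these two steps correspond to two ketos subcommands (the file names here are mine):

```bash
# generate the HTML transcription environment from a set of page images
ketos transcribe -o transcription.html letter_001.png letter_002.png

# once the forms are filled in, extract the line images and their text
# as PNG-TXT pairs into a directory of training data
ketos extract -o training_data transcription.html
```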
In the past year, a GUI has been developed for Kraken: eScriptorium. It does not intend to be simply a GUI for Kraken; rather, its ”goal is to provide researchers in the humanity field with an integrated set of tools to transcribe, annotate, translate and publish historical documents”3. Managed by Daniel Stoekl, financed by SCRIPTA and developed by Benjamin Kiessling (the developer of Kraken), Robin Tissot and El Hassane Gargem, it enables the user to binarize, segment and transcribe documents. eScriptorium can be used to transcribe a text manually or, thanks to the integration of Kraken’s functionalities, to automate some tasks. At every step, the data can be exported in three formats: ALTO, PAGE XML and text. The interface is divided into three tabs: Description, Images and Edit. The first serves only to name the project and set its parameters and metadata; the other two are the most useful.
The “Images” tab gives an overview of the project: as shown in the screenshot, we can see the status of the transcription; binarize, segment and transcribe the uploaded images; select or deselect one or several images; and export them. It is also where we can follow our progress. In other words, the “Images” tab is where we manage the collection of images and run the tasks.
The “Edit” tab displays the interface for manipulating an image and its transcription. Three views are available: the source image, the result of the segmentation task, and the transcription. The segmentation can be edited (by adding new lines, removing some, cutting or merging them) before we start transcribing accordingly, with one segment corresponding to one line to transcribe, as shown in the screenshot below.
Once this task is completed and we have our full set of images and transcriptions, we can export it in the appropriate format. We then have a folder containing a dataset that can be used to train a new model from scratch or to fine-tune an existing one.
Writing a script to build a model
The next step is writing a script to be executed in the terminal to launch the training of the model on the data provided. At this stage, things become fairly easy: the work consists in writing, executing and adjusting the script until the resulting model is conclusive. All the necessary commands are documented on the Kraken website or – for the latest features – available in the ketos script (Ketos being Kraken’s training module).
I decided to write two scripts for this step:
- one that creates a model only from the training data – i.e., from scratch
- one that modifies the almost conclusive model from my first experiment, retraining it with our dataset – i.e., fine-tuning
My intention is to execute both of these scripts, sketched below, and then test which of the two resulting models gives the best result for the transcription.
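As a sketch, and assuming a Kraken version whose ketos accepts ALTO files as training input (the exact flags should be checked against `ketos train --help` for the installed version), the core of the two scripts looks like this:

```bash
# script 1: train a model from scratch on the exported ALTO files;
# -o sets the prefix of the model files produced during training
ketos train -f alto -o scratch_model training_data/*.xml

# script 2: fine-tune the almost conclusive pretrained model instead;
# --resize add extends the model's character inventory so that new
# characters (e.g. French diacritics) can be learned
ketos train -f alto -i en_best.mlmodel --resize add -o finetuned_model training_data/*.xml
```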
Learning to work with the cluster
As I said earlier, creating an effective set of training data requires, at minimum, a thousand transcribed lines, which ensures that the software has enough information to create a new model. During training, however, this is a lot for a computer to process. In order to train a model effectively and rapidly, or even to be able to run the task again and again until the result finally fits, it is necessary to have enough computing power, which is why I am going to use a service provided by Inria – my institute – to speed up the calculations: the cluster.
A cluster, in computing and data science, is a group of interconnected computers working together as if they were a single system. It is made up of a set of nodes that work on the same task simultaneously, which makes execution much faster. Using the cluster will become very useful when I start working with the training set and running longer trainings (with longer and more numerous epochs).
An epoch corresponds to one iteration of the training: each pass over the training set creates one model. Epoch after epoch, the system gradually learns more about the data and produces a more efficient model. With Kraken, a model’s efficiency is defined by its accuracy, which, for a transcription model, is based on the Character Error Rate (CER).
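To make that metric concrete: the CER is the character-level edit distance between the model’s output and the ground truth, normalized by the length of the ground truth, i.e.

CER = (S + D + I) / N

where S, D and I are the numbers of substituted, deleted and inserted characters and N is the total number of characters in the reference transcription. A page of 1,000 characters with 50 such errors thus has a CER of 5%, i.e. an accuracy of 95%.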
Reaching a good CER requires numerous epochs. For example, in one failed attempt, the training went through 31 epochs; on my computer it took an hour and a half, whereas it would probably have taken only five minutes on the cluster. The cluster is an important tool because, without it, a training run that fails (as in my example) is a huge waste of time for an inconsequential result. So, once I am able to use it to test my training configurations and rapidly see the outcome until I get a satisfying result, the transcription can begin.
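I will not detail the cluster’s own submission system here, but to give an idea of what running a training there involves, here is a hypothetical job script for a Slurm-managed cluster (my institute’s cluster has its own scheduler, partitions and environment names, so all of this is illustrative), submitted with `sbatch train_model.sh`:

```bash
#!/bin/bash
#SBATCH --job-name=ketos-train      # name shown in the job queue
#SBATCH --gres=gpu:1                # request one GPU to speed up training
#SBATCH --time=02:00:00             # wall-clock limit for the job
#SBATCH --output=train_%j.log       # one log file per job id

# activate the virtual environment where Kraken is installed
source ~/kraken-env/bin/activate

# run the from-scratch training on the exported ALTO files
ketos train -f alto -o scratch_model training_data/*.xml
```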
I now have a defined process for building a transcription model that I will be able to use on my war correspondence corpus, and also on future corpora, since it is a generic process. Some parts can turn out to be more laborious than others – the training files can require much more time if the corpus is not already partially transcribed, and the creation of the model will take forever without a cluster, all the more so if the training data are voluminous – but ultimately I will have an adapted model. It will work for the letters but also for other similarly written documents (typewritten, with straight lines), which makes it particularly valuable, since this is a rather common type of document for the 20th century.
- AKHUND Nadine, TISON Stéphane, En guerre pour la paix, Correspondance de Paul d’Estournelles de Constant et Nicholas Murray Butler, Paris, Alma, 2018, 546 p. [↩]
- This customization is a feature of Kraken: the user can produce the training data (also called “ground truth”) for both the segmentation and the transcription. The training is then conducted either from scratch or by fine-tuning a pretrained model. [↩]
- https://gitlab.inria.fr/almanach/lectaurep/escriptorium/-/tree/master [↩]