
Difficulties in creating the transcription model

In my last article, I presented the early stages of producing a transcription model and explained the technical method to follow. I will now develop another part of this production, focusing on the content that needs to be provided to the training in order to obtain a model that is as accurate as possible for our transcription.

Before producing any transcription of a document, it is essential to have a model suited to the corpus, and it is usually necessary to produce it yourself, because the software used (Kraken in our case) doesn't always provide what we need. Indeed, corpora often vary from one another: the documents can be handwritten or typewritten, the font can be regular or unusual (like the German Fraktur typeface), and even when the alphabet is the widespread Latin alphabet, some texts contain unique letters, diacritics or other particular features. Producing our own model, specific to our corpus, is therefore a logical step.

In our case, since we are training the model with our own letters as training data, it will match our corpus but, ultimately, it could also be used for other corpora with a similar structure, as I mentioned in the previous article, because typescript documents with regular fonts are widespread in the 20th century.

In theory, our model is fairly easy to create because typewriters have only one font, so there is a limited number of characters for the model to learn. The texts are also written in regular, constant lines, as can be seen in the picture below, which should facilitate the learning process. All I need, then, is enough training data – about ten letters of 5-15 pages, with approximately 25 lines per page – and the model will learn everything it needs to transcribe my corpus with near-perfect accuracy.

Paragraph from one of the corpus' letters

However, this is only true in theory. In practice, many little details in the corpus complicate the training of the model, and they need to be thoroughly covered in the training data to obtain a truly accurate result.

To create my model, I started by producing training data from about ten standard letters, with nothing particular about them except that they were among the ones already transcribed, which made the task quicker (I copied and pasted the existing text instead of typing all the lines myself). That gave me approximately 2,000 lines to process for the OCR software, and after a training run on the cluster, I had a model with a reported accuracy of 97.28%. I decided to test it on one of the letters to see the quality of the transcription, and this showed me the limits of the training done and of the data provided.

Transcription with the model of 97.28% accuracy

Most of the words from the paragraph have been correctly transcribed, but things became complicated when it came to more specific parts (like the ones I discuss for the encoding in this article). This result does not match the high accuracy announced during the training. It can however be easily explained: during the training, the data are separated into two sets, the training set and the validation set. The software trains the model on all the characters it finds in the training set and then, at the end of every iteration, tests its recognition level on the validation set. It continues the training until it considers that its recognition level on the validation set is good enough. In our case, even though the transcription test does not reflect it, the software considered that the model was 97.28% accurate on the validation set. This is because all the data I submitted for the training are very much alike: the characters don't really vary and there aren't many of the little details visible in the illustration above. Consequently, the data were separated into two very similar sets, and during the training the software could reach a high accuracy because it was never confronted with the little details that would have lowered it. This is why the model was only good in theory.
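To make the explanation more concrete, here is a minimal Python sketch of the kind of random split performed before training; the directory layout (line images paired with .gt.txt transcriptions) and the 90/10 ratio are assumptions for illustration. The point is that shuffling a homogeneous set of lines produces two nearly identical subsets, so the validation score says little about pages that look different.

    import random
    from pathlib import Path

    # Minimal sketch of the random train/validation split done before training,
    # assuming the ground truth is stored as line images paired with .gt.txt
    # transcriptions (a layout Kraken's ketos can work with).
    def split_ground_truth(gt_dir, ratio=0.9, seed=42):
        """Shuffle the transcribed lines and cut them into two sets."""
        lines = sorted(Path(gt_dir).glob("*.gt.txt"))
        random.Random(seed).shuffle(lines)
        cut = int(len(lines) * ratio)
        return lines[:cut], lines[cut:]

    train_set, validation_set = split_ground_truth("ground_truth/")
    print(f"{len(train_set)} training lines, {len(validation_set)} validation lines")
    # If every line comes from the same kind of standard page, the two sets end
    # up nearly identical, so the validation accuracy looks high even though the
    # model has never seen uppercase titles, numbers or handwritten corrections.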

Once I realized that particular elements were not recognized by the model, I decided to create new training data where they would be the main focus, so that my model would learn and recognize them. In order to do so, I had to clearly identify these elements, and I found six points that needed specific attention in the training:

  • Handwritten text
  • Uppercase letters
  • Numbers
  • Line spacing
  • Diacritics
  • Overfitting
Handwritten text

There are two types of documents for an optical character recognition model: printed/typescript and handwritten. Recognizing the latter is called Handwritten Text Recognition (HTR), and it requires a much more complicated model than ours because it needs to process and recognize the handwriting submitted to it (a task made easier by tools designed specifically for it, such as Transkribus).

In our corpus, the documents are typescript, so we shouldn't have to worry about that – except that it is not entirely true. Paul d'Estournelles wrote his letters on a typewriter, but it is likely that he proofread them before sending them and sometimes found errors or missing elements; instead of retyping the page, he corrected it directly with a pen. This is why we sometimes find words that have been struck through and rewritten above the line, or handwritten sentences in a postscript at the end of a letter.

Paragraph with corrections made by d’Estournelles

Those very specific parts of the letters need to be considered when creating our training data, because if they are segmented and transcribed like the rest of the text, it is essentially like saying: "these characters are the same as all the others in the text". That could confuse the model, because an "a", "s" or "r" typed on a typewriter usually does not have the same shape as its cursive handwritten counterpart (as the illustration clearly shows).

It is then necessary to tell the model that the words it reads and learns from are not written the way they usually are. For my training, I decided to mark them with the "£" sign (which I doubled to make it stand out clearly), a convenient choice in my specific case because this sign is never used in the letters, so there is no risk of the model reading it in the text and confusing it with the handwritten parts. This is why I chose this sign, but any other sign that suits the corpus better, like "€", "$" or "π", would work. From that point on, there are two courses of action: put the handwritten words between two "££" signs, or simply write "££" to indicate that a handwritten word is there. The choice depends on whether or not an HTR model is to be created.

A transcription model is created to transcribe a text quickly and easily. If a large part of that text is handwritten, it is logical to create, in addition to the typescript model, an HTR model, to be sure that all the words end up transcribed and that manual correction remains minimal. In that case, the right option is to put the handwritten words between two "££" signs and work with that afterwards to build the two models.

However, this is not the choice I made for my model. After considering it, I realized that the proportion of handwritten text in the letters is quite small and that creating an HTR model for Paul d'Estournelles' handwriting would be a waste of time, because it would be much quicker to transcribe the handwritten parts myself than to create and apply an HTR model to them. Thus, for our model, we only mark that handwritten words are present, which teaches it not to try to transcribe them, as sketched below.
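Purely as an illustration, the sketch below shows how the two options could be handled when preparing the ground-truth lines. The "££" convention comes from the text above; the function names, the placeholder behaviour and the sample sentence are my own assumptions.

    import re

    # A handwritten span marked in the ground truth, between two "££" signs.
    MARKER = re.compile(r"££(.*?)££")

    def keep_placeholder(line):
        """Option chosen here: the typescript model only learns that something
        handwritten is present, so the span is reduced to the bare marker."""
        return MARKER.sub("££", line)

    def extract_handwritten(line):
        """Other option: collect the handwritten spans so they could later feed
        a separate HTR training set."""
        return MARKER.findall(line)

    sample = "texte tapé ££correction manuscrite££ suite du texte"  # invented example
    print(keep_placeholder(sample))     # -> "texte tapé ££ suite du texte"
    print(extract_handwritten(sample))  # -> ["correction manuscrite"]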

Uppercase and numbers recognition

A model trained on a corpus of letters, or of similarly structured documents, mostly learns lowercase letters and words, because that is the majority of what it finds in the data. It will only recognize some uppercase letters, because it encounters them regularly at the beginning of sentences. However, this is problematic for our model, since it needs to recognize all the capital letters as well as fully uppercase words. Indeed, Paul d'Estournelles used many of them, especially when writing the titles of his letters, and those titles are sometimes very long, as we can see in the example below, which means that many letters remain unrecognizable if the model is not properly trained. D'Estournelles also used capital letters to mention people and to cite the many organisations that require capital initials. Thus, instead of the few capitals we would normally find in a text, we have many more, and in every letter of the corpus, so it is necessary to train the model to recognize them.

Moreover, uppercase letters are not the only element that appears more often than usual in the letters. There are also many numbers, and even though they are fewer than the capitals, we find them in the letter numbering, in the dates, as in the picture below, and in the page numbering, so new numbers appear with every letter and it is important that the model learns them too.

Opener of a letter with multiple uppercase and numbers

So, to make sure that our model knows how to recognize those special elements, we need to find a way to add them to our training data. This can be done in two ways: either we have enough such data in the corpus and we use it, or we create artificial data.

In the event that we can't find enough data in our corpus to train the model properly, it is possible to create some: we find a font that matches our texts, write on a page whatever we need for the training, and modify the page so that it looks as if it had always belonged to our corpus. For our corpus, that would mean finding a typescript font, writing in a purplish colour and tarnishing the paper a little. That way, during the training, the software wouldn't be able to tell the corpus data from the artificial data. A possible way of generating such pages is sketched below.
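As a rough sketch of that idea, the following Python code (using the Pillow and NumPy libraries) renders a line of text in purplish ink on a slightly yellowed, noisy background. The font file name, colours and noise level are assumptions to be tuned against real pages from the corpus.

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    def fake_typescript_line(text, font_path="typewriter.ttf", size=(1200, 60)):
        """Render one artificial training line imitating the corpus pages."""
        # Slightly yellowed "paper" background instead of pure white.
        img = Image.new("RGB", size, color=(235, 225, 200))
        draw = ImageDraw.Draw(img)
        font = ImageFont.truetype(font_path, 36)  # hypothetical typewriter font file
        # Purplish ink, close to the ribbon colour of the original letters.
        draw.text((20, 10), text, font=font, fill=(90, 50, 110))
        # Mild noise so the page doesn't look artificially clean.
        pixels = np.asarray(img, dtype=np.int16)
        noise = np.random.randint(-12, 13, pixels.shape)
        pixels = np.clip(pixels + noise, 0, 255).astype(np.uint8)
        return Image.fromarray(pixels)

    # Example: a line full of capitals and numbers, to be paired with a .gt.txt file.
    fake_typescript_line("LETTRE No 1024 - LE 17 JUIN 1920").save("artificial_line.png")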

Nevertheless, this is a last-resort solution for those who don't have enough corpus data to train their model properly, which is not our case. With the 700 letters or so that we have, all we need to do is take the first pages of several letters, so that we have plenty of dates and titles full of numbers and uppercase letters, and then search the corpus for pages where uppercase letters and numbers are heavily used. Luckily, such data exist in our pages, enough to add a significant number of lines to the training and help the model in its learning process.
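A simple way to check whether enough of these characters have made their way into the training material is to count them before launching a new training run. Below is a minimal sketch, assuming the transcriptions are stored as .gt.txt files; the rarity threshold is arbitrary, and the same audit also helps for the diacritics discussed further down.

    from collections import Counter
    from pathlib import Path

    def character_coverage(gt_dir="ground_truth/"):
        """Count uppercase letters, digits and accented characters in the ground truth."""
        counts = Counter()
        for gt_file in Path(gt_dir).glob("*.gt.txt"):
            counts.update(gt_file.read_text(encoding="utf-8"))
        uppercase = {c: n for c, n in counts.items() if c.isupper()}
        digits = {c: n for c, n in counts.items() if c.isdigit()}
        accented = {c: n for c, n in counts.items() if c.isalpha() and not c.isascii()}
        return uppercase, digits, accented

    for label, table in zip(("uppercase", "digits", "accented"), character_coverage()):
        rare = sorted(c for c, n in table.items() if n < 20)  # arbitrary threshold
        print(f"{label}: {sum(table.values())} occurrences, rare characters: {rare}")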

Difference in line spacing

As I said previously, one of the reasons why a typescript corpus model is, in theory, easy to train is that the text is made of constant and regular lines; but just as before, this is not always true of Paul d'Estournelles' letters. While it is not as frequent as the uppercase letters and numbers, there are cases where d'Estournelles, instead of writing with the structure he normally uses, made his paragraphs more compact, usually when writing a postscript, as in the illustration below.

Difference in line spacing between the top and bottom texts

In those situations, nothing is really different from what we normally have: there are no special characters and the font is the same as before. The only difference is that the line spacing is small or even non-existent, and this can be a problem because the model is used to seeing a fairly regular segmentation and learns to read the text that way; confronted with irregular line spacing, it can have trouble reading the words. Just as before, the best way to train the model to read those parts better is to find similar passages in the rest of the corpus, or to create such data; if few examples are available, it is also possible to let the model transcribe as much as it can and then correct the transcription wherever it could not read the words correctly.

Diacritics

While selecting our training data for the model, there is another important element to consider: the presence of diacritics in the text. This had already appeared as a constraint when we tried to use an English model for our transcription, because there is a considerable difference between the two alphabets: English has no diacritics while French has many. It is the main reason why we need to train a new model, so it is important to account for them in our training.

Some diacritics, like the grave and acute accents, don't require much thought: they are so frequent in the everyday words d'Estournelles uses that they are already well represented in the training data, whether lowercase or uppercase. For the circumflex accent, it may be necessary to choose passages where it appears often, so that the model learns it, especially in its uppercase form.

This work can be combined with the search for new training data for the capital letters: it means finding enough capitals to train the model, but also finding some that carry diacritics, so that the alphabet the model registers is as comprehensive as possible.

Overfitting

Finally, there is a phenomenon in machine learning called "overfitting". It occurs during training, when the model starts to memorize the training data instead of learning from it. It is usually something to avoid, because a model that memorizes its training data generalizes poorly to new documents, and there are techniques to prevent it, such as early stopping, where training stops once the model no longer improves. However, I decided to use overfitting deliberately for some parts of the corpus, because I realized it could help our model. Indeed, when we read letters from the corpus, we notice that some elements appear in every one of them, always written in the same way, with phrasing that never changes. This is mostly because the letters are always addressed to the same person, so we find these particular sentences in every letter, like the example in the picture below.

Recurrent line from the corpus

There are only three or four recurrent sentences across the letters, but I decided that I wanted my model to know them almost by heart, so that in every transcription, even if it makes mistakes on some words or dates, it will always transcribe those parts correctly. To do so, I made sure that in all the extra training data I created for the capitals and/or the numbers, I also segmented and transcribed the recurrent lines whenever they were present on the page, so as to add them to the training and produce the overfitting.
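In spirit, this amounts to oversampling a handful of known sentences in the training set. The sketch below is one hypothetical way of doing it automatically, by duplicating the matching ground-truth lines and their images; the file-naming convention and the wording of the recurrent formulas are assumptions, and in practice I simply included those lines whenever they appeared on the pages I segmented.

    import shutil
    from pathlib import Path

    # Recurrent formulas found in almost every letter; the wording here is only
    # indicative, the real sentences come from the corpus itself.
    RECURRENT = ("Mon cher ami", "Veuillez agréer")

    def oversample_recurrent(gt_dir="ground_truth/", copies=5):
        """Duplicate ground-truth lines containing a recurrent formula so that the
        model sees them far more often than ordinary lines and memorizes them."""
        for gt_file in Path(gt_dir).glob("*.gt.txt"):
            text = gt_file.read_text(encoding="utf-8")
            if not any(formula in text for formula in RECURRENT):
                continue
            base = gt_file.name[:-len(".gt.txt")]        # e.g. "letter12_line04"
            img_file = gt_file.with_name(base + ".png")  # paired line image
            for i in range(copies):
                shutil.copy(gt_file, gt_file.with_name(f"{base}_dup{i}.gt.txt"))
                shutil.copy(img_file, img_file.with_name(f"{base}_dup{i}.png"))

    oversample_recurrent()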

This technique may however cause problems later if the model is used on other similar corpora, because the overfitting is tailored to Paul d'Estournelles' letters and could be misleading for other texts. This is something that will have to be dealt with during the fine-tuning of the model.

This list of elements that can hinder the creation of an accurate model if not properly considered is not exhaustive: I have only mentioned the problems that I myself encountered while developing my transcription model. Since every corpus has unique features, other special elements may require extra consideration and training to perfect the transcription model when working with another corpus.

For now, with my own data, I will have to keep training my model, while making sure that all the specific elements are properly taken into account in both the training set and the validation set, until I have a model with high accuracy and a transcription that doesn't require many manual corrections.


