There might well be a right time to structure one’s data after all
How do you know when the time has come to structure your data? If you do it too early, you might miss something and regret it afterwards. If you wait too long, you might get lost in time-consuming or irrelevant tasks. But settling on a model means drastically reducing your research options with your corpus. The risk factor is not small. Once a decision is taken, it is almost impossible to go back. Still, being too much of a universalist is not a viable solution either, because there is very little you can do with data that is merely rich. For one thing, you will not manage to get much of it. And it will hardly be interoperable. So you have to develop a data model. You have to do it at the right time, and that supposes reducing your research question to comparatively simple analytical elements in order to structure your digital corpus.
In other words: you have to play dumb, because something that is formulated in a straightforward way is generally considered plain stupid by traditional humanists. The experimental method (in my field, for instance, but I know it is true for other disciplines here in Germany as well) is not considered valid. You are expected to know first what you are looking for, and preferably also where you are looking for it. Your work then consists in coining a few concepts that pretend to give an answer to what was not a question in the first place. I might be exaggerating a bit (although this is how I worked for years, and in retrospect I consider that this really is what I did), but I am trying to give a sense of how a so-called research question works and how different it is from developing a data model.
In my eyes, research question and data model should overlap, and explaining to what extent they do should be an important part of the research process.
But explaining the importance of data modeling for the development of your research question turns out to be quite challenging. I have tried two approaches in my last two papers (one is already online and one will be published in the winter; both are in German) and find myself trying yet another one for the paper I will deliver next week at the Scientia Quantitatis Conference. I am still not sure that my paper will be as fundamental as it is intended to be. And because I am a bit insecure about that, I will overload my presentation with illustrations (it might all sound stupid, but at least my audience won't be bored). On the other hand, when I presented my data to the nodegoat team this morning, I was surprised at how structured the objects I was describing were (our XML files and data architecture), and at how structured my own methodological approach has become. When did that happen? Not so long ago, I was still toying around with a blurry concept of "intellectuals".
I think that this very positive result (which I will probably never be able to sell as an accomplishment to my disciplinary community, but what the heck) comes from the collective wisdom of the research group. None of us, least of all me, would have been able to achieve this alone, but all of us have learned (I hope) how to draw on this experience in future work.
We had different roles in the process. I always had some kind of very generic vision of where this was heading (even if, at times, it felt like moving forward blindfolded in a tunnel). Alex, who ran the entire technical side, was always able to describe the effects of any decision we took down to its last consequence for the edition as a whole (apart from the fact that he was amazingly flexible when it came to implementing our decisions, and always found a way to make things possible). Two experts in editorial theory were always able to bring quality criteria for text establishment back to the table. Two others were always ahead of the rest of us in understanding how the TEI ticks. And every one of us had their own idea of what this could look like in the end. Data modeling began for us when we started to agree on point after point, when all of these requirements could be brought to a common denominator; not all at once, but over the years. I would say that it took about two to two and a half years to arrive at a stable data model. We are still refining it, but there will be no major changes now.
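To give a concrete sense of what such a modeling decision looks like, here is a minimal, entirely hypothetical TEI sketch (not our actual schema; all names, ids, and file names are invented for illustration). Once you decide that every person mentioned in a letter points to a central personography, that decision ripples through the whole edition:

    <!-- In a letter: a person mention pointing to a shared index.
         (Hypothetical example; ids and names are invented.) -->
    <p>Yesterday I spoke with
      <persName ref="persons.xml#p0001">Professor M.</persName>
      about the new journal.</p>

    <!-- In the central index file (persons.xml): -->
    <listPerson>
      <person xml:id="p0001">
        <persName>Professor M.</persName>
        <note>Identification uncertain; see commentary.</note>
      </person>
    </listPerson>

The details matter less than the commitment such a pointer implies: it presupposes a person index, rules for dealing with uncertain identifications, and a workflow for keeping letters and index in sync. That is the kind of point-after-point agreement I mean.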
We were lucky in several regards. First, we actually could agree on point after point, arrive at a general structure, refine it, and in the end have a coherent system that encompasses several levels of modeling (my main interest is precisely to combine the quantitative and the qualitative in the end; I will write about that another time). Second, nobody required us to have a fully functional data model before we started working, unlike what is expected of any project applying for funding for similar work. And finally: well, I had the coolest team ever!