Tag: data (page 1 of 2)

A Snag in the Weft

Embroidered canvas on display at NTS Culloden

Despite the recent, lively debate about the value of the work undertaken by historians, we can agree that most of them spend their careers engaged in research, analysis, and output. Depending upon the era of study and subject matter, dutiful historians will go back to the primary sources whenever possible and critically address the lineage of information as well as its context. Following and challenging that data lineage is something about which I have repeatedly written, and it plays a significant role in the methodology of my everyday work, as I believe it is necessary to produce informed and precise history.

Precise historians will familiarize themselves with as many sources as possible and determine which are most relevant, accurate, and valuable to the arguments they are asserting. At the same time, sources that challenge those assertions must also be consulted; they may lend valuable perspective to the historian’s original assertions or even transform them. Honest scholars will acknowledge those changes and influences along the way by showing their work while being as deliberate and precise as possible. Precision is not just the end goal; it is integral to the process. In that way, scholarly history follows a course that rightfully marks it as a social science.


Why the Need for a Jacobite Database? (Part 3)

Some of the demographic results of organizing the regiment by parish of origin.

In our previous two posts, we introduced a case study model to demonstrate the utility of JDB1745 and discussed a possible methodology that will give us more accurate results than those hitherto published. Now that we have examined the data’s lineage, established as much objectivity as possible, and implemented authority records in our model of Lord Ogilvy’s regiment, we are ready to look at the information and organize it in a way that facilitates the most useful analysis for our needs.1 We know that our assessment will not be comprehensive, as more sources continue to be revealed and further biographical information is entered into the database. Yet we can take a ‘snapshot’ based upon the data we currently have. Here is what the numbers look like:

  • Mackintosh’s Muster Roll: 628 
  • Rosebery’s List: 41
  • Prisoners of the ’45: 276
  • No Quarter Given: 761

To these, a few further sources can be consulted to add yet more names to the overall collection. A document at the National Library of Scotland, for example, contains another twenty-two names from Ogilvy’s regiment, and 362 more with no particular regimental attribution.2 A broadsheet distributed by the Deputy Queen’s Remembrancer, dated 24 September 1747, furnishes a list of 243 gentlemen who had been attainted and judged guilty of high treason, some of whom had likely marched with the Forfarshire men.3 Various other documents from the NLS and from the Secretary of State Papers (Scotland, Domestic, and Entry Books) at the National Archives in Kew contribute thousands more, as do those from the British Library, Perth & Kinross Archives, Aberdeen City & Aberdeenshire Archives, and dozens of other publicly accessible collections.4 With a baseline collation of the major published sources regarding Lord Ogilvy’s regiment, buttressed by a few other useful manuscript sources, we have a solid corpus of data to examine.
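
To make the collation step concrete, here is a minimal sketch of one way several transcribed source lists could be combined into a single working corpus while recording which list each entry came from. The file names, the single ‘name’ column, and the review step are assumptions for illustration only, not JDB1745’s actual structure.

```python
# Illustrative sketch only: collate names from several transcribed source lists
# into one working corpus, keeping track of which list each entry came from.
# File names and the single 'name' column are assumptions, not JDB1745's schema.
import csv
from collections import defaultdict

SOURCES = {
    "Mackintosh's Muster Roll": "mackintosh_muster_roll.csv",
    "Rosebery's List": "rosebery_list.csv",
    "Prisoners of the '45": "prisoners_of_the_45.csv",
    "No Quarter Given": "no_quarter_given.csv",
}

def load_entries(path, source_label):
    """Read one transcribed list; every name is kept verbatim."""
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            yield {"name_as_recorded": row["name"], "source": source_label}

corpus = []
for label, path in SOURCES.items():
    corpus.extend(load_entries(path, label))

print(f"{len(corpus)} entries collated from {len(SOURCES)} sources")

# Group identical verbatim spellings so possible overlaps are flagged for
# human review rather than merged automatically.
by_spelling = defaultdict(set)
for entry in corpus:
    by_spelling[entry["name_as_recorded"].casefold()].add(entry["source"])

for spelling, found_in in sorted(by_spelling.items()):
    if len(found_in) > 1:
        print(f"review: '{spelling}' appears in {sorted(found_in)}")
```

Nothing is deduplicated by the script itself; any decision that two entries describe the same man remains with the historian.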


Why the Need for a Jacobite Database? (Part 2)

An example of place-name authority usage within JDB1745.

In last week’s post, we set out to introduce the value of a historical database by thinking critically about historiographical and biographical data related to the Forfarshire Jacobite regiment led by David Ogilvy in 1745-6. While this may seem like a straightforward prerequisite, a comprehensive survey of both primary and secondary sources that address the constituency of this regiment presents a labyrinthine paper trail, one that requires us to carefully scrutinize the information heretofore recorded. Getting a firm grasp of this ‘lineage’ of data is essential to upholding the accuracy of what is finally entered into our database.

As we suggested last week, simply copying biographical information from published secondary- and tertiary-source name books or muster rolls is not enough to ensure that the data is accurate or even relevant. In short, this practice is ‘bad history’ and opens the analysis to errors, inconsistencies, and others’ subjective interpretations of primary-source material. In an effort to combat this, we need a methodology that maintains the integrity of the original sources as much as possible while still allowing us to convert them into machine-readable (digital) format. Part 2 of this technical case study will demonstrate one possible method of doing this.
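
As a rough illustration of that idea, and not a description of JDB1745 itself, a transcription record might be shaped something like the sketch below, where the wording of the source is stored untouched and everything editorial lives in separate fields. The field names and example values are hypothetical.

```python
# Hypothetical record shape for keeping a source's wording intact while still
# producing machine-readable data. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class TranscribedEntry:
    text_as_written: str        # verbatim transcription, misspellings included
    source_citation: str        # which document the wording comes from
    locator: str = ""           # folio, page, or entry number within the source
    transcriber_note: str = ""  # editorial remarks, kept apart from the text

entry = TranscribedEntry(
    text_as_written="Dav. Ogilvie, yr. of Airly",    # invented example spelling
    source_citation="hypothetical manuscript list",
    locator="f. 12r",
    transcriber_note="abbreviations and spelling retained as written",
)
print(entry)
```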

When we discuss the term ‘clean data’, we are referring to information that is transcribed into digital format with as little subjectivity as possible. This means that misspellings and known errors from primary sources are left intact, that conflicting evidence from disparate documents is retained, and that essentially no liberties are taken by the modern historian or data-entry specialist to interpret, blend, or otherwise ‘smooth out’ information upon entry. Though it might seem unwieldy to use raw data with so many chaotic variables, doing otherwise would fundamentally distort the results.1 As long as we take the time to set up an effective taxonomy for transcribing (now) and analyzing (later) our data, the results will be well worth the extra care.
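
One way to reconcile verbatim transcription with later analysis, echoing the place-name authority illustration above, is to link each recorded spelling to an authority identifier while leaving the spelling itself untouched. The sketch below assumes a simple authority table; the identifiers and variant spellings are invented for illustration, not entries from JDB1745.

```python
# Illustrative sketch: variant spellings stay exactly as written in the sources
# and are linked to a single authority record for analysis. Identifiers and
# spellings are invented examples, not entries from JDB1745.
place_authority = {
    "PL-001": "Forfar",  # preferred modern form
}

attestations = [
    {"place_as_recorded": "Forfar",  "source": "source A (hypothetical)", "authority_id": "PL-001"},
    {"place_as_recorded": "Forfarr", "source": "source B (hypothetical)", "authority_id": "PL-001"},
    {"place_as_recorded": "Farfar",  "source": "source C (hypothetical)", "authority_id": "PL-001"},
]

# Later analysis can group by authority_id (e.g. counting men by parish of
# origin) while every original spelling survives for scrutiny.
for a in attestations:
    preferred = place_authority[a["authority_id"]]
    print(f"{a['place_as_recorded']!r} in {a['source']} -> {preferred}")
```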



