A note about Edward Gibson, an important witness involved in prosecuting Jacobite prisoners
To reinforce our recent discussion of critical thinking about the historical data used within a project like JDB1745, this week’s post illustrates that approach in action. While looking through some of the published trial records related to the government’s prosecution of the Manchester regiment, team member Bill Runacre found a data conflict that took a bit of detective work to iron out. In the trial transcript of Captain James Bradshaw, published in 1816 in Vol. XVIII of Howell’s (or Cobbett’s) State Trials, amongst the witnesses who took the stand against the Manchester officer was one Henry Gibson, allegedly a soldier in Elcho’s Jacobite cavalry troop. The transcript includes some character notes about Gibson:
Henry Gibson was also produced and sworn, who said, That he himself was unfortunately seduced into the rebel army, and entered into lord Elcho’s troop of horse-guards; that the prisoner, Mr Bradshaw, marched with them as a private man in the said corps; that the troop was drawn up at the battle of Culloden, and that he there saw the prisoner on horseback in the said troop, with pistols, and a broad sword by his side, and a white cockade, and that he continued with the said troop till he was taken prisoner by his royal highness the duke of Cumberland’s army.
Much of Gibson’s testimony against Bradshaw sounds quite similar to that of dozens of other witnesses brought in to inculpate suspected Jacobite prisoners in the years following the failure of the final rising. The details the government found most helpful often included firsthand descriptions of the defendant’s presence within the Jacobite army and his specific duties in that station, persons of repute with whom he was seen conversing, and identification of the clothing and arms worn during his tenure in Jacobite service. The depositions of Gibson and at least eight other witnesses were enough to condemn James Bradshaw; he was found guilty and subsequently executed in London on 28 November 1746. As it turns out, however, Henry Gibson did not actually exist.
Some of the demographic results of organizing the regiment by parish of origin.
In our previous two posts, we introduced a case study model to demonstrate the utility of JDB1745 and discussed a possible methodology that should give us more accurate results than what has hitherto been published. Now that we have examined the data’s lineage, established as much objectivity as possible, and implemented authority records in our model of Lord Ogilvy’s regiment, we are ready to organize the information in a way that facilitates the most useful analysis for our needs. We know that our assessment will not be comprehensive, since more sources will come to light and further biographical information will be entered into the database over time. Yet we can take a ‘snapshot’ based upon the data that we do currently have. Here is what the numbers of recorded names look like in each of the major published sources:
- Mackintosh’s Muster Roll: 628
- Rosebery’s List: 41
- Prisoners of the ’45: 276
- No Quarter Given: 761
To these, a few further sources can be consulted to add yet more names to the overall collection. A document at the National Library of Scotland, for example, contains another twenty-two from Ogilvy’s regiment, and 362 more with no particular regimental attribution. A broadsheet distributed by the Deputy Queen’s Remembrancer from 24 September 1747 furnishes a list of 243 gentlemen who had been attainted and judged guilty of high treason, some of whom had likely marched with the Forfarshire men. Various other documents from NLS and in the Secretary of State Papers (Scotland, Domestic, and Entry Books) at the National Archives in Kew contribute thousands more, as do those from the British Library, Perth & Kinross Archives, Aberdeen City & Aberdeenshire Archives, and dozens of other publicly accessible collections. With a baseline collation of the major published sources regarding Lord Ogilvy’s regiment, buttressed by a few other useful manuscript sources, we have a solid corpus of data to examine.
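As a rough illustration of what such a collation might look like in practice, here is a minimal sketch. The source titles and counts are those given above; the individual records, spellings, and matching logic are invented for illustration and do not come from JDB1745 itself:

```python
from collections import defaultdict

# Counts of names attributed to Lord Ogilvy's regiment in the published
# sources discussed above (figures from the post itself).
published_sources = {
    "Mackintosh's Muster Roll": 628,
    "Rosebery's List": 41,
    "Prisoners of the '45": 276,
    "No Quarter Given": 761,
}

# Hypothetical per-source entries; in the real database each would be a
# full transcription with a citation back to the originating document.
records = [
    ("Mackintosh's Muster Roll", "James Ogilvie"),
    ("Rosebery's List", "James Ogilvy"),   # variant spelling of the surname
    ("Prisoners of the '45", "Jas. Ogilvie"),  # abbreviated forename
]

# Group verbatim spellings under a naive normalised key. This only
# *flags* possible matches for a human to review -- it never merges or
# 'corrects' the underlying records.
candidates = defaultdict(list)
for source, name in records:
    key = name.lower().replace(".", "").replace("ogilvy", "ogilvie")
    candidates[key].append((source, name))

for key, hits in candidates.items():
    if len(hits) > 1:
        print(f"possible duplicate across sources: {hits}")

# Note the limits of naive matching: the Ogilvy/Ogilvie variants are
# caught, but the abbreviated "Jas." entry is not.
print("total names listed:", sum(published_sources.values()))  # 1706, with overlap
```

The point of the sketch is that simply summing the four published lists (1,706 names) would double-count men who appear in more than one source, which is why cross-source review has to precede any headcount.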
An example of place-name authority usage within JDB1745.
In last week’s post, we set out to introduce the value of a historical database by thinking critically about historiographical and biographical data related to the Forfarshire Jacobite regiment led by David Ogilvy in 1745-6. While this may seem like a straightforward prerequisite, a comprehensive survey of both primary and secondary sources that address the constituency of this regiment presents a labyrinthine paper trail that requires us to carefully scrutinize the information heretofore recorded. Getting a firm grasp of this ‘lineage’ of data is essential to upholding the accuracy of what is finally entered into our database.
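The kind of place-name authority usage pictured above can be sketched in miniature. Assuming a simple dictionary-based model (the headings, variant spellings, and identifiers here are invented for illustration, not drawn from JDB1745 itself):

```python
# A minimal sketch of place-name authority records: each canonical
# heading collects the variant spellings found verbatim in the sources.
place_authority = {
    "Forfar, Angus": ["Forfar", "Forfair", "Forfarr"],
    "Glamis, Angus": ["Glamis", "Glammis", "Glammiss"],
}

# Reverse index: any spelling encountered during transcription resolves
# to one authority heading, without altering the original transcription.
lookup = {variant: canonical
          for canonical, variants in place_authority.items()
          for variant in variants}

print(lookup["Glammiss"])  # -> Glamis, Angus
```

The design point is that the authority layer sits alongside the transcriptions rather than replacing them: a source that reads ‘Glammiss’ stays ‘Glammiss’ in the record, while analysis and searching happen over the canonical headings.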
As we suggested last week, simply copying biographical information from published secondary- and tertiary-source name books or muster rolls is not enough to ensure that the data is accurate or even relevant. In short, this practice is ‘bad history’ and opens up the analysis to errors, inconsistencies, and others’ subjective interpretations of primary-source material. To combat this, we need a methodology that maintains the integrity of the original sources as much as possible while still allowing us to convert them into machine-readable (digital) format. Part 2 of this technical case study will demonstrate one possible method of doing this.
When we use the term ‘clean data’, we mean information that is transcribed into digital format with as little subjectivity as possible. This means misspellings and known errors from primary sources are left intact, conflicting evidence from disparate documents is retained, and essentially no liberties are taken by the modern historian or data entry specialist to interpret, blend, or otherwise ‘smooth out’ information upon entry. Though it might seem unwieldy to work with raw data containing so many chaotic variables, doing otherwise would fundamentally distort the results. As long as we take the time to set up an effective taxonomy for transcribing (now) and analyzing (later) our data, the results will be well worth the extra care.
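One way such a transcription taxonomy might be structured is sketched below. The record fields, names, and identifiers are hypothetical, but the principle is the one described above: the verbatim text is immutable, and any linking of records to a single person is a separate, revisable judgement:

```python
from dataclasses import dataclass
from typing import Optional

# A sketch of a 'clean data' transcription record: the verbatim text is
# kept exactly as it appears in the source, errors and all.
@dataclass(frozen=True)  # frozen: the transcription layer is never edited
class Transcription:
    verbatim_name: str              # copied letter-for-letter from the document
    source: str                     # citation to the originating document
    person_authority: Optional[str] # assigned later; None until reviewed

# Conflicting evidence from two documents is retained side by side,
# never blended into a single 'corrected' entry. Names and sources
# here are invented for illustration.
entries = [
    Transcription("Jon Andersone", "Muster roll, 1745", "P0001"),
    Transcription("John Anderson", "Prisoner list, 1746", "P0001"),
]

# Analysis happens over the authority layer; the raw layer is untouched.
spellings = sorted(e.verbatim_name for e in entries
                   if e.person_authority == "P0001")
print(spellings)  # both spellings survive, linked to one person record
```

Freezing the record type is one way of enforcing the ‘no smoothing’ rule at the data-model level: a later judgement that two spellings refer to the same man changes only the authority assignment, never the transcriptions themselves.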