Friday, April 28, 2023

About History: Guns 'n' Bullets

The first guns  

The first firearms appeared on the battlefields of Europe while knights and men-at-arms were still wearing plate armour. Contrary to what some might think, the two continued to be used alongside each other for centuries. Scroll forward to the Napoleonic Wars of the 19th century and one will discover French cuirassiers still armoured in back and breast plates that were proofed (tested for protection) against musket balls. So, the idea that the introduction of firearms made the wearing of body armour redundant is a fallacy. While it is true that, for the most part, soldiers in the large armies of the 17th to the mid-20th centuries did not wear body armour, that too is now a thing of the past: the mass adoption of ballistic armour is commonplace in the armed forces of wealthier countries.

The first gunpowder weapon appeared in China sometime during the 12th century AD but did not see widespread use in the region until the 13th century. These early weapons were essentially strong metal cylinders, plugged at one end to form a breech, that fired projectiles using the explosive pressure of gunpowder. A measured charge of powder, either loose or in a bag or similar container, is loaded into the tube from the open end, followed by the projectile, typically a ball. Wadding may be used to tamp the gunpowder into the ‘powder chamber’ at the breech end, against which the ball is rammed down the barrel. To ignite the main charge, a small ‘vent hole’ (or ‘touch hole’) is drilled at the breech end of the tube. Either fine, loose powder is poured into and onto the vent hole, or a piece of prepared fuse is inserted into it. Igniting this gunpowder (or fuse) flashes burning powder through the vent hole into the breech, setting off the main charge. The pressure generated by the resulting explosion propels the projectile along the barrel at high speed toward the intended target.

Possibly as early as the 13th century, but definitely from the 14th century onward, breech-loading cannon appeared. These were still relatively simple designs, but they allowed gunners to pre-prepare a charge in a removable wrought iron ‘pot’ known as a breechblock. Once loaded, the breechblock would be married to the rear end of the gun tube and secured firmly in place with wooden wedges (see below).

Handheld guns

During the Middle Ages, large and small cannons were developed for siege and field battles. The cannon replaced prior siege weapons such as the trebuchet. Alongside the larger cannon were the first ‘handgonnes’ or ‘gonnes’, which became fairly common around AD 1400. These guns were essentially handheld cannons comprising a smaller diameter tube that the user loaded with gunpowder and a ball and lit from the outside. For the hand cannon to develop into a more useful weapon, two shortcomings had to be addressed. Firstly, the early hand cannons (below left) were simply held in one hand, perhaps with the wooden stick couched in the armpit (below centre and right) or resting on the shooter’s shoulder. Secondly, the gunner had to ignite the powder charge with a handheld burning match applied manually to the vent hole. Neither arrangement guaranteed great accuracy, so firearm design needed to evolve a more ergonomic shape and a more reliable ignition system.

The Complete Gun

The year 1817 saw the introduction of the expression ‘lock, stock and barrel’ to mean ‘whole’ or ‘complete’. It derives from the principal parts of a firearm, namely:

•  The lock or firing mechanism (complete with the trigger).

•  The stock, or the wooden parts of the gun (sometimes now referred to as the ‘furniture’), that give the firearm its ergonomic shape, making it easy to hold.

•  The barrel, whether smoothbore or rifled (more of which later).

Stocks often require intricate carving to accept the barrel and the lock. Gunsmiths had to ensure the fit and finish were accurate so that the components functioned correctly and reliably. A complete gun might also include several decorative brass fittings such as a butt plate or the barrel bands securing the stock, as well as a ramrod and a ramrod holder underneath the barrel.

The Matchlock

As mentioned, the lock is the ignition mechanism for a firearm, whether it uses loose gunpowder or a pre-assembled complete cartridge. The match, for example, was simply a piece of slow-burning rope lit ahead of time that, when touched to the gunpowder filling the vent hole, would ignite the main charge. The first ‘handgonnes’ were fired in this way. In the image above centre, the gunner furthest left is using a linstock to hold the match securely and fire his gun. The linstock has the added safety benefit of placing some distance between the exploding gunpowder and the gunner’s hand.

The next significant advance was the ‘arquebus’, a long gun that began appearing in Europe and the Ottoman Empire during the 15th century [1]. The addition of a shoulder stock, priming pan, and matchlock mechanism turned the arquebus into a handheld firearm and the first to be equipped with a recognisable trigger. Now an infantryman armed with an arquebus (the ‘arquebusier’) could attach a slow-burning match to a lever (the ‘serpentine’) which, with the press of his finger, could be rotated downward to touch the match onto the priming pan. The introduction of the early trigger made the matchlock somewhat safer to use and, with both hands now on the weapon, gave the shooter a steadier platform, improving accuracy. Yet, the matchlock still had several problems:

•  The match still had to be lit ahead of time.

•  Even though slow-burning, the match could burn out if there was a long interval between lighting it and firing the gun.

•  A glowing match can be seen at night.

•  Wet weather could extinguish the match or dampen the gunpowder to such an extent that it would not ignite.

Despite these drawbacks, matchlocks remained in common use for 200 years because they were a better option than lighting gunpowder by hand and they were cheap to build.

The Wheellock

At the time of the English Civil Wars (1642 - 1651) a new form of ignition appeared on the battlefield. Developed in Europe around 1500, wheellock firearms were used alongside matchlocks and later the snaplock (1540s), the snaphance (1560s) and the flintlock (c. 1610s).

The wheellock works by spinning a spring-loaded steel wheel against a piece of iron pyrite to produce intense sparks. As described earlier, the sparks ignite gunpowder in a priming pan which flashes through a vent hole to ignite the main charge in the firearm's barrel. The iron pyrite is clamped in vice jaws on a spring-loaded arm known as a ‘dog’. This rests on the priming pan cover. When the trigger is pulled, the pan cover is opened and the wheel is rotated, with the pyrite pressed into contact creating sparks.

The advantage of a wheellock is that it can be pre-prepared and kept ready for instant firing. Unlike a matchlock, which demands the shooter’s constant attention and two-handed operation because a burning cord of slow match must be kept ready, the wheellock can be used one-handed. Conversely, the complexity of the wheellock mechanism made such weapons relatively costly. Wheellock firearms were used alongside matchlocks until both were replaced by the simpler and less costly flintlock.

The Flintlock

Flintlock is a general term for any firearm that uses a flint-striking ignition mechanism. The first such weapons appeared in Western Europe in the early 16th century. Gradually they replaced earlier firearm-ignition technologies, such as the matchlock, the wheellock, and earlier flintlock mechanisms such as the snaplock and snaphance. By the late 17th century, the flintlock had proven easier to manufacture, relatively inexpensive, fairly weatherproof and, most importantly, provided an instant and reliable way of igniting gunpowder in a gun’s chamber. Once again, a spark was needed to ignite the gunpowder in the priming pan to set off the chain reaction that fired the projectile. To create the spark, the flintlock adopted the tried and tested ‘flint and steel’ method that had been used to light fires for centuries. In essence, the idea is straightforward. Flint is an amazingly hard form of rock which, if struck against iron or steel, flakes off tiny particles of the metal. The force of the blow and the friction it creates actually ignites the iron, which burns rapidly to form iron oxide (Fe3O4). These hot sparks of burning iron are all that is needed to ignite gunpowder.
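
For readers who like the chemistry behind those sparks spelled out, the tiny iron particles scraped from the steel are essentially burning in air. A simplified, idealised equation for this strongly exothermic reaction (a sketch, not a full account of spark chemistry) would be:

$$3\,\mathrm{Fe} + 2\,\mathrm{O_2} \longrightarrow \mathrm{Fe_3O_4}$$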

For a flintlock to work, it requires:

•  A ‘cock’ (hammer) with a turnscrew that tightly clamps a sharp piece of flint, itself held in a leather patch.

•  A mainspring to power the hammer.

•  A steel priming pan lid, or frizzen, that the flint strikes to create sparks.

•  A priming pan where a small quantity of finely ground gunpowder waits to be ignited by the hot sparks.

To load, the cock is pulled backward to the ‘half-cock’ position, which engages a sear in a safety notch that prevents accidental firing [2]. As before, the shooter loads the gun, usually from the muzzle end, with black powder from a powder flask. This is followed by a round lead ball, usually of a slightly smaller diameter than the barrel. Balls were often wrapped in a piece of paper or a cloth patch to ensure a tighter fit and prevent the ball rattling along the barrel when fired. Were the ball to do so, accuracy would be degraded significantly. The ball is rammed down the barrel and tamped on the powder charge with a ramrod that is typically stored beneath the barrel. Wadding between the charge and the ball was often used in earlier guns. The pan is then primed with a small amount of very finely ground gunpowder and its hinged lid, or frizzen, is closed. The gun is now ‘primed and loaded’.

To fire, the cock is rotated backward once more from ‘half-cock’ to ‘full cock’, releasing the safety sear. The shooter raises the firearm to rest the butt against his shoulder and takes aim. Pulling the trigger releases the sear and sear spring engaged in the ‘tumbler’. This causes the tumbler to release the power of the mainspring, which is in turn transmitted to the cock [3]. Once released, the cock holding the flint snaps forward, striking the spring-loaded frizzen [4] covering the priming pan, opening it and exposing the priming powder. Contact between the flint and frizzen produces a shower of sparks that are directed into the priming pan. The sparks ignite the gunpowder, which flashes through a small vent hole (or touch hole) in the barrel leading to the combustion chamber, where it ignites the main powder charge, and the gun fires.

The Percussion Cap

Flintlock firearms continued in common use for over two centuries until eventually replaced by the percussion cap in the early-to-mid 19th century and later by cartridge-based systems. The percussion cap was made possible by the discovery of a chemical compound called mercuric fulminate or fulminate of mercury (Hg(CNO)2), made by treating mercury with nitric acid and alcohol. Mercuric fulminate is extremely volatile and shock sensitive, such that a sharp blow can cause it to detonate. By putting a small amount in a tiny cup (about the size of a pencil eraser; see below right) known as a cap, and affixing this to a nipple, which has a tube leading into the barrel, the cap’s detonation can ignite gunpowder in the barrel.
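
For those curious why such a small pinch of the compound is enough, the shock-induced decomposition of mercuric fulminate is usually summarised by the simplified equation below; the near-instantaneous release of hot gas provides the flash that reaches the main charge.

$$\mathrm{Hg(CNO)_2} \longrightarrow \mathrm{Hg} + 2\,\mathrm{CO}\uparrow + \mathrm{N_2}\uparrow$$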

The percussion lock is identical to the flintlock in terms of the mainspring, hammer, tumbler, sear and sear spring, making it easy to convert flintlocks to the new mechanism. As with the flintlock, the hammer can be uncocked, half-cocked or fully cocked, but what the percussion lock does not have is the flint and frizzen. Instead, the hammer is shaped to strike the cap on the nipple. This made the percussion lock easier to load, more weather resistant and more reliable, meaning that by the time of the American Civil War both Union and Confederate armies used percussion-cap guns. Even so, the percussion lock did not last very long, perhaps 50 years. Firearms technology and manufacturing processes were developing rapidly, and it quickly became possible to integrate the cap, powder and projectile into a single metal cartridge at low cost.

Cartridges

The first cartridges appeared in the second half of the 16th century. These were charges of gunpowder wrapped in paper; the projectile, a lead ball, was loaded separately. During the next century, methods of combining the ball with the powder were devised. To muzzle-load a musket, the shooter bit off the end of the paper cartridge, poured a small amount of the powder into the priming pan, then poured the rest down the barrel. The ball, inserted into the barrel followed by the paper cartridge as wadding, was rammed down onto the powder at the breech end. In the 19th century, breech-loading rifles and various multi-shot weapons were introduced. These had to be loaded with an entire cartridge as one unit, yet still required an external spark to ignite the propellant.

In 1846 a Paris gunsmith, Benjamin Houllier, patented an improved design for the pinfire cartridge, capable of being fired with a strike from the gun’s hammer. In one type, a pin was driven into the cartridge by the hammer action; in the other, a primer charge of mercuric fulminate was exploded in the cartridge rim. Later improvements changed the point of impact from the rim to the centre of the cartridge, where a percussion cap was inserted. The cartridge with a percussion cap, or cup, centred on the base of the cartridge (known as ‘centrefire’) dominates larger calibres, but rimfire cartridges remain popular in small-bore, low-powered ammunition, e.g., .22 calibre. The invention and introduction of smokeless nitrocellulose powder in the late 19th century saw it replace black powder as the favoured propellant. With this change came the birth of the modern cartridge, which typically integrates the primer, propellant and projectile (bullet) into a single metal case (usually brass, but sometimes steel).

Gun Barrels

The first cannon barrels were made by forming several longitudinal staves into a tube by beating them around a mandrel (a long rod of the desired diameter) and forge welding them together. The tube was then reinforced with a number of rings or sleeves - in effect, hoops similar to those used in coopering wooden barrels, hence the shared name. The hoops were forged with an inside diameter about the same as that of the outside of the tube. Each hoop was raised to red or white heat, then slid into place over the cooled tube, where it was held firmly in place by thermal contraction as it cooled. The sleeves or rings were butted against one another and the gaps between them sealed by a second layer of hoops.

Wrought-iron breechloaders  One particular problem was forging a strong, gastight breech. One solution was to weld a tapered breech plug between the barrel staves. Partly because of the difficulties of making a long, continuous barrel, and partly because of the relative ease of loading a powder charge into a short breechblock, gunsmiths soon learned to make cannon in which the barrel and powder chamber were separate. As the charge and projectile were loaded into the rear of the barrel, these were called ‘breechloaders’. As shown above, the breechblock was mated to the barrel by means of a recessed lip at the chamber mouth. Before firing, the breechblock was dropped into the stock and, by hammering a wedge in behind it, forced forward against the barrel’s chamber. After the weapon was fired, the wedge was knocked out and the breechblock removed for reloading. This scheme had significant advantages, particularly in the smaller classes of naval swivel guns and fortress wall-pieces, where the use of multiple breechblocks permitted a higher rate of fire.

Cast bronze muzzle-loaders  The advantages of cast bronze for constructing large and irregularly shaped single-piece objects were well understood from sculpture and bell founding. As a process for making gun barrels, however, a number of problems had to be overcome. First, bronze is an alloy of copper and tin, and foundries had to ensure the metal could withstand the shock and huge pressures involved in firing. Early techniques produced bronze alloys that were prone to internal cavities and ‘sponginess’, which was not much of a problem for casting statues but in cannon could result in catastrophic failure. By the 1420s and ’30s, however, foundry practices had developed sufficiently to overcome the inherent deficiencies of the metal.

Handheld firearms  By the time flintlocks appeared, blacksmiths would beat a flat piece of iron into a cylinder around a mandrel. Heating the iron to a high enough temperature in a forge meant the blacksmith could forge weld the seam along the length of the barrel to form a strong tube, a process that could take days. Drilling out the tube with successively larger bits and then polishing with a reamer created a smoothbore barrel, which could range in length from 15 to 30 cm (6 to 12 inches) for pistols and 102 to 152 cm (40 to 60 inches) for long guns.

Earlier pistols and muskets (arquebuses, matchlocks and flintlocks) were ‘smoothbore’ which, as the name implies, means the bore was smooth along the entire length of the barrel. Most, but not all, shotguns are still made this way, as was the British Army’s famous ‘Brown Bess’ smoothbore flintlock musket [5]. The problem with these firearms was their lack of accuracy at range. In 1811, for example, a test of accuracy conducted in London produced the following results: at 100 yards (91 m) the musket hit the target 53% of the time. At 200 yards (180 m) the number of hits had dropped to 30%, and at 300 yards (270 m) only 23% of shots hit. Even so, the accuracy of the Brown Bess was in line with most other smoothbore muskets of the 18th and 19th centuries.

Rifling a barrel is a way of increasing the accuracy of the projectile, whether spherical or cone shaped. Starting with a smoothbore barrel, helical grooves are machined into and along the internal (bore) surface. As the bullet speeds down the barrel it engages with the rifling’s lands and grooves, which impart a spin to the projectile around its longitudinal axis. This has the effect of improving the projectile’s aerodynamic stability and accuracy over smoothbore designs. Straight grooving had been applied to small arms from at least 1480, but these grooves were originally intended as ‘soot grooves’ to collect gunpowder residue. True helical rifling dates from the 16th century, but gunsmiths had to engrave the grooves by hand, making it a laborious and expensive manufacturing process. Consequently, rifling did not become commonplace until the mid-19th century. Even so, rifles were not popular with military users who still muzzle-loaded their firearms, because rifled barrels were difficult to clean and loading projectiles presented numerous challenges. If the bullet was of sufficient diameter to engage the rifling, then ramming it down the bore became much harder; yet if the diameter were reduced to make loading easier, the bullet would not fully engage the rifling and accuracy suffered.

Endnotes:

1. The exact dating of the matchlock's appearance is disputed. It could have appeared in the Ottoman Empire as early as 1465 and in Europe a little before 1475.

2. Were the safety sear to fail then the gun could go off ‘half-cocked’ and unexpectedly fire.

3. The mainspring presses against the tumbler and can rotate the hammer with a great deal of energy. The sear engages the tumbler when the gun is cocked and resists the force of the mainspring. When the trigger is pulled, it pushes the sear enough to release the tumbler, allowing the hammer to drive the flint forward.

4. The frizzen spring securely holds the cover attached to the frizzen over the priming pan. This arrangement has two benefits: firstly making the flintlock more weatherproof, and secondly, reducing the risk of the priming powder accidentally igniting - literally a ‘flash in the pan’.

5. ‘Brown Bess’ is a nickname of uncertain origin for the British Army's muzzle-loading smoothbore flintlock Land Pattern Musket and its derivatives. The musket design remained in use for over a hundred years with many incremental changes in its design. These versions include the Long Land Pattern, the Short Land Pattern, the India Pattern, the New Land Pattern Musket and the Sea Service Musket.

Wednesday, April 26, 2023

Horrible History Costume: Movie Armour

Introduction  What follows was inspired by a @HistoryFilmClub tweet shown right. Like many who responded, naming just one historical inaccuracy in a film or TV show proved far too difficult. Sadly, and contrary to the claims of directors, producers, costume designers et al., far too many historically themed media productions are beset with inaccuracies. Not wishing to be unreasonably critical, we thought there was an opportunity to highlight some of the more common errors and then counter them with whatever historical evidence exists. In this way we hope to learn something, but there are some caveats to be borne in mind:

  • We know films and TV dramas are fictional, whether they claim to be ‘based on true events’ or not. Yet that does not always excuse the liberties taken with characters, timelines, locations, costume, technology, props, action sequences (especially fight scenes), and a whole lot more.

  • That said, ‘errors’ are clearly excusable if a production is rooted in the fantasy genre, is not claiming 100% historical accuracy, or is not a factual documentary.

  • However, where inaccuracies appear, especially in historical documentaries, we think it only fair to point them out because they mislead the audience.

  • And finally, we are well aware from our experience advising filmmakers and from being on set that liberties are sometimes taken due to production constraints.
Body armour

So, with that in mind, what can we ‘learn from mistakes’ with depictions of body armour? Where to start? There are so many historically themed films and TV shows where the costume designers clearly have been allowed or encouraged to let their imaginations loose when it comes to armour. Look at an image of Russell Crowe from the film ‘Gladiator’ (right). While the overall impression is great, armourers, experts and connoisseurs of Roman history will note some weirdness going on. The highly decorated lorica musculata (‘muscle cuirass’) looks the part for an upper-class, wealthy Roman general. Even the pteruges (‘feathers’ or ‘wings’) protecting the upper arms and waist are fine, even if a bit wider than most ancient depictions and only in a single layer where more than one was often the norm (see below). However, both the sculpted versions and the 4th century BC original pictured below reveal that the large articulated plates at the shoulder [1] are wholly inaccurate. In this instance the costume designers have taken a perfectly good representation of a musculata and added the shoulder plates from the archetypal Roman soldier’s (not officer’s) lorica segmentata [2], something for which we have, as yet, no evidence historically or archaeologically.

Yet, this ‘Frankenstein’ armour is plausible. The film is set in the late 2nd century AD, so we have evidence that both types of cuirasses existed. What is more, General Maximus would have had the social standing, power and prestige, and been wealthy enough to have had bespoke armour made. Few people would have been in a position to tell him ‘no’. Besides, the design would have functioned incredibly well since all that has been done is replacing the banded torso plates of lorica segmentata with the two-piece back and breast plates of a musculata. So, although not historically correct, Maximus’ armour is plausible. What cannot be forgiven, however, are the wrist guards.
Wrist guards

Romans wearing wrist guards is a classic and persistent nonsense in film and on TV. In the screenshot (right) from the HBO series ‘Rome’ (2005) both Ray Stevenson and Kevin McKidd are prominently wearing wrist guards. There is, however, no contemporary pictorial evidence for these things, nor indeed has there been any archaeological find identified as a ‘wrist guard’, whether made of cloth, leather or metal (assuming such an item had indeed survived the intervening 2,000 years). These irritatingly inappropriate costume accessories were introduced, it is believed, in the early years of film-making to disguise the inconvenient fact that most actors wore wristwatches which, when removed, left immediately obvious tan lines. It appears the simplest solution for costumiers was to disguise the untanned skin with some form of wrist covering. Thus, the wholly inaccurate ‘wrist guard’ was born. Yet, despite there being no evidence for them and the repeated guidance offered to costume designers by history experts, these ‘things’ continue to feature in Roman period dramas.
Leather armour

Costume designers and makers frequently default to using leather as a relatively cheap and easy material to create a character’s body protection. In a fantasy setting like ‘The Lord of the Rings’, ‘Game of Thrones’ or ‘The Witcher’ it matters little - little that is until one considers leather armour’s questionable protective qualities. What is bothersome is the studded leather ‘fetish gear’ that all too often ends up adorning a character in a historical piece. Pictured above right is Australian actor Travis Fimmel playing the semi-mythical Ragnar Lothbrok in the TV series ‘Vikings’ (2013-2020). What he is wearing is typical of the costume armours created for shows of this genre. There are elements with historical precedent, but this garment of ring mail sewn onto a thin leather cuirass, complete with rivetted patches, is just odd. Why not equip our hero with a ¾ length, sleeved, mail byrnie? There are several possible reasons why film-makers might not:
• As shown right, a shirt of mail is made of thousands of metal rings, typically iron, joined so that each individual ring is linked to at least four others. Surviving examples exhibit rings of a solid ‘washer’ type punched from a sheet of metal or made from wire whose open ends were either welded or rivetted together. To make a shirt, four open-ended rings are linked to a closed ring (either welded or rivetted shut) before they too are closed with rivets. Rows of additional rings are added using the same technique to produce a full shirt of mail. But to produce an accurate mail shirt is both labour intensive and time consuming, and ‘time is money’.

• If each mail shirt is made from metal, then with the labour and time factors thrown in, production becomes expensive. The cost might be acceptable for one or two shirts to be used in close-ups or by the lead actors only, but prohibitive if hundreds of shirts are needed.

• Accurate mail shirts are heavy, and actors unfamiliar with the weight will tire more quickly, possibly risk injury, and their performance may be adversely affected.
To overcome these problems, for the Lord of the Rings film trilogy, Wētā Workshop made realistic-looking, lighter-weight PVC mail for both the lead actors and for the hundreds of extras that appeared throughout the films. Much cheaper than metal, PVC pipe was cut into rings, assembled by hand into a semblance of armour, and then electroplated. A total of 82.9 million links were manufactured from 7 miles of PVC pipe. Notably, metal mail shirts were also used, albeit sparingly because of their weight, for close-up filming where plastic rings would have been distinguishable.

To be fair to the leatherworkers on ‘Vikings’, they did create some brilliantly intricate designs to clothe the different actors and actresses throughout its six seasons. What a shame, though, that all their skill and hard work to portray ‘real’ or lifelike Vikings is just fantasy. For those pointing out that ‘Vikings’ was not a historically accurate production, you are quite correct. Yet, this type of fantastical leatherwork, once made, has a tendency to be recycled into other shows including documentaries, and it is in the latter where one ought to expect better production standards. Even so, leather was used as armour.
Cuir bouilli

Meaning ‘boiled leather’ in French, cuir bouilli was a common material used for various purposes in the Middle Ages and Early Modern Period. It was leather that had been treated to become toughened and rigid, as well as able to hold moulded decoration. It was the usual material for the robust carrying-cases that were made for important pieces of metalwork, instruments such as astrolabes, personal sets of cutlery, books, pens and the like.
Cuir bouilli has been used since ancient times, especially for shields, in many parts of the world. Leather does not survive long burial, however, so excavated archaeological evidence for it is rare. A few examples of Roman horse armour in cuir bouilli have been discovered and preserved. Typically these are chamfrons, designed to protect the horse’s forehead, nose and eyes. Likewise, an Irish shield of cuir bouilli with wooden formers, deposited in a peat bog, has survived for some 2,500 years. The technique was commonly used in the Western world for helmets; the pickelhaube, the standard German helmet (a Bavarian version is pictured right), was not replaced by the stahlhelm (‘steel helmet’) until 1916, in the middle of World War I. As leather does not conduct heat the way metal does, firemen continued to use boiled leather helmets until World War II and the invention of strong plastics.

In the Mediæval period, cuir bouilli was also used for some body armour, being both much cheaper than plate armour when manufacturing larger pieces suitable for breastplates and much lighter to wear [3]. Indeed, plate armour was too expensive for most soldiers to afford, so its use was largely reserved for the wealthy armoured horsemen or men-at-arms (‘knights’). Unsurprisingly, cuir bouilli was much less effective than iron/steel armour at withstanding direct blows from bladed weapons, hammers, maces, or later gunshot. It could, however, be reinforced against slashing cuts with the addition of metal bands, strips or splints. Modern experiments on cuir bouilli armour have shown it can reduce the depth of an arrow wound considerably.
Mail  As described above, mail is a type of armour consisting of small metal rings linked together to form a flexible mesh. It was in common military use between the 3rd century BC and the 16th century AD in Europe, and longer in Asia and North Africa. A coat of mail is often referred to as a hauberk, and sometimes a byrnie. Mediæval sources referred to armour of this type simply as mail (maille, maile, male, or other variants). In 1786 Francis Grose's ‘A Treatise on Ancient Armour and Weapons’ introduced the term ‘chainmail’, a term now commonly (but erroneously) used thanks to it being popularised by Sir Walter Scott's 1822 novel ‘The Fortunes of Nigel’.

In some films, knitted mail spray-painted with a metallic paint is still used instead of actual metal to cut down on cost. The small-budget film ‘Monty Python and the Holy Grail’ is a perfect example where knitted ‘armour’ adorns the actors. More recently, films striving to achieve more costume accuracy often use PVC rings, once again to reduce costs and the armour’s overall weight as mentioned before.
Extra padding  Mail armour provided an effective defence against slashing blows from edged weapons and some forms of penetration by thrusting and piercing weapons. A study conducted at the Royal Armouries in Leeds concluded that ‘it is almost impossible to penetrate using any conventional medieval weapon’ [6]. The degree of protection offered by mail against weapons is determined by four factors: the linkage type (riveted, butted, or welded), the material used (iron versus bronze or steel), the weave density (a tighter weave needs a thinner weapon to penetrate), and the ring thickness (generally ranging from 18 to 14 gauge (1.02 to 1.63 mm diameter) wire in most examples). Where mail is not riveted, a thrust from most sharp weapons can penetrate by forcing the rings open. When mail is riveted, however, only a strong, well-placed thrust from certain spears, or a thin-bladed or dedicated mail-piercing sword, can penetrate it. However, the projectiles from weapons such as war-bows and crossbows can, in nearly all cases, overcome mail.

One of the advantages of mail is its flexibility, but this means that a blow from a poleaxe or halberd, or from blunt weapons such as maces and warhammers, can often injure the wearer without needing to penetrate the armour. Blunt force trauma can result in serious bruising, painful and debilitating fractures, or potentially fatal concussion if the head is struck. Mail-clad warriors therefore typically wore padded armour, such as a gambeson, beneath the hauberk to cushion blows, and separate rigid helms over mail coifs for increased head protection.

Such padding also has the added benefit of preventing or reducing the chance of the metal rings being driven into the wearer’s body. Why then do we often see actors wearing just a mail coif over their head and no additional helmet? A case in point is the example pictured (right) from Netflix’s 2019 movie ‘The King’. Agreed, they are only actors and it’s make-believe, but there is a very good reason for wearing all these additional heavy, hot and tiring layers of protection - so you don’t die! Imagine the horrendous injuries possible should a weapon strike a man wearing just a mail coif, driving the rings into his fractured skull. So, if filmmakers want to claim historical accuracy and portray fighting men as they would have dressed for battle, then wearing a padded coif under a mail coif should be a ‘no brainer’.
And finally… This has either been a rant on some pet peeves with media representations of historical themes or, hopefully, some food for thought. Regardless, thank you for reading this far. Until next time, bon appétit.

References:

Bishop, M.C., (2002), ‘Lorica Segmentata Volume I: A Handbook of Articulated Roman Plate Armour’, JRMES Monograph 1, Braemar: The Armatura Press.

Endnotes:

1. In Mediæval parlance these would be called ‘pauldrons’.

2. We do not know what the Romans named this type of armour. Lorica segmentata, meaning ‘segmented cuirass’, was first used by scholars in the 16th century. Dr Mike Bishop points out: ‘Lorica (‘body armour’ or ‘cuirass’) is obvious, but the qualifying epithet has not survived. A reasoned guess has been made at lorica lam(m)inata, based on the use of lamina to describe a sheet of metal’ (Bishop, 2002, 1).

3. A full set of plate armour was made from tempered steel to completely encase the man from head to toe. Despite weighing around 15 kg to 25 kg (33 lb to 55 lb), the weight was distributed across the whole body. Plate armours were articulated, allowing the wearer to remain highly agile so they could run, jump, and otherwise move freely (being winched onto a horse is a myth!).

Wednesday, April 19, 2023

Horrible History Costume: Hair

Introduction  What follows was inspired by a @HistoryFilmClub tweet shown right. Like many who responded, naming just one historical inaccuracy in a film or TV show proved far too difficult. Sadly, and contrary to the claims of directors, producers, costume designers et al., far too many historically themed media productions are beset with inaccuracies. Not wishing to be unreasonably critical, we thought there was an opportunity to highlight some of the more common errors and then counter them with whatever historical evidence exists. In this way we hope to learn something, but there are some caveats to be borne in mind:

• We know films and TV dramas are fictional, whether they claim to be ‘based on true events’ or not. Yet that does not always excuse the liberties taken with characters, timelines, locations, costume, technology, props, action sequences (especially fight scenes), and a whole lot more.

• That said, ‘errors’ are clearly excusable if a production is rooted in the fantasy genre, is not claiming 100% historical accuracy, or is not a factual documentary.

• However, where inaccuracies appear, especially in historical documentaries, we think it only fair to point them out because they mislead the audience.

• And finally, we are well aware from our experience advising filmmakers and from being on set that liberties are sometimes taken due to production constraints.

Hair  So, with that in mind, what can we ‘learn from mistakes’ connected with hair in historically based creations? It should be noted from the outset that modern productions place far more emphasis on getting the look and feel of a period piece correct. Efforts are made to ensure hairstyles, for example, reflect the historical period, the social status of the individual, or perhaps a particular religious doctrine. Just occasionally, however, styling decisions are taken that deviate from historical accuracy. The most common ones are actresses sporting loose, long flowing locks or the complete opposite, close-cropped hair, in periods where to do so would have broken the cultural conventions of the time or would have been wholly inappropriate given a character’s status or social position.

Haircuts  Hair grows at more or less the same rate regardless of gender but, in much of western Europe, men have tended to wear their hair shorter and less styled than women. Surprisingly, it is not clear why that has been the case, although it could simply be down to what was considered acceptable at one point in time becoming a long-standing social convention. As far back as the Roman era, it was popular for women to wear their hair long with a distinct parting, but any man openly taking care of his lustrous locks was frowned upon. Roman soldiers noticeably followed this norm by keeping everything short and manageable. In fact, on campaign, where washing facilities were limited, hygiene potentially compromised, and the risk of body lice increased, short hair made eminent sense. With a few exceptions over the centuries, men have largely adhered to this notion even if the fashion was for elaborate wigs as in the 18th century. For most of British history, middle- and upper-class women in Britain (and across its empire) let their hair grow long into adulthood. In contrast, it was not uncommon for poverty-stricken women to cut off their tresses and sell their hair to wig makers as a much needed source of income. The fashion for letting hair grow long and for elaborate coiffures began to change in the post-Edwardian first quarter of the 20th century. When women were first deployed as drivers and nurses nearer to the front-line trenches in the Great War of 1914-18, they too began to cut their hair shorter for the very same reasons of hygiene and cleanliness.

Hairstyles also changed through necessity. During the Great War we begin to see a fashion for shorter hair becoming popular as more and more women went to work in factories. This trend continued in the ‘Roaring 20s’ and the age of the flapper, reflecting how social norms were altered quite radically. By the 1940s, hairstyles such as the Victory Roll, popularised by the actress Veronica Lake (below), were no longer all about glamour. They involved wearing hair up - very important during World War Two when so many women were once again operating factory machinery in support of the war effort.


Covering the head  The wearing of hats or head coverings such as scarves by wartime female labourers, such as the ‘Munitionettes’ of World War One shown right, made eminent practical sense. It kept hair cleaner in dirtier environments and, for safety’s sake, ensured hair would not be snagged in machinery. For some, head coverings continue to play a significant role in several different religions. While some commentators draw attention to Islam, often with negative connotations, Muslim women who wear the Hijab are far from alone as both Catholicism and Orthodox Judaism also have a long tradition of covering the head.

In fact, women covering their heads and their hair has much longer antecedents than either Catholicism or Islam. In many ancient societies, for example the Greeks and Romans, a woman out in public without her head covered, or with loose, long flowing hair, was a sign of impropriety - loose hair, loose woman. An example quoted by Sebesta & Bonfante (2001) serves to illustrate the point. Sulpicius Gallus, a Roman consul in 166 BC, ‘divorced his wife because she had left the house unveiled, thus allowing all to see, as he said, what only he should see’ (Sebesta & Bonfante, 2001, 49). This should hardly seem an alien notion because as previously mentioned it is still considered appropriate in certain social groups today, especially as part of religious doctrine.

More than that, taking the Romans as our touchstone, much like today a woman’s hair was an expression of personal identity. Hairstyles were determined by a number of factors, namely gender, age, social status, wealth and profession, but how one dressed one's hair was an indication of a woman’s status and role in ancient Roman society. What is more, the Romans were not alone in thinking hair was a very erotic area of the female body and thus a marker of a woman’s attractiveness. Consequently, it was deemed appropriate for a Roman woman to spend time on her hair to create a flattering appearance. Hairdressing and its necessary accompaniment, mirror gazing, were seen as distinctly feminine activities, although it should be remembered that hairstyling was clearly the leisure pursuit of the cultured, elegant woman. Simply having the time to style her hair in fashionably complex and unnatural hairstyles indicated wealth and social status. Not for the Romans were modern-day hairstyles reflecting comfort and naturalism. Indeed, a 'natural' style was associated with uncultured barbarians, who the Romans believed had neither the money nor the sophistication to create these styles. The association with barbarians was also a reason why Roman men kept their hair cut short.

Ooh, Matron  Despite the desire for elaborate hairstyles emphasising her attractiveness, women in ancient Rome, such as Sulpicius Gallus’ wife, were still restricted by their society’s values. As mentioned, it was deemed scandalous for ancient Greek or Roman women to be seen in public without their heads, and hair, uncovered. So, in the stills shown below we are presented with three actresses portraying Roman ‘matrons’, more of which later. The top left image of Polly Walker [1] from the HBO series ‘Rome’ (2005 - 2007) is a good representation of how one would expect a high-born, cultured Roman woman to dress in public.

Moving clockwise, the second image is from the Sky Atlantic series ‘Domina’ and has the actress Kasia Smutniak wearing a veil, presumably pinned to her hairdo [2]. There are a few noticeable problems with this look. Firstly, this style of veil has more in common with representations of women in ancient Greek art than Roman. Secondly, as described above, Roman women of noble birth would have paid a great deal more attention to dressing their hair than the loose hairstyle depicted. Admittedly ‘Domina’ is set in the Roman Republican era, so the myriad of elaborate coiffured hairstyles of the later Imperial period that are typically returned on internet searches would not be appropriate either. Thirdly, the production’s stylists might argue that this type of headdress reflects the flammeum, the veil worn by Roman brides on their wedding day, but that does not really stand up to historical scrutiny. For starters, Roman ritual dictated that the flammeum was always a yellow-red (luteum) colour to protect the bride as she passed from the protection of her family’s lares [3] to her husband’s (Sebesta & Bonfante, 2001, 48). More significantly, the flammeum totally covered the bride’s hair and face, and indeed much of her body; the veil depicted clearly does not. Curiously, wearing a veil in this manner is not represented in contemporary Roman art or sculpture, yet such veils frequently appear in the arsenal of many studio wardrobe departments. Much like the ‘Hollywood’ toga, more about which here, it seems most likely that this type of veil is an active choice by costume designers to simplify an actress’ outfit. The benefits are that simple cloth veils are easier to wear and avoid the more complex, time-consuming drapery of the palla, which is essentially the female version of the male toga.

The final image from director Ridley Scott’s film ‘Gladiator’ (2000) shows Lucilla [4], the sister of Emperor Commodus played by Connie Nielsen, in the arena of the Flavian Amphitheatre (‘Colosseum’). Bearing in mind all that has been discussed so far, and ignoring the historically incorrect off-the-shoulder dress, why would a woman of the imperial family appear in public without her hair and head covered? It is wholly inappropriate - but it’s just the movies, right?

So, what is appropriate?  What all these actresses have in common is that they are all depicting Roman ‘matrons’ (Latin: sing. matrona; ‘mother’), a title that exemplified certain ideals and status in ancient Roman society. From contemporary sources a matron was freeborn and either married or a widow, but beyond that the concept of a ‘matron’ is rather hard to define. Every Roman implicitly understood what being one signified. Yet, we are aware that matrons dressed in a certain distinctive style according to the ideals of the time. The clothing worn ‘signified her modesty and chastity, her pudicitia’ (Sebesta & Bonfante, 2001, 48). During the late Republic and throughout the life of the Empire a matron generally wore an ankle-length tunic covered by a long woollen dress known as a stola. Significantly for the current discussion, her hair would be bound with woollen bands (Latin: vittae, sing. vitta) to protect her from impurity and as an indication of her modesty. When she went out, she would add a palla (see right), a mantle draped over the shoulders and often over the head as well, very much in the ‘Middle Eastern custom of veiling women’ (Sebesta & Bonfante, 2001, 48). In ancient Roman society this was ‘a symbol of their honour and of the sanctity and privacy of family life’ (Sebesta & Bonfante, 2001, 48). Young girls, prostitutes, and those who had forfeited the title of matron, usually through being caught in adultery, were not permitted to wear either the stola or palla. Young girls instead wore tunics (Latin: tunicae, sing. tunica), while prostitutes and women convicted of adultery were required to wear togae (Latin: sing. toga) (Kittell-Queller, 2014); further information on this iconic Roman garment is available here.

And finally…

This has either been a rant on some pet peeves with media representations of historical themes or, hopefully, some food for thought. Regardless, thank you for reading this far. Until next time, bon appétit.

Endnotes:

1. Polly Walker portrayed a fictionalised version of Atia, the daughter of Julia Minor, sister of Gaius Julius Caesar. Her father was the praetor Marcus Atius Balbus. She had at least one younger sister, and possibly an older one. Due to this, she is sometimes called Atia Secunda or Atia Balba Secunda. She was an influential high-born Roman woman. Not only was she the niece of Caesar, she was also the mother of Gaius Octavius, Caesar’s adopted son and heir, who became the Emperor Augustus. Furthermore, through her daughter Octavia, she was also the great-grandmother of Germanicus and his brother, Emperor Claudius.

2. The series ‘Domina’ (Latin: ‘lady’ or ‘mistress of the house’) charts the rise to power of Livia Drusilla (30th January 59 BC - AD 29), the daughter of Roman senator Marcus Livius Drusus Claudianus and his wife Alfidia. In 38 BC, having divorced her first husband, she married Gaius Octavius (also known as Octavian). When the Senate granted Octavian the title Augustus in 27 BC, effectively making him emperor, Livia became the Roman empress. In this role, she served as an influential confidant of her husband, although she was also rumoured to have been responsible for the deaths of a number of Augustus' relatives.

3. Lares were guardian deities in ancient Roman religion. Their origin is uncertain; they may have been hero-ancestors, guardians of the hearth, fields, boundaries, or fruitfulness, or an amalgamation of these. Lares were believed to observe, protect, and influence all that happened within the boundaries of their location or function. The statues of domestic Lares were placed at the table during family meals; their presence, cult, and blessing seem to have been required at all important family events.

4. Annia Aurelia Galeria Lucilla or Lucilla (March 7th, AD 148 or 150 – AD 182) was the second daughter of Roman Emperor Marcus Aurelius and Roman Empress Faustina the Younger. She was the wife of her father's co-ruler and adoptive brother Lucius Verus and an elder sister to later Emperor Commodus. Commodus ordered Lucilla's execution after a failed assassination and coup attempt when she was about 33 years old.

Wednesday, April 12, 2023

The Recipes: Ostrich Eggs

At just under 2 kg, a single ostrich egg weighs about the same as a whole chicken and, remarkably, is the equivalent of 24 normal-sized chicken eggs. If you can obtain one, cook it and open it, then one ostrich egg can feed up to 10 people.

A typical egg is 200 mm tall, with a circumference of c. 450 mm, and provides 2,000 calories and 144 g of protein. The latter is three times an adult’s recommended daily protein allowance, but ostrich eggs also contain calcium, iron and vitamin A while being lower in cholesterol and saturated fat than chicken eggs. However, at 4 mm thick, the smooth, shiny shell is so tough that cracking an egg open takes more than the back of a spoon - much more.

From experience, we have actually soft-boiled an ostrich egg in what might be called a parody of an ancient Roman recipe for ‘soft eggs in a pine kernel sauce’ (Apicius, 7.13.3). The details behind that particular escapade, and the Apician recipe itself, can be found here. Suffice to say, soft-boiling an ostrich egg took about 45 - 50 minutes at a rolling boil (in a large cauldron over an open fire).

When it comes to eating, ostrich eggs can be made into an omelette, fried, scrambled, poached, or hard boiled if you have a spare 90 minutes. Remember, however, that these eggs are big, so everything is super-sized. You will need large catering-size pans to cook (and cool) the egg, and utensils robust enough to manage the egg’s weight. Even opening an ostrich egg is a challenge, needing either a hammer to (carefully) break through the shell or a saw to cut it open. And for boiled eggs, you will need a large dish or bowl to double as an egg cup. So, with the various pitfalls in mind, what follows are some simple (relatively speaking) ways to cook ostrich eggs.

One final note: in the UK the ostrich laying season runs from April to August. At the time of writing, and obviously when in season, Waitrose & Co are selling ‘Clarence Court’ ostrich eggs for £19.99 each.

Bon appétit!

Wednesday, April 05, 2023

Horrible History: Lighting the way

Introduction  What follows was inspired by a @HistoryFilmClub tweet shown right. Like many who responded, naming just one historical inaccuracy in a film or TV show proved far too difficult. Sadly, and contrary to the claims of directors, producers, costume designers et al., far too many historically themed media productions are beset with inaccuracies. Not wishing to be unreasonably critical, we thought there was an opportunity to highlight some of the more common errors and then counter them with whatever historical evidence exists. In this way we hope to learn something, but there are some caveats to be borne in mind:

•  We know films and TV dramas are fictional, whether they claim to be ‘based on true events’ or not. Yet that does not always excuse the liberties taken with characters, timelines, locations, costume, technology, props, action sequences (especially fight scenes), and a whole lot more.

•  That said, ‘errors’ are clearly excusable if a production is rooted in the fantasy genre, is not claiming 100% historical accuracy, or is not a factual documentary.

•  However, where inaccuracies appear, especially in historical documentaries, we think it only fair to point them out because they mislead the audience.

•  And finally, we are well aware from our experience advising filmmakers and from being on set that liberties are sometimes taken due to production constraints.

Lighting the way  So, with that in mind, what can we ‘learn from mistakes’ connected with lighting in historically based creations? Wall sconces and torches are favourite lighting motifs in films set in the ancient and Mediæval eras. There is one major problem with most depictions however: the settings are usually, but not always, too well lit. In Mediæval castles, for example, far more reliance was placed on windows to admit natural sunlight but, contrary to film and TV, ancient halls and castles generally would have been quite dark. Torches were not lit every ten metres along corridors, nor were rooms flooded with candlelight. Indeed, the idea that rooms were lit all the time is most certainly a ‘Hollywood-ism’. Rather, if you wanted illumination, most people carried it with them in the form of a candle, a rushlight or an oil lamp.

If you want to experiment in your own home to see how effective such things are, then get a candle. For authenticity’s sake, peasants would have used ones made from animal fat, known as tallow, which was smoky. The nobility and clergy, on the other hand, preferred beeswax, which burned brighter and cleaner. Regardless of your chosen medium, put the candle in some sort of holder, light it, then turn off all the modern lights. Allow your eyes a minute or two to adjust and you will find that you can move about your unlit home with perfect confidence by the light of a single candle. You may also discover that the light given off by candles, tapers, rushlights or oil lamps is a much warmer glow, perfect for inducing a more mellow ambience. All of which leads us to how film-makers use different lighting methods to set the tone or mood of a scene.

Torches  Film-makers typically deploy two forms of torch to light an actor’s way: handheld burning brands or those attached to a wall. At a time devoid of electricity (or gas lighting), torches provided much needed illumination, but fire was a constant hazard in the ancient and mediaeval world. Property owners, apartment dwellers, city magistrates, and even monarchs lived in fear of the potential damage caused by unchecked fires, particularly in urban areas. While burning torches were carried by the joyous celebrants at Roman era weddings, the very same torches still retained their potential to cause harm and, in some cases, signalled the potential for violence to break out.

After Gaius Julius Caesar’s assassination in 44 BC, Rome’s citizens gathered in the Forum to hear his erstwhile colleague Marcus Antonius’ eulogy. They collected pieces of wood and furniture from the surrounding locality to make an ad hoc funeral pyre upon which to burn the dictator's body. Fired by Antonius’ words - literally - many of those present then grabbed pieces of flaming wood as torches from the pyre. As the historian Plutarch noted: ‘people rushed up from all sides, snatched up half-burnt brands, and ran round to the houses of Caesar's slayers to set them on fire.’ It is unsurprising that rioters wishing to arm themselves would take to Rome’s streets brandishing a fax (Latin for a ‘torch’ or ‘firebrand’).

Burning brands  In most movie settings it is unclear what flammable material is being used with the typical burning brand. Usually it appears to be some form of material coated in a flammable fuel, perhaps pitch or animal fat, wrapped tightly about the end of a wooden stick. To be of use the burning torch is typically held in front of the bearer, but this creates a few problems. Firstly, as the layers burn off, little pieces of flaming cloth or drips of hot fat or pitch can become a hazard to the user. At the same time, the bearer is effectively walking into, and breathing, acrid smoke while being dazzled by the torchlight itself. Furthermore, the flammable material can burn off quite quickly and would need replacing frequently, which is something you rarely see in films (in much the same way that, in the old cowboy movies, six-shooters seemingly never ran out of bullets!). Forget the movies.

Sconces  The second type of burning brand is usually set in a bracket or sconce affixed to a wall. These torches typically consist of an open cup containing the burning fuel, although precisely what sort of fuel is often unclear. Some type of pitch-soaked wood or other material (cloth perhaps?) may have been preferred, or perhaps larger candles were used. Regardless, there are several problems with how this type of torch is often depicted on film.

                                        Firstly, a flickering flame produces inconsistent lighting which, depending on how the torch is positioned, may not illuminate an area or corridor particularly well. As before, if you were moving from place to place, it is far more convenient to carry a light source with you.

                                        Secondly, torches are largely impractical indoors because, depending on the fuel used, they can create a lot of obscuring, noxious fumes. Such smoke is potentially lethal in a poorly ventilated space such as in a tunnel or windowless corridor.

                                        Thirdly, lighting is often depicted positioned high up on walls well above an average person’s reach. As an anti-tampering measure this makes a great deal of sense and can be readily seen with building perimeter security lighting. For period pieces, positioning at height is not too much of a problem if gas or electric powered downlighting is used to illuminate an area. But consider the rather typical ‘Mediæval’ example pictured in a scene from the film ‘Ironclad’ (2011):

1. Two large burning wall sconces are shown, blazing away, one on either side of a gatehouse entrance. Set within the much larger, imposing arched gateway is a portal presumably secured by a door. Taking the height of said portal to be a standard size of just over 2 m, the wall sconces are set even higher. How do the castle’s occupants refuel these torches? While it is not inconceivable that a servant had to regularly position and climb a ladder to reach the bowls to refuel them, this is labour-intensive and not very practical.

                                        2. The sconces shown are best described as uplighters. Precisely what illumination value would these torches offer? Most of the light generated would be lost, although at night the castle walls would be bathed in a comforting, warm glow. Only a relatively small area of ground below would be lit, and then only poorly. In this example, the torches would only illuminate either side of the gateway which, let us be honest, is unmissably large, even at night. In other words, they again serve no practical purpose.

3. Moreover, it is daylight, so why are the torches burning at all? They are not providing any additional light, so why waste fuel? Fuel is expensive (something we can all identify with currently). Cutting or foraging for wood, or the processing of animal fat, would have been a time-consuming and, once again, labour-intensive activity. The castellan charged with running the castle would probably be loath to squander his lord’s money recklessly in this manner.

Just like the oddly placed braziers burning in street scenes for no apparent reason, these wall torches, and many other forms of naked flame depicted in movies, must be fuelled by gas supplied through a hose concealed within the sconce or, in the case of a brazier, camouflaged in some manner along the ground. Clearly gas-fuelled fires are preferred by set designers and lighting specialists as they provide a safe, continuous, adjustable and reliable flame. Moreover, the ‘atmospheric’ lighting they produce can still be supplemented by modern studio lights.

Night owls  The invention of gas lighting, followed by the electric light bulb, has allowed us to be far more active well into the night. Before their advent most folk rose with the dawn, worked during daylight hours, and went to bed earlier as darkness fell. In other words, they took advantage of natural sunlight and worked outside if possible. As already mentioned, oil lamps, rushlights and candles were all options to light the way from place to place. Wealthier folk could afford to burn oil or candles to light their evening or night-time activities. For poorer people, rushlights were an alternative but even these came with a cost, either in time and resources to make them, or money to buy them. In other words, lamp oil, candles and rushlights were all relatively expensive to burn and would not be wasted frivolously.

Consider the screenshot below from the film 'Barry Lyndon' (1975): why do so many films and TV shows depict scenes with gratuitous numbers of candles burning even in daylight? Oddly, this even applies to electric lighting in much more modern settings. The answer seems simple enough. Lighting technicians are employed to create the director/producer’s desired ambience or ‘mood’. So, lights, candles, etc. are left on or burning in positions where there is no obvious lighting need, and at inappropriate times of the day, simply to set the scene. Yet, it is also worth bearing in mind that even high-definition cameras do not ‘see’ as the naked eye does. By their very nature cameras need light to capture images. Too much and the image appears over-exposed or suffers ‘white out’. Too little and the camera struggles even to register an image. While these effects might be useful to enhance the storyline, viewers may struggle to see what is going on, which can be both frustrating and irritating. With this in mind, it just makes sense for lighting technicians to err on the side of caution regardless of how historically inaccurate the scene then becomes.

And finally…  This has either been a rant about some pet peeves with media representations of historical themes, or food for thought. Regardless, thank you for reading this far. Until next time, bon appétit.