Tuesday, December 20, 2022

Dispelling Some Myths: ‘Blitzkrieg’ was a ‘thing’

‘Blitzkrieg’ is a compound of two German words whose literal translation means ‘lightning war’ [1]. Military historians have defined ‘Blitzkrieg’ as the employment of the concepts of manoeuvre and combined arms warfare developed in Germany during both the interwar period and the Second World War. The problem is that the word never entered the official terminology of the Wehrmacht, either before or during the war. Indeed, pre-war use of the term is rare. Karl-Heinz Frieser researched the origin of the term for his book ‘Blitzkrieg Legende’ and found only two examples:

The military journal ‘Deutsche Wehr’ of 1935 used the term in an article on how states with insufficient supplies of food and raw materials can win a war.

Three years later the ‘Militär-Wochenblatt’ of 1938 defined ‘Blitzkrieg’ as a ‘strategic attack’ carried out by operational use of tanks, air force, and airborne troops.

It is true, however, that during the interwar period aircraft and tank technologies had matured and were thus combined with systematic application of the traditional German tactic of Bewegungskrieg (manoeuvre warfare), deep penetration of enemy lines, and the bypassing of enemy strong points to encircle and destroy enemy forces in a Kesselschlacht (‘cauldron battle’). Strategically, the intention was to swiftly effect an adversary's collapse through a short campaign fought by a small, professional army. Operationally, its goal was to use indirect means, such as mobility and shock, to render an adversary's plans irrelevant or impractical. To do this, self-propelled formations of tanks, motorised infantry, engineers, artillery, and ground-attack aircraft operated as a combined-arms unit.

In English  The term was first popularised in the English-speaking world by the American news magazine Time, which used it to describe this form of armoured warfare during the German invasion of Poland in 1939. Published on September 25th, 1939, well into the campaign, the account reads:

‘The battlefront got lost, and with it the illusion that there had ever been a battlefront. For this was no war of occupation, but a war of quick penetration and obliteration - Blitzkrieg, lightning war. Swift columns of tanks and armored trucks had plunged through Poland while bombs raining from the sky heralded their coming. They had sawed off communications, destroyed animals, scattered civilians, spread terror. Working sometimes 30 miles (50 km) ahead of infantry and artillery, they had broken down the Polish defenses before they had time to organize. Then, while the infantry mopped up, they had moved on, to strike again far behind what had been called the front.’

A year later, the term had gained popular traction in the Western media to describe the highly successful German manoeuvre operations in the campaigns of 1939-1941, where ‘Blitzkrieg’ capitalised on surprise penetrations (through the Ardennes Forest region, for example), the Allies’ general unpreparedness, and their inability to match the pace of the German attack.

Despite its common usage by journalists, including in German wartime propaganda itself, the word ‘Blitzkrieg’ was never used by the Wehrmacht [2] as an official military term and was never officially adopted as a concept or doctrine. According to David Reynolds, Hitler himself called the term Blitzkrieg ‘a completely idiotic word’ (ein ganz blödsinniges Wort). Johann Adolf von Kielmansegg, a senior officer in the Heer (army), even disputed the idea that it was a military concept, asserting that what many regarded as ‘Blitzkrieg’ was nothing more than ‘ad hoc solutions that simply popped out of the prevailing situation’. The Wehrmacht were, nevertheless, better trained and motivated to combine Bewegungskrieg (manoeuvre warfare) with Auftragstaktik, in which the commander expressed his goals to subordinates and gave them discretion in how to achieve them. In simple terms, much more authority was delegated to local commanders, enabling them to make speedy, executive decisions, thereby increasing the tempo of operations and wrong-footing the enemy.

Popular usage  ‘Blitzkrieg’ has since expanded into multiple meanings in more popular usage. From its original military definition, ‘blitzkrieg’ may be applied to any military operation emphasizing the surprise, speed, or concentration stressed in accounts of the Invasion of Poland. During the war, the Luftwaffe terror bombings of London came to be known as ‘The Blitz’. Similarly, ‘blitz’ has come to describe the rush tactic of American football, the form of chess in which players are allotted very little time and, in law enforcement, a type of fast, ruthless attack on the person.

Endnotes:

1. ‘Blitz’ in German means ‘flash, lightning or thunderbolt’, while ‘Krieg’ means ‘war’.

2. 'Die Wehrmacht' was the name given to the unified armed forces of Nazi Germany from 1935 to 1945. The Wehrmacht consisted of the Heer (army), the Kriegsmarine (navy) and the Luftwaffe (air force).


Thursday, December 08, 2022

About History: The Who’s Who of Beefeaters

It seems nobody knows for certain why the King’s Body Guard of the Yeomen of the Guard are called ‘Beefeaters’. Over the years various explanations have been offered. Perhaps the term originated in an Old English word for a servant. One of the Beefeaters’ original roles was to attend the monarch at mealtimes, so perhaps the name comes from buffetier, Old French for a type of waiter.

After the Battle of Bosworth Field in 1485, the new king, Henry VII, conferred the title of ‘Yeomen of the Guard’ on the men chosen to protect his person. In doing so, Henry was also proclaiming to the people that his body-guard had been selected not from the nobility, but from that class just below them [1] who had proved themselves to be the national strength of the country at home and abroad. The Yeomen of the Guard are, therefore, the oldest of the Royal bodyguards and the oldest military corps in existence in Britain. Today there are 73 Yeomen of the Guard, all of whom are former warrant or non-commissioned officers of the British Armed Forces, who wear a distinctive Tudor-style uniform of red, white and gold symbolic of their origin.

It is thought that the nickname ‘Beefeater’ may reflect that the Yeoman of the Guard received a substantial beef ration from the Tudor king’s table. Much later an Italian visitor to the court of Charles II noted the guardsmen were called ‘beef-eaters’ for this very reason.

People often confuse the Yeomen of the Guard (The Body Guard) with the Yeomen Warders who guard the Tower of London. The reason for this is easily understood, as both are popularly known as ‘Beefeaters’ and their ‘State dress’ uniforms are similarly styled. To confuse the matter further, the Yeomen Warders also wear the ceremonial Tudor-style scarlet uniform, but only on specific ceremonial occasions and, crucially, without a cross-belt. The everyday dark blue and red ‘undress’ uniform is the one generally seen being worn by Warders on duty at the Tower of London.


The Yeomen of the Guard, however, are headquartered solely at St James’s Palace and have no duties at the Tower of London. Indeed, most Yeomen have a full-time second career outside of The Body Guard. Most also live across the British Isles and are only summoned for duty on ceremonial occasions.

When wearing their scarlet uniforms, the Yeomen of the Guard can be recognised by the distinctive cross-belt worn from the left shoulder. Originally this belt had a practical function: to support the weight of a cumbersome Arquebus [2] when this heavy matchlock firearm was carried. While the Arquebus has not been fielded since the 1600s, the cross-belt has been retained and is worn with pride. The Yeomen of the Guard also carry a sword, which is never drawn unless to protect the sovereign, and a polearm known as a ‘partisan’. The latter’s blued steel head is gilded with the Royal Arms and Royal Cypher and Crown, and is attached to a long gilt socket, below which is a large yellow and crimson tassel. The blade is attached to a wood shaft just under 2m long.

References:

BBC History Magazine, November 2022, ‘Q&A’, p. 60.

‘The Royal Family’ website, ‘Yeoman of the Guard’, available online (accessed December 6th, 2022).

‘The King's Body Guard of the Yeomen of the Guard’ website (2022), available online (accessed December 7th, 2022).

Endnotes:

1. In the social class structure of the 15th century, Yeomen were ‘gentlemen just below the rank of esquire.’

2. The muzzle-loading Arquebus, also called Harquebus or Hackbut, was a long, portable smoothbore matchlock gun. Although it was the first gun fired from the shoulder it was generally fired from a support, against which the recoil was transferred from a hook on the gun. Invented in Spain in the mid-15th century, the weapon’s name seems derived from the German Hakenbüchse meaning ‘hooked gun’. The harquebus had an effective range of less than 200 m. It was superseded by the larger musket in the mid-16th century.


Friday, November 25, 2022

Victorian Schools

We have an image of school days in Victorian Britain as strict, with corporal punishment rife. Daily life in the elementary school, however, was often a battleground between idealistic educationalists, government interlopers, struggling parents and hard-pressed teachers. Not to mention the children.

A patchwork of school systems operated across the country. The experiences of learning in one classroom could be very different from the next. Those who could afford the small fee might attend local parish or church schools. An alternative, for those willing to embrace a more informal approach to education, was the privately run ‘dame school’, operating out of a local teacher’s home.

Early in Queen Victoria’s reign (1837 to 1901), child labourers were supposed to receive schooling from their employers under the Factory Act, while those found homeless or begging might be sent to tough industrial schools to learn a trade.

For those unable to fund their own education, another option was the schools established by charitable organisations. Most notable were the so-called ‘ragged schools’, formed in 1844 to offer free education to Britain’s poorest children. By 1870 there were around 350 such institutions across the country.

For many of these children just making it to school each morning was an achievement. A child attending school often meant the poorest families losing an income they simply could not live without. Records show that girls were more likely to miss school than their brothers. In families that could not cope, they would be the first to be hauled back home to help out. Truancy levels also rocketed when seasonal work was available.

By 1880 the Elementary Education Act had made school attendance compulsory, which meant that specially appointed officers could slap parents with fines and even threaten them with prosecution. Attendance and punctuality became central components of ‘good behaviour’. Efforts to encourage or ensure such good behaviour can still be seen in many schools today.

Those who did make it through the school gates each morning were not necessarily ready to learn. Many children were on the ‘half-time system’, which allowed them to fit schooling in alongside work. This relentless schedule undoubtedly took its toll on young learners, with teachers frequently recording concerns about children arriving already exhausted after a long shift at work.

Bells marking the beginning of the school day rang out at different times across Britain. With evening and weekend sessions employed to fit in with working hours, there was no universal starting time.

According to an 1845 timetable, each morning began with prayers and singing, followed by catechism with analysis and scripture proofs. Religion was a key element of every school day even though expectations of biblical knowledge were unreasonably high.

The classrooms in which lessons took place varied as much as the schools themselves. In the early 19th century, many schools were run on a monitor system - in which all children were massed in one large hall, to be taught in small groups by older pupils. By the middle of the century classrooms similar to today’s began to emerge. Classes of up to 40 pupils were not uncommon, and sizes could stretch to 80 during staff shortages. As there were no set ages for entering or leaving the education system, classes would be organised according to ability rather than age, in a system known as ‘standards’. Wherever possible boys and girls were separated (gendered entrances were a common feature in purpose-built schools), but resources often simply did not extend far enough.

With schools generally placed in the centre of communities, pupils would usually be given a long lunch break (up to two hours) to head home for food. Free lunches would not be provided on a national scale until the 20th century, but sometimes food was donated by charitable organisations, or even the teachers themselves. This demonstrates the important welfare role that a school could play in a poor community.

After lunch a monotonous afternoon of vocational subjects awaited pupils of industrial schools. These would often be divided by gender - while girls studied sewing and later cooking, boys might be offered commercially orientated drawing or woodwork.

Alongside religious studies, the three ‘Rs’ were the backbone of most school programmes. Pupils would typically find themselves ‘reading from miscellaneous books’, ‘writing on slates’, and performing ‘arithmetic from blackboard’.

One way to raise classroom standards was to enforce strict discipline. In popular culture, visions of Victorian corporal punishment are all too familiar. While the physical punishment of children was both legal and widely accepted at home and even in the street, the idea of schools terrorised by cane-wielding masters is not universally accurate. In fact, the overuse of corporal punishment was a sign of failing to maintain strict classroom control. From the 1890s it became standard practice to record all punishments in a book, to be checked by inspectors. Teachers noted that the most effective forms of punishment were often non-physical, especially those that involved an element of ‘naming and shaming’ - whether that meant a child wearing a dunce’s hat or a sign around their neck.

By the 1890s reforms to the restrictive ‘payment by [examination] results’ system meant that those wishing to broaden the minds of their young charges could increasingly afford to venture into subjects such as geography, science and history. Whether those lessons would inspire fascination or boredom was largely down to the teacher.


Tuesday, November 22, 2022

On This Day: Blackbeard's Demise

November 22nd, 1718: On This Day probably the most notorious pirate of them all, Edward Teach, known as ‘Blackbeard’, met his demise. Towards the end of the so-called ‘Golden Age of Piracy’, Teach had terrorised the Atlantic seaboard of the Americas for two years.

He killed fewer people than other pirates, captured only one major ship, and did not plunder enough goods to make himself wealthy. Yet, by the time of his death, he was practically a household name. How then did a seemingly average pirate become one of the most notorious in history?

Although little is known about his early life, Teach was likely born in Bristol, England. He may have been a sailor on privateer ships during Queen Anne's War before settling on the Bahamian island of New Providence, a base for Captain Benjamin Hornigold, whose crew Teach joined sometime around 1716. Hornigold placed him in command of a sloop he had captured, and the two engaged in numerous acts of piracy. Their numbers were boosted by the addition to their fleet of two more ships, one of which was commanded by Stede Bonnet, but toward the end of 1717 Hornigold retired from piracy, taking two vessels with him.

Teach captured a French merchant vessel, renamed her Queen Anne's Revenge, and equipped her with 40 guns. With this, Teach’s celebrity quickly rose, in large part due to his nickname ‘Blackbeard’. In an age when the fashion for men was to be clean-shaven, he grew his thick black hair into a long mane and cultivated a beard that reached halfway down his chest. To enhance his fearsome appearance he reportedly tied lit tapers under his hat to look as if he had emerged from the depths of hell and so frighten his enemies. In this manner, Blackbeard was able to subdue his victims through fear rather than lethal violence.

Teach formed an alliance of pirates and blockaded the port of Charleston, South Carolina. After Teach successfully ransomed its inhabitants, Queen Anne's Revenge ran aground on a sandbar near Beaufort, North Carolina. Teach parted company with Stede Bonnet and settled in Bath Town, where he accepted a royal pardon. He was soon back at sea, however, and quickly attracted the attention of Alexander Spotswood, the Governor of Virginia, who arranged for a party of soldiers and sailors to try to capture the pirate. On November 22nd, 1718 Blackbeard’s ship was cornered off Ocracoke Island, North Carolina.

During a ferocious battle, Teach and several of his crew were killed by a small force of sailors led by Lieutenant Robert Maynard. Indeed, Maynard engaged the pirate captain in single combat, mortally wounding him. ‘Well done, lad!’ Blackbeard reputedly shouted, too injured to retaliate, before being decapitated. His head was mounted on the ship’s bowsprit to be displayed up and down the eastern seaboard of North America.

Blackbeard’s final battle was much debated for years afterward resulting in a rich mythology. His skull was encased in silver and used as a drinking vessel, or so some said. Others claimed Blackbeard had fourteen wives and buried treasure. 

We can be sure that Teach, a shrewd and calculating leader, spurned the use of force, relying instead on his fearsome image to terrorise those he desired to rob. Moreover, and contrary to the modern-day picture of the traditional tyrannical pirate, he commanded his vessels with the permission of their crews, and there is no known account of his ever having harmed or murdered those he held captive. He was romanticised after his death and became the inspiration for a number of pirate-themed works of fiction across a range of genres. One thing appears clear: no other pirate ever matched his infamy.

Reference:

Simon, R., (2022), ‘Q&A: Who was history’s most notorious pirate?’, BBC History Magazine February 2022, p. 48. 


Monday, November 21, 2022

A Brief History of Food: Victorian Innovation

The Sun never sets  When Victoria succeeded to the throne in 1837, Britain was already a global maritime trading power. From the late 16th century, Britain had spent nearly two centuries increasing its maritime and trading contact with Asia, Africa and the Americas. By the time Queen Victoria adopted the title of Empress of India in May 1876, Britain’s influence and access to exotic spices, foods and drinks extended across a quarter of the world.

By looking at three everyday, popular beverages one can perhaps see the benefits of living in Victoria’s Empire. Tea, coffee and chocolate all had origins outside Europe; the first from the Far East and the latter two from the New World. Introduced in the 16th century, coffee became fashionable in 17th century English coffee houses where, for the price of a penny, customers purchased admission and a cup of coffee. Here people could gather to drink coffee, to socialise, learn the news of the day, and perhaps meet with others to discuss matters of mutual concern. The absence of alcohol created an atmosphere in which it was possible to engage in more serious conversation than in an alehouse. Coffee houses thus played a key role in both politics and the development of financial markets and newspapers.

Anyone for tea?  Just ten years after the first coffee house opened in Oxford in 1650, the English diarist Samuel Pepys mentions drinking tea for the very first time. As Pepys was a member of the wealthy and fashionable London set, his failure to mention tea drinking before his diary entry for September 25th, 1660 suggests it was an uncommon practice. This was soon to change.

Catherine of Braganza is said to have popularised the Portuguese habit of tea drinking in England after her arrival and marriage to King Charles II in 1662. She was familiar with tea as traders had been importing it from the East to Catherine’s homeland, Portugal, for some time. Its high price and exoticism helped tea drinking become very fashionable in aristocratic circles and at the royal court where Catherine grew up. Once in England, her taste for tea likewise became the vogue at the British royal court. Tea drinking’s popularity spread through aristocratic circles and then to the wealthier classes. By the 19th century, as the price of tea dropped, its use became widespread, and tea became a staple of Victorian Britain.

As for Food  At the beginning of Victoria’s reign Britain was still a rural nation with four-fifths of the population living in the countryside. Almost all food was still produced locally, and since most people lived in the countryside, they had ready access to it. For most, seasonal crops would be supplemented with preserved and pickled foods.  

The Industrial Revolution rapidly gained pace during Victoria's reign largely due to the harnessing of steam power. Victorian engineers developed bigger, faster and more powerful machines that could run whole factories. The substantial increase in the number of factories, particularly textile factories or mills, and the invention of new machines that could perform labour-intensive tasks in a fraction of the time, left many people out of work. The rural population flocked to the towns in search of jobs in the new industries. By the middle of the 19th century over 50% of the population lived in towns and cities.

Much as today, there was great disparity between rich and poor. While the wealthy ate a tremendous amount, and wasted far too much of it, a large proportion of the population relied on a simple diet of bread, dripping, vegetables, and tea. With such a dramatic increase in the urban population it became imperative to find new ways to transport and store food. The arrival of steam ships and the railways made it possible to move the basic foodstuffs - grain, flour, potatoes, root vegetables and beer - at speed and over greater distances. 

Other innovations making food distribution easier included long-life products such as condensed milk, dried eggs and soups, and bottled sauces. In 1865 Britain’s first meat-canning factory was established, and by the 1870s almost every middle-class kitchen had a tin opener. By the 1880s refrigerated transport became possible allowing meat for example to be moved over even greater distances. Large-scale imports of meat from the Americas meant it became cheaper and a regular part of the diet of all classes for the first time.

Imperial Influences  During the period of British rule over India, known as the British Raj (1858 to 1947), the Victorians started appropriating and adapting Indian recipes to create an Anglo-Indian cuisine with dishes such as Kedgeree (1790) and Mulligatawny soup (1791). Indian food was served in coffee houses from 1809 and cooked at home from a similar date, as cookbooks of the time attest. In her cookbook Modern Cookery for Private Families (1845), Eliza Acton recorded recipes for curries; a favourite dish of Queen Victoria, curry became ever more popular amongst ordinary Britons. Today curry is considered a national dish.

Popular Cookbooks  During the Victorian era, the diverse nature of English cooking was finally collected and made available to the middle classes by a series of popular books. Certain authors became household names. One of the first was Mrs Rundell, whose cookbook, A New System of Domestic Cookery, was published in 1806. It went through sixty-seven editions by 1844, selling hundreds of thousands of copies in Britain and America. A year later food writer and poet Elizabeth ‘Eliza’ Acton's Modern Cookery for Private Families was published. This was one of Britain's first cookbooks aimed at the domestic reader and, uniquely, introduced the practice of listing ingredients and giving suggested cooking times. Acton's innovative layout described the cooking process first, followed by a list of the ingredients and the total cooking time required for the preparation of the dish.

Modern Cookery contains mainly English recipes, although Acton labelled several of them ‘French’. One chapter, however, focuses on curries and provides recipes for Eastern ‘chatneys’ (or chutney), which are treated as naturalised Anglo-Indian dishes rather than of exclusively Indian origin. In a series of firsts, the book contains the earliest mention of ‘Christmas pudding’, which had hitherto been called ‘plum pudding’, the first recipe for brussels sprouts, and the first use in an English cookbook of the word ‘sparghetti’ [sic.].

Rather unfairly, Acton’s work has been overshadowed by the most famous English cookery book of the Victorian era, Mrs Beeton's Book of Household Management. In 1857, when Isabella Beeton began writing the cookery column for The Englishwoman's Domestic Magazine, edited and published by her husband, Samuel, many of her recipes were taken from readers’ submissions or plagiarised from other works, especially Eliza Acton’s Modern Cookery. In 1859 the Beetons launched a series of 48-page monthly supplements to The Englishwoman's Domestic Magazine. When, in October 1861, the 24 instalments were published in one volume as Mrs Beeton's Book of Household Management, the book went on to sell 60,000 copies in the first year, and nearly two million copies over the next seven years to 1868.

Unlike Acton’s book, which was to be read and enjoyed, Mrs Beeton's Book of Household Management was a manual of instructions and recipes to be referenced as needed. If evidence of her plagiarism were needed, Beeton’s recipes copy the novel layout of Acton's Modern Cookery, albeit with one major alteration. Where Acton specifies the method of cooking followed by a list of the required ingredients, Beeton lists the timings and components before the cooking process. This latter format is still the most common today.


Sunday, November 13, 2022

Dispelling Some Myths: 'Trench Art'


One of our favourite sources of entertainment and ideas for this Blog is the BBC’s ‘Bargain Hunt’ television series. As regular viewers, and frequent visitors to antique centres, we have found our knowledge of antiques and collectibles has improved no end over the last few years courtesy of the programme. For those unfamiliar with the show, it is an entertainment programme in which two pairs of contestants are challenged to buy antiques from shops or a fair and then sell them at auction for a profit. Fairly frequently contestants are drawn to decorated spent artillery shell cases, which their guiding expert will, without doubt, call ‘Trench Art’. What then follows is the explanation that such objects were handcrafted by soldiers in the trenches of the First World War.

It is an image that appeals to modern notions of desolate soldiers whiling away their boredom and fear in the wet, muddy trenches at the Front making commemorative pieces to send back to their distant loved ones. While this is an emotive picture, it is one that is largely a fantasy. For starters, soldiers on both sides in the First World War did not spend all their time in the trenches. Describing the typical daily life experienced by soldiers in the Great War, Senior Curator at the British Library Paul Cornish wrote:

‘For the soldiers of the First World War fighting was an exceptional circumstance, rather than the norm. For many, life consisted of toiling to keep those at the front supplied. But the frontline troops themselves were rotated to ensure that time spent facing the enemy was balanced by periods of rest and, occasionally, home-leave. The determination of soldiers to keep fighting could be strongly influenced by the regularity of this rotation.’

Already the notion that ‘Trench Art’ was crafted by soldiers actually in the trenches is untenable. Yet before we look at who may have made these objects, how do we define ‘Trench Art’? What is it?

What is 'Trench Art'?

To be truly classed as ‘Trench Art’ an item’s manufacture should be directly linked to armed conflict or its consequences. According to the Imperial War Museum, however, ‘Trench Art is a misleading term applied to a wide variety of decorative items, sometimes also functional, produced during or soon after the First World War. They were made in all the countries engaged in combat. Ashtrays, matchbox holders, letter knives, model tanks and planes are typically found. Often they are re-purposed lead bullets, brass recovered from spent charge cases, and copper from shell driving bands, although carved wooden and bone pieces, and embroideries are also seen.’ 

While the practice certainly flourished during World War I, 'Trench Art' also describes souvenirs manufactured by service personnel during World War II. Moreover, the history of the practice spans conflicts from the Napoleonic Wars to the present day. For the historian, ‘Trench Art’ provides tangible evidence for the materials available to the makers and, on occasion, may offer insights into the maker’s feelings and emotions about the war. Yet few examples were fashioned literally in the trenches. Nor were all made by soldiers.

Who might have made ‘Trench Art’?

There are four broad categories of ‘Trench Art’:

Items made by soldiers  Some servicemen certainly did make ‘Trench Art’ as souvenirs for themselves or as gifts for friends and family. Most, however, probably bought objects from vendors well away from the Front. Of the items soldiers made themselves, it is probable that only the very smallest bone and wooden objects were created in the actual front line trenches. The daily routine for those troops in close proximity to the enemy most likely did not allow much time for handicrafts. One should also remember that soldiers would regularly rotate through a basic sequence of being deployed or fighting in the front line, followed by a period of time in the reserve or support line, then rest and recuperation miles behind the front.

The source of most soldiers’ ‘Trench Art’ is therefore much more likely to be workshop troops in rear echelon areas. They had the materials, machinery, skill and occasional spare time to do so. More importantly, money could be made selling souvenirs to soldiers transiting the rear on their way home.

Wounded and convalescing soldiers were encouraged to work at handicrafts involving wood, metal and embroidery as part of their rehabilitation. ‘Trench Art’ was also made ‘at home’ during the war by those awaiting call-up.

Items made by POWs and internees  A second category of ‘Trench Art’ consists of items made by prisoners of war and interned civilians. POWs had good reasons to make decorative objects: free time and limited resources. Much POW work was therefore done with the express intention of trading the finished article for food, money or other privileges. Examples of straw work or scrimshaw (scrollwork, engravings, and carvings done in bone or ivory) still survive that were made by French soldiers imprisoned in England during the Napoleonic Wars of the early 19th century.

Items made by civilians  In France and Belgium work to make souvenirs was also given to civilians displaced by the war. Those unemployed because of the fighting were quick to exploit a new market to make money. Embroidered postcards were produced in what quickly became a cottage industry, with civilians buying the surrounds and embroidering a panel of gauze. These postcards depicted regimental crests or patriotic flags and national symbols in abundance, and millions were produced over the course of the war.

At war's end, when civilians began to reclaim their shattered communities, a new market appeared in the form of pilgrims and tourists. Over the ensuing twenty years mountains of discarded debris, shell casings, and castoff equipment were slowly recycled, with mass-produced town crest motifs being stuck onto bullets, shell casings, fuse caps, and other paraphernalia to be sold to tourists.

One often overlooked civilian source of ‘Trench Art’ was the major department stores. In the immediate post-war period they offered to turn war souvenirs such as shell fuses, often brought back by soldiers, into wooden-based paperweights, for example. If an ex-soldier had no wartime souvenir, then the department stores could oblige. This source may indeed explain how the bulkier ‘Trench Art’, such as dinner gongs and poker stands made from shell charge cases, which clearly would not have fitted in a soldier’s kitbag, came to be so widespread.

Commercial items  The fourth and final category is purely commercial production resulting from the post-war sale of tonnes of surplus government materiel. Undoubtedly some of this was converted to souvenirs of the conflict. Ship breaking, particularly if the ship had been involved in significant events such as the Battle of Jutland, resulted in wood from the ship being turned into miniature barrels, letter racks, and boxes. The addition of small brass plaques announcing the military connection or historical significance made such items commercially viable.

'Trench Art' today

‘Trench Art’ continues to be made today. Across the world, and especially in Africa and the Middle East, civilians and former combatants re-fashion munitions and other war detritus to meet a tourist and export market. So, while it is tempting to think that an ancestor hand-crafted a piece of ‘Trench Art’ held by a family, that may not be the case. There was a large commercial trade during and after the war. Objects may have been bought by the soldier, or by a relative on a subsequent battlefield visit. Moreover, in Europe (most notably in France and Belgium), original First World War shell casings are still being re-worked to meet a growing trade.

References:

Cornish, P., (2014), ‘The daily life of soldiers’, British Library, Available online (accessed October 5th, 2022).
Imperial War Museum (www.iwm.org.uk), ‘Trench Art’, Available online (accessed October 5th, 2022).

Monday, November 07, 2022

About History: the Scold’s Bridle

The ‘Scold’s Bridle’, sometimes known as ‘The Gossip’s Bridle’, was a punishment used officially and unofficially in England to discipline people, almost invariably women, who gossiped or spoke too freely. The name perfectly encapsulates the device’s role in controlling women whose speech was thought to be aggressive or disruptive, particularly towards husbands, addressing contemporary fears that such outspokenness could upset the prevailing gender power structures in communities. There is also evidence that Bridles were used to punish blasphemers and religious dissidents of both sexes.

The earliest evidence for these devices dates from the end of the 16th century. Although specifications varied, presumably according to who commissioned it or who made it, a Scold’s Bridle was a large iron framework placed on the head of the offender, forming a type of cage. The cage integrated a metal strip, known as a ‘bit’, which, like a horse’s bridle bit, fitted into the mouth to constrain the tongue and render speech impossible. Bits sometimes incorporated a spiked plate, or spikes, so that any movement of the tongue was certain to cause severe injuries to the mouth. The ‘bridled’ person would be symbolically paraded through or exhibited in their local community as a form of public humiliation. The Bridle’s use silenced the individual, signalled their misdemeanour, and invoked personal shame.

In Scotland a more brutal version known as a ‘Brank’ featured extra prongs that extended farther into the offender’s mouth. In addition, a chain was attached to the back of the cage by which the individual was led around. Aggressive tugs on this chain could cause disfigurement and loss of teeth.

The use of such punishments lingered for centuries. The practice continued into the 19th century when Scold’s Bridles were employed in workhouses to discipline unruly women and those suffering from alcoholism.

Reference:

Nash, D., (2022), ‘What was a scold’s bridle?’, Q&A, BBC History Magazine, p. 36.

Tuesday, November 01, 2022

Ladies, Lamps and the Crimean War

The Battle of Waterloo, fought on June 18th, 1815, saw the final defeat and exile of Napoleon Bonaparte. Out of a common fear of revolutionary France’s re-emergence, the victorious Allies instituted the ‘Congress System’, which was intended to be the basis of Great Power relations in Europe post 1815. The Congress System was never likely to last, however. Not only did it depend on the fear of France but also on the assumption that the real differences between the Allies could be overcome relatively easily. In the revolutionary year of 1848 such illusions were swept away and the Congress System collapsed. That year also saw the re-emergence of France, albeit temporarily, as a major European power, now ruled by Louis-Napoleon Bonaparte. He was keen to overturn the 1815 settlement and so needed British help, which led to his enthusiastic support of Britain during the complex diplomatic prelude to what would become known as the Crimean War.

The growing power of Russia worried the British more than the prospect of a revived France. The concern was closely linked with the decline of the Ottoman Empire, already causing international tension in the 1850s. No one, not even Russia, wished Turkey to collapse completely but all hoped to benefit if it did. When war broke out between Russia and Turkey in October 1853, British leaders feared that a victorious Russia would attempt to partition Turkey and annex Moldavia and Wallachia (in Romania). This state of affairs was unacceptable, so the British and French fleets were ordered to Constantinople.

Russia’s Tsar, Nicholas I, refused to believe the Anglo-French alliance could last. Moreover, he was confident of support from Austria and Prussia, though this soon proved to be misplaced. When, in November 1853, a Turkish naval force was destroyed by the Russians at Sinope, public opinion in Britain rapidly turned in favour of military intervention. The British and French fleets entered the Black Sea in January 1854, and a treaty of alliance was signed with Turkey. In March, Britain and France declared war on Russia.

Perhaps conscious of Napoleon Bonaparte’s disastrous invasion of Russia in 1812, the Allies were deterred from a direct attack on Moscow. Sebastopol in the Crimean Peninsula, however, was the headquarters of the Russian Black Sea Fleet and thus the most likely source of further Russian naval operations against Turkey. The British and French decision to invade the Crimea was not as foolish as is sometimes suggested.

Likewise, the myth that Britain’s forces performed badly is somewhat unfair and most likely originated with the contemporary newspaper reports from correspondents such as William Russell of The Times. The Crimean War was one of the first to be fully covered in the popular press. Poor leadership, inadequate supplies and missed opportunities made for good stories, but the reports ignored British successes. As an example, the fate of the Light Brigade and its disastrous charge against Russian guns stole the headlines and ignored the successful charge made earlier by the Heavy Brigade.

One British success was not purely military. The treatment of Britain’s wounded soldiers, initially a scandal, was soon re-organised, a feat no other country achieved. Nurses such as Florence Nightingale and Mary Seacole became national heroines and had an important effect on the aspirations of at least some middle-class women.

Florence Nightingale OM RRC DStJ is still the better known of these two pioneering women. She was an English social reformer, statistician, and the founder of modern nursing. Nightingale came to prominence while serving as a manager and trainer of nurses during the Crimean War, in which she organised care for wounded soldiers at Constantinople [1].

Crimean War  Nightingale arrived at Selimiye Barracks in Scutari (modern-day Üsküdar in Istanbul) early in November 1854. Her team found that wounded soldiers were being poorly cared for by overworked medical staff in the face of official indifference. Medicines were in short supply, hygiene was being neglected, and mass infections were common, many of them fatal.

During her first winter at Scutari, 4,077 soldiers died there. Ten times more died from illnesses such as typhus, typhoid, cholera, and dysentery than from battle wounds. Overcrowding, defective sewers and a lack of ventilation were so serious that in March 1855, some six months after Nightingale had arrived, the British government had to send the Sanitary Commission to Scutari. The commission flushed out the sewers and improved ventilation. Surprisingly, perhaps, Nightingale never claimed credit for the resulting sharply reduced death rate.

Back in Britain  Nightingale still believed that the death rates were due to poor nutrition, lack of supplies, stale air, and overworking of the soldiers. After she returned to Britain and began collecting evidence before the Royal Commission on the Health of the Army, she came to believe that most of the soldiers at the hospital had been killed by poor living conditions. This experience influenced her later career, when she advocated for sanitary living conditions as a matter of great importance. Consequently, her work reduced peacetime deaths in the army, and she turned her attention to the sanitary design of hospitals and the introduction of sanitation in working-class homes.



Mary Seacole, by contrast, was a British-Jamaican nurse and businesswoman who established the ‘British Hotel’ behind the lines during the Crimean War. Coming from a tradition of Jamaican and West African ‘doctresses’, Seacole relied on her skill and experience as a healer to provide succour for wounded servicemen on the battlefield and nurse many of them back to health.

Crimean War  Hoping to assist with nursing the wounded on the outbreak of the Crimean War, Seacole applied to the War Office to be included among the nursing contingent but was refused [3]. Seacole resolved to travel to Crimea using her own resources and to open the ‘British Hotel’. Business cards were printed and sent ahead to announce her intention to open an establishment near Balaclava, which would be ‘a mess-table and comfortable quarters for sick and convalescent officers’ [3].

Apart from serving officers at the British Hotel, Seacole also provided catering for spectators at the battles, and spent time as an observer on Cathcart's Hill, some 5.6 km (3½ miles) north of the British Hotel. As a sutler Seacole often visited the troops near the British camp at Kadikoi to sell them provisions. She is also said to have nursed casualties evacuated from the trenches around Sevastopol and from other battlefields. Through her activities she became popular among service personnel and was widely known to the British Army as ‘Mother Seacole’.

Rivals?  Some commentators say there was a frosty relationship between these two prominent figures in Crimea. How true this is remains uncertain, but we do know that Seacole's own memoir, ‘Wonderful Adventures of Mrs. Seacole in Many Lands’, records only one friendly meeting between the two women. When Seacole was in Scutari, en route to the Crimea to join her business partner and start their business, she asked Nightingale for, and got, a bed for the night. When, however, Seacole later tried to join Nightingale's team, one of Nightingale's colleagues rebuffed her, and Seacole implied in her memoir that racism was at the root of that rebuff [3].

From her letters we know that Nightingale did not approve of Seacole’s methods. Nightingale wrote to her brother-in-law that she was worried about contact between her work and Seacole's business, claiming that while ‘she was very kind to the men and, what is more, to the Officers – and did some good (she) made many drunk’ [4]. Moreover, Nightingale reputedly wrote: ‘I had the greatest difficulty in repelling Mrs Seacole's advances, and in preventing association between her and my nurses (absolutely out of the question!)...Anyone who employs Mrs Seacole will introduce much kindness - also much drunkenness and improper conduct.’

Back in Britain  When the Crimean War ended abruptly in 1856, Seacole was left with a lot of expensive supplies that she could not sell at a fair price and consequently she suffered a great financial loss [6]. After the war she returned to England destitute and in ill health. The press highlighted her plight and in July 1857 a benefit festival, attracting thousands of people, was organised to raise money for her [5]. Later that year, Seacole published her memoirs, 'The Wonderful Adventures of Mrs Seacole in Many Lands' [5].

Recognition  Well known at the end of her life, Seacole rapidly faded from public memory in Britain. After almost a century there has been a resurgence of interest in her and efforts to acknowledge her achievements, although whether she is an example of ‘hidden’ black history, as some may claim, is a moot point. Similarly, Seacole's recognition as a pioneer of nursing remains controversial, with many commentators, especially Nightingale supporters, arguing that Seacole's accomplishments were exaggerated.

Endnotes:

1. Strachey, L., (1918), Eminent Victorians, London: Chatto and Windus, p. 123: ‘Miss Nightingale arrived in Scutari - a suburb of Constantinople, on the Asiatic side of the Bosphorus - on November 4th, 1854; it was ten days after the battle of Balaclava, and the day before the battle of Inkerman.’

2. Cited in Cook, E. T., (1913), ‘The Life of Florence Nightingale’, Vol 1, p. 237.

3. Seacole, M., (1857), ‘Wonderful Adventures of Mrs. Seacole in Many Lands’, Chapter VIII, London: James Blackwood, pp. 73-81.

4. From a letter dated August 4th, 1870 held in archive of the Wellcome Institute (Ms 9004/59).

5. BBC (2014), ‘Mary Seacole (1805 - 1881)’, (accessed December 17th, 2021).

6. Seaton, H.J., (2002), ‘Another Florence Nightingale? The Rediscovery of Mary Seacole’, The Victorian Web, National University of Singapore, (accessed December 17th, 2021).

Monday, October 24, 2022

How To: Dress as an ancient Greek

This ‘How to:’ guide is a follow-up to a previous post aimed at readers wishing to recreate simple yet effective historical costume. The focus for this guide, however, is on the ancient Greeks and the typical clothing worn from the 5th century BC Classical period until the 1st century AD and Roman rule. Three garments were the basis of Classical Greek dress: the khiton (pronounced kite-n), the peplos, an overgarment worn by women, and the chlamys (pronounced klom-iss), a cloak. These three garments were draped and belted to create various styles. To this list has been added the himation, a form of dress similar to the more famous Roman toga.

First off are a few practical pointers for the modern maker:

Material  The only truly acceptable cloth should be made from the natural fibres of linen or wool. It is recognised that modern cloth sometimes contains a mixture of these and cotton. This is a tolerable compromise for those seeking to be as accurate as possible since the mix of fibres will not adversely affect the appearance or the draping qualities of the base material.

Construction  There is no reason why seams that are not immediately visible cannot be machine stitched. There are some people for whom this is anathema as it is ‘not historically accurate’. We would argue that careful use of machine stitching is merely a practical measure (we live in the 21st century and are not actually ancient Greeks), providing visible seams, such as those in collars, sleeves and hems, are hand sewn.

Fastenings  Garments that were not sewn together were typically fastened using long pins (fibulae), brooches, or buttons and toggles made of bone or wood.

Himation

The himation (ancient Greek: ἱμάτιον / hə-MAT-ee-un) is the ancient Greek equivalent to the Roman toga. In its simplest form it was a large rectangular piece of woollen cloth, approximately 4 m to 5 m in length and 1.2 m to 1.5 m wide, worn by ancient Greek men and women from the Archaic through the Hellenistic periods (c. 750 BC to 30 BC). It was typically worn over a man’s khiton or woman’s peplos (see below), draped about the wearer’s body from shoulder to ankle. As shown right, men sometimes wore the himation alone without a khiton underneath. In this manner it served both as a khiton and as a cloak and was called an ‘akhiton’. Many vase paintings depict women wearing a himation as a veil covering their faces.

Draping  It is unlikely that the himation can be simply ‘slipped’ on. Rather, it may take the assistance of one or two people. Yet evidence of how to put on a himation does not survive, and modern wearers may have to experiment to find the most effective way of doing so. The following guidance is offered to those assistants charged with dressing the himation wearer:

1.  The wearer stands erect with their arms extended laterally at shoulder height, i.e. in a cruciform stance. Other than holding a fold or slowly rotating when instructed, there is little else for the wearer to do.

2.  The cloth is prepared for donning by gathering the folds, which are then placed, from behind the wearer, over their left shoulder. The folds should be uppermost and hang down the wearer’s front, with the bottom edge reaching to between calf and ankle. The folds should be adjusted as required to drape properly. The wearer can assist by bending his left arm at the elbow and gripping the material in place.

3.  Keeping the folds together, drape the material across the wearer’s back, looping up under their right arm, across the chest (the wearer’s left hand must be out of the way) and once again over the left shoulder. Depending on the available space it may be advantageous to get the wearer to perform a slow quarter or half turn to the right.

4.  The remaining material should be draped along the length of the left arm to hang towards the left foot.

Khiton

The khiton (χιτών) is the base garment worn by both men and women. Essentially it is a rectangular piece of cloth folded laterally to form a tube with one side left open. The back corners were pinned to the front to form shoulder straps. Alternatively, the khiton could be sewn at the shoulders and sewn from the underarm to the hem to form the tube. The difference between the sexes is the overall length of the garment and where the hemline ends. For women, dresses are typically shown full length with the hemline at least to the ankle. By contrast, Greek men tended to wear their khiton quite short, above the knee at mid-thigh level, allowing more freedom of movement in exercise, manual labour and warfare. Some depictions show very short khiton barely covering the genitals [1].

Exomis

A variation on the khiton was the exomis worn, it seems, by men only (although some goddesses might be depicted in one). As with the khiton, it is a rectangular piece of cloth, approximately 2 m long and at least 1 m wide, worn with the hemline at mid-thigh or shorter. The material is folded in half laterally about the wearer’s body with the top of the fold beneath the right armpit and fastened at the left shoulder. The exomis is then belted and the material arranged to drape evenly.

Material  For most people, clothing was made predominantly of wool or linen. The wealthy could afford very finely woven cloth, with some examples being especially sheer or translucent. For our purposes, the basic garment is easily reproduced from a rectangular piece of cloth approximately 2 m long and between 2 m and 3 m wide.

Pattern  As in our previous post, we will focus on a pattern for a woman’s khiton as the men’s version is essentially a shortened form either with or without short sleeves. In its simplest form the Doric style khiton is a folded rectangle of cloth with the two halves fastened with multiple fibulae (pins) or buttons at the shoulders, or simply sewn together. Doric style dresses worn by ancient Greek women may well have been left open along the line B-C (refer to the diagram below), with the two halves of the dress belted in place. If you are feeling particularly risqué then you could follow the ancient example, but we would suggest, for modesty’s sake alone, that the seam along B-C is sewn.

An alternative is the Ionic style khiton, which was also a large piece of fabric folded laterally and then pinned at intervals along the arms and at the shoulders. Belted, it formed voluminous sleeves when carefully draped.


In the diagram above, the head hole is formed between D-E which, from experience, needs to be at least 25 cm to 30 cm (c. 10 to 12 inches) wide. When folded in half, and if you decide to sew the shoulders together between A-D and E-F, then the cloth must be cut at F-G to allow the right arm to pass through. If you prefer to simply pin the garment at the shoulders with brooches at D-D and E-E, then the cloth would fall on each side and cutting the F-G armhole would not be necessary [2].

In the pattern above the rectangle of cloth needs to be approximately 2 m long and at least 3 m wide (once folded it will be 1.5 m wide).
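For makers working from their own measurements rather than the generic 2 m x 3 m rectangle, the arithmetic above is easy to script. The following is a rough sketch only: the function name, the blousing allowance, and the rule of thumb that the unfolded width should be about twice the wearer's fingertip-to-fingertip span are our own working assumptions, not figures from any ancient source.

```python
def khiton_cloth_cm(shoulder_to_ankle_cm, arm_span_cm, blouse_allowance_cm=20):
    """Estimate the rectangle of cloth for a Doric khiton, in centimetres.

    Returns (length, unfolded_width). The length adds an allowance for
    blousing the excess over the belt; the unfolded width is twice the
    wearer's arm span, so the folded cloth spans fingertip to fingertip.
    """
    length = shoulder_to_ankle_cm + blouse_allowance_cm
    unfolded_width = 2 * arm_span_cm
    return length, unfolded_width

# Example: a wearer measuring 150 cm shoulder-to-ankle with a 160 cm arm span.
length, width = khiton_cloth_cm(150, 160)
print(length, width)  # prints: 170 320 - close to the ~2 m x 3 m quoted above
```

Remember that the unfolded width is halved once the cloth is folded laterally, matching the note above that a 3 m wide rectangle becomes 1.5 m when folded.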

Belts  Khiton should be belted at the waist. Excess material can be pulled up and bloused over the belt to achieve the desired length. Sometimes women’s dresses were belted twice, once at the waist and again at the hips, giving a double-bloused effect. Similarly, they are also depicted belted high under the breasts, or cross-belted over the chest and tied at the waist.

Peplos

While the Doric style khiton is perfectly acceptable attire for women, a peplos (Greek: ὁ πέπλος) is the more typical clothing for women in ancient Greece by about 500 BC during the late Archaic and Classical period. As with the khiton, the Doric peplos was a body-length (A-C) garment made from a rectangle of cloth folded about the wearer and open on one side of the body. In this case, however, the top edge was folded down about halfway to, or below, the waistline thereby forming an overfold called an apoptygma (pictured below). The folded top edge was pinned from back to front at the shoulders (D-D, E-E) and the garment gathered about the waist with a belt. The shorter, waist-length apoptygma might be belted beneath the material, while longer, below the waistline apoptygma are shown belted just below the bust. In either style the apoptygma provided the appearance of a second piece of clothing. The overfold should be arranged to drape evenly.


Hats and Cloaks

Ignoring helmets, ancient Greek men are often depicted wearing broad-brimmed, bell-crowned hats to protect against sun and rain. Called petasos (below left), they seem to have been popular with travellers and may have been made of straw, felt or leather. A simpler style of straw cap or tatulus (below middle), perhaps favoured by labourers, was also worn. There are far fewer depictions of women wearing hats, but this does not mean they were not worn. In the artist’s impression below right, the lady is shown with her outer garment, a himation, draped over her head, which is probably how most women went abroad outdoors. In ancient societies, particularly the Greeks and Romans, for a woman to be out in public without her head covered, or with long flowing, loose hair, was seen as a sign of impropriety - loose hair, loose woman. With her head dutifully covered she sports a small straw sun hat known as a tholia.

Instead of the himation or akhiton previously mentioned, when outdoors or travelling ancient Greek men wore a chlamys (right), a short hunting cloak. Once again it is basically a rectangle of, usually, woollen cloth that was draped over the left shoulder and pinned on the right. It could be worn over a khiton or alone, the latter being considered ‘manly’ as one endured the elements in a single garment.

If one was to take inspiration from the earlier Etruscans, then a square-cut or semi-circular form of poncho known as a tabenna was seemingly popular in the 7th to 5th centuries BC.

Footwear

Going barefoot was common, especially for children, but the ancient Greeks also wore simple leather shoes when outdoors. Carbatina, for example, featured soles and uppers cut from one piece of leather. Loops cut around the leather’s edges allowed laces to pass through and draw the uppers together about the foot.

Unsurprisingly there is a large variety of footwear depicted in ancient Greek art and sculpture ranging in styles from soleae, sandals held in place by a leather thong or tongue between the toes, to krepidea that enclosed more of the foot. Ankle and calf-height boots are also shown [3].

Endnotes:

1. We are all for ‘authenticity’ but in a school or at a public event this might not be a wise choice professionally and legally speaking.

2. In other words the arms pass through the gaps A-D and E-F.

3. If portraying an ancient Greek character avoid wearing Roman caligae. While these are widely available to buy online, they are the distinctive and instantly recognisable footwear of Roman soldiers and thus wholly inappropriate for the Classical Greek period.


Monday, October 17, 2022

About History: Spectacles

One of the most curious objects in the Royal Armouries collection is the ‘horned helmet’, a bizarre headpiece commissioned in AD 1511 by the Holy Roman Emperor Maximilian I as a gift for the young King Henry VIII. According to the Royal Armouries, the helmet would have been part of a full armour worn by the King for court pageants.

The decoration on the grotesque mask is etched with life-like facial details such as stubble on the chin and crow’s feet around the eyes. This extraordinary helmet is distinctive for the pair of ram’s horns, beautifully modelled in sheet iron, sprouting from the skull, and the pair of spectacles that heighten its strangeness.

A number of images of fools wearing or carrying spectacles of this kind exist. The spectacles themselves are of so-called ‘rivet’ type, an almost universal design which hinged in order that they might grip the bridge of the wearer’s nose; forerunners of pince-nez. Spectacles of this type are known in Europe from at least the middle of the 14th century.

As iconic as this helmet is, however, it got us thinking, as spectacle wearers, when were these optical devices invented?

Innovative invention  The classical Roman writer Seneca is said to have read all the books in Rome by using a glass globe of water to enlarge the handwritten letters. Strictly speaking he was using a form of magnifying glass, but anything held to the eye rather than worn on the face is categorised as an ‘eyeglass’, not spectacles. The innovative idea of wearing spectacles shaped in some way to sit on the face for long periods seems to have been a Mediæval European invention.

The wearing of spectacles to correct optical defects is so normal today that we barely think about it. The vast majority of people do not need corrective lenses until they reach somewhere around the age of forty, when it is quite normal for the crystalline lens of the eye to harden. This leads to presbyopia, or farsightedness, which the convex lenses in the first spectacles were intended to counter. The idea may have developed from ‘reading stones’ made from segments of glass spheres and used by presbyopic monks to read manuscripts by holding the glass against the letters (cf. Seneca’s glass globe).

Convex spectacles seem to have evolved by chance, not through optical theory, even though medieval Europe had acquired some scientific knowledge of optics from Islamic scholars. The Muslim mathematician and natural philosopher Ibn al-Haitham (c. AD 965 - AD 1039), called Alhazen by Europeans, wrote about the properties of lenses in a work translated from Arabic into Latin in AD 1266. A year later, the English monk and scientist Roger Bacon (c. AD 1214 - AD 1294) wrote about his experiments in using convex lenses to correct vision, advocating their use to help old people. It seems the first convex-lensed spectacles were invented around AD 1285, and the first reference to them is contained within a manuscript written about the Popozo family from Tuscany, Italy, dated to AD 1289.

Whether spectacles were invented in Pisa or Florence is uncertain, although for centuries patriotic historians of both Italian cities have reputedly altered manuscripts and invented evidence to claim the prestige of the invention for their city. Regardless, Venice became an early centre for the mass production of lenses. Having already produced better ‘reading stones’ than anywhere else in Europe, the skilled glass blowers of the Venetian suburb of Murano went further to produce thicker and clearer glass that proved superior for grinding high-quality lenses. Indeed, in AD 1301 the Venetian crystal workers' guild created the first regulations for producing ‘glass discs for the eyes’, and by around AD 1320 a guild of spectacle-makers had been established in Venice.

Medieval spectacles were riveted at the centre and had leather grips to hold on to the bridge of the nose. Some contemporary paintings (see right) show readers holding spectacles on the face by hand, and some frames were made of leather to reduce their weight. By the AD 1360s the early Renaissance writer Petrarch could refer to spectacles for the elderly as if they were commonplace in Florence, and in paintings of this period and of the 15th century they are often included in portraits of saints and scholars to signify piety and learning. By the late 15th century their use had spread so far outside the elite that artists increasingly used them to signify folly or senility.

Eventually, between AD 1725 and AD 1730, Edward Scarlett of London invented the sidepieces, or temples, by which nearly all spectacles are worn today.

Reference:

‘The Invention of Spectacles’, Encyclopedia.com, Available online (accessed August 26th, 2022).