Breakfast, Lunch, & Dinner (Supper, & Tea)

The names and details of our daily meals are relatively recent creations.

Breakfast

The clue being in the name, breakfast is the first meal of the day, the meal where you “break your fast” (the fast of not eating overnight in your sleep). That said, this first meal of the day wasn’t always first thing in the morning like it is today. Up through the early Middle Ages people would rise and go without eating until after they had worked for several hours.

Further complicating things, this late-morning first meal of the day, before being called breakfast, was called dinner. From the Old French “disner,” meaning “to break one’s fast,” the first meal of the day only became “breakfast” in the 15th century. This early meal would be bread, maybe some cheese, and some alcohol (alcohol being safer to drink than water).

Dinner and Supper

As breakfast became breakfast, dinner moved from the 1st time slot to the 2nd. You would eat a small meal upon waking (breakfast), eat a large meal in the late morning to give you energy for the rest of your work (dinner), and then a small meal in the evening. The small meal at the end of the day was supper, from the French “souper”. This was typically a soup that you supped, a soup that was slow cooked throughout the day to be ready in the evening.

But dinner wasn’t done moving and shifted again, from the 2nd time slot to the 3rd, replacing supper as the last meal of the day. This change wasn’t all at once. Dinner’s shift in time slot was due to several reasons, not least of which was the changing nature of how people worked. When people worked out of their homes, or farmed the fields near their homes, it was easier to prepare & eat a large meal in the middle of the day. Through the Industrial Revolution work moved to factories & offices and it became impractical to have a large meal in the middle of the day. As such dinner continued to be the biggest meal of the day, but it moved to the end of the day when people returned from work.

That said, while “dinner” is the term most people use for the big meal at the end of the day, some people (particularly those of agricultural backgrounds) still call this meal supper. Generally speaking, though, “dinner” and “supper” are seen as synonymous terms for the same meal. As such the Last Supper could have been the Last Dinner.

Lunch

With breakfast at the start of the day, and dinner now the last meal of the day, this left a time slot open in the middle of the day. Lunch is essentially what you get if dinner and supper switched places and supper changed its name. Starting in the 18th century lunch became a small midday meal, increasing in popularity as more and more people had their dinner at the end of the day.

Tea Time

So what is tea / tea time? After the Portuguese Catherine of Braganza introduced tea to England in the 17th century, it eventually became a staple of British life. Tea as a meal took two forms: Afternoon Tea and High Tea. Confusingly, afternoon tea is the classier of the two.

Tea was originally had after the large midday meal of dinner, as tea was believed to assist digestion. As dinner moved to the end of the day, tea time was created as a way to hold people over between lunch and dinner while still having tea after midday. Afternoon tea, as the name suggests, was served in the afternoon. It was a light meal of tea served with cucumber sandwiches, scones, cakes, and other elegant snack foods – the tea time of the upper class (because who else had the time to break for fancy foods in the midafternoon?). High tea, on the other hand, was the meal of the working class. Working people couldn’t take a break midafternoon, so they had their tea with heartier snacks after they came home in the evening but before their supper (or dinner).

As dinner replaced supper as the final meal of the day, some people in Britain and other Commonwealth countries merged dinner and high tea, calling this meal “tea”.

Mischief Night

The night before Halloween is a night of mischief & destruction (in a select few cities).

For most people, the night before Halloween (which itself is the night before All Saints’ Day) is unremarkable – there’s nothing special about October 30th. For a select few areas, however, the night of October 30th is a night for pranks & vandalism. October 30th is Mischief Night.

English origins

Many cultures have recognized, if not fully sanctioned, annual traditions for mischief. The Alpine tradition of Krampus, for example, brings some chaos and mischief to the Christmas season. Since at least the 18th century England, and in particular the northern areas of Yorkshire and Lancashire, has had Mischief Night. Traditionally this was a night for young teens to engage in low-grade pranks on the night of April 30th, before May 1st (the first day of summer). A particularly popular prank was to remove the gates from farms, releasing animals from their pens and creating havoc as the animals wandered about farms and towns.

Over the centuries that mischief moved from the spring to the fall, ending up on November 4th, the night before Guy Fawkes Night. As Ireland’s Halloween and England’s Guy Fawkes Night emigrated to America, Mischief Night went along for the ride.

Philly & Detroit

In 2013 the New York Times released a dialect quiz, a survey of what people from around the United States call things and how they pronounce them. One question was “What do you call the night before Halloween?” Overwhelmingly, most of the country replied “I have no word for this” – except for two areas: Philadelphia and Detroit.

Most of America thinks nothing of October 30th, but the Philadelphia and Detroit areas carry on the old English tradition for mischief.

The Philadelphia area, extending into New Jersey, calls October 30th Mischief Night. The Detroit area calls this night Devil’s Night. Just like their English predecessor, these nights are times for pranks & vandalism. From the mild to the wild, the mischief can include ringing doorbells, taking things from porches, egging houses, smashing pumpkins, throwing toilet paper into trees, and more. In Detroit this night has been known particularly for arson. In the 1980s, as the city contracted in size, people would set fire to abandoned homes. In 1984 there were more than 800 fires reported across the city, the most of any Devil’s Night. Arson has also sometimes been a feature of Mischief Night, such as in 1991 in Camden, NJ, when 133 fires were reported.

One of the curious things is how localized these traditions are. Mischief Night and Devil’s Night exist in their geographic areas and have never spread much further, despite existing since at least the mid-19th century. One theory for Mischief Night’s limited reach is the provincial nature of the Philadelphia area. Other major American cities have more of an even mix of newcomers vs. born-and-raised residents, which means customs spread. Most people in the Philadelphia area, however, were born around Philly and have never moved away – 68% of Philadelphians were born in Pennsylvania. As such the cultural traditions, food, sports loyalties, accent, and other jawn stay near the city and never leave the region. In this way the mischief of Mischief Night has remained contained to the Philadelphia area – to the relief of people everywhere.

Added info: Philadelphia is at the center of another divisive Halloween tradition: candy corn. As the story goes, candy corn was invented by George Renninger at the Wunderle Candy Company in Philadelphia around 1898. The authenticity of these exact details, however, is questionable and they are considered “oral history.” The tri-colored candy is tasty to some, but to others the honey vanilla flavor and waxy texture are off-putting.

The National Confectioners Association has created a “National Candy Corn Day”, the date of which is October 30th, Mischief Night.

Sphinxes

The mythical sphinx spans thousands of years around the ancient world. Also, technically, the Great Sphinx of Giza isn’t a “sphinx”.

The sphinx is a human-animal hybrid chimera (except not a literal chimera). At its most basic it is part human, part lion, with other design options available depending on the culture.

Egypt, the protector sphinx

The first human-lion hybrids come from Egypt. While most Egyptian human-animal hybrids are animal heads on human bodies, the sphinx is the other way around. To borrow from Spinal Tap, “No one knows who they were or what they were doing” – no one knows what these creatures were called in Egyptian culture, nor is anyone exactly sure what they were meant to do. It’s thought they were created as protectors, defending royal tombs, but nobody is certain. They were frequently carved with the face of whichever pharaoh’s tomb they were beside, and as such most Egyptian sphinxes are male.

Egyptian sphinxes are generally male and thought to be protectors of royal tombs but nobody is certain.

As for the largest, oldest, and most famous sphinx of them all, while it was built somewhere between 2600 BCE and 2500 BCE, no one is exactly sure who built the Great Sphinx of Giza or why. It is thought to have been commissioned by (and is thought to have the face of) the pharaoh Khafre. It’s positioned facing east near the Great Pyramid of Khufu (the tomb of Khafre’s father). Khafre also built himself a pyramid catty-corner to his father’s, just 10 feet shorter.

The Great Sphinx of Giza is the largest, oldest, and most famous sphinx. He used to have a nose and a beard and was possibly painted, but all three features have been lost over time.

It’s hard to appreciate just how old the Great Sphinx is (and how long sphinxes have been a part of Egyptian culture). The pyramid complex had been built and subsequently abandoned so long ago that the Sphinx was buried in sand up to its shoulders by the time the first excavation attempt took place in 1400 BCE. That means the first excavation was around 1,000 years after the Sphinx was built – and even that was still around 3,400 years ago. Trying to rescue the Great Sphinx from the desert sands has been going on for thousands of years.

The Greek sphinx is one particular sphinx. She is famous for her riddle and her role in the story of Oedipus.

Greece, the monster sphinx

Sphinxes spread counterclockwise around the Mediterranean, from Egypt to the Middle East, to Mesopotamia, and into Greece around 1600 BCE – the visual design and meaning changing along the way. In Greek mythology there was a single sphinx (not numerous sphinxes like in Egypt) who was also a human-lion hybrid, but she was female and had wings.

The Greek sphinx comes to us through the story of Oedipus. This sphinx is more of a monster than her Egyptian counterparts (she is in line with other Greek female monsters, like the gorgons). As Oedipus is traveling to Thebes he encounters the sphinx. The city of Thebes is at her mercy as she offers a challenge to all who want to enter the city: she will grant safe passage if you can successfully answer a riddle. If you fail, she kills you. Oedipus correctly solves the riddle and the sphinx (dramatically) kills herself … and this isn’t even the craziest part of the Oedipus story (paging Dr. Freud).

The word “sphinx” was both the specific name of the sole Greek sphinx as well as a general term the Greeks used for these kinds of creatures (like what we do today). That said, the word “sphinx” is of Greek origin, so technically the creatures outside of Greece aren’t “sphinxes”. While the Greeks may have called the Egyptian creatures “sphinxes,” the Egyptians did not. The word “sphinx” didn’t even exist until over 2,000 years after the Great Sphinx of Giza was built, so, again, what the Egyptians called these creatures remains unknown.

The Greek sphinx also influenced South and Southeast Asian cultures, where sphinxes are seen as holy guardians at temples and other religious sites. In these places the sphinxes are meant to ward off evil and cleanse the sins of religious devotees.

Sphinxes have appeared in art around the world over the centuries, but especially during the 19th and early 20th centuries.

Egyptomania

Sphinxes (both the male Egyptian kind and the winged female Greek kind) made appearances in European art from the 15th century onward, but their greatest surge in popularity was during the 19th century Egyptology and Egyptomania craze. After Napoleon’s campaign in Egypt from 1798 to 1801, the French brought treasures back to France, which led to an interest in all things ancient Egypt. Bits of this can still be found in Egyptian Revival architecture, which features pyramids, sphinxes, and other Egyptian motifs.

Also, on the topic of the French in Egypt, Napoleon’s troops did not shoot off the Great Sphinx of Giza’s nose. One story is that around 1378 CE a Sufi Muslim named Muhammad Sa’im al-Dahr destroyed the nose in an attempt to stop a cult that was making religious offerings to the Great Sphinx. Muhammad Sa’im al-Dahr was supposedly executed for defacing the Great Sphinx. The Great Sphinx also had a beard, but it most likely fell off from erosion after sitting in the desert for thousands of years.

Added info: Egyptian culture had yet another resurgence in western popularity with the 1922 discovery of King Tutankhamun’s tomb. Two years later in 1924 H.P. Lovecraft was the ghostwriter of Harry Houdini’s Under the Pyramids, an adventurous tale of Houdini’s kidnapping and imprisonment under the pyramids. The Great Sphinx plays a pivotal role in this supposedly true tale.

Also, the hairless Sphynx cat breed is not from Egypt, but rather is from Toronto, Canada.

Trajan: the man, the column, the typeface

The Roman emperor Trajan’s military victories led to a triumphal column in his honor. The typography of the column led to a font also named in his honor.

Born in 53 CE in what is now the province of Seville, Spain, Trajan was the second Roman emperor of the Nerva–Antonine dynasty (which produced the “Five Good Emperors” – including himself). His experience as a Roman general, senator, and governor of Upper Germany helped make him Emperor Nerva’s choice as successor.

During his 19-year reign Trajan expanded the Roman Empire to its greatest size to date. As part of this expansion he took the kingdom of Dacia (roughly modern-day Romania). One motivation for the conquest was that the Dacian kingdom, unlike the neighboring Germanic tribes, was organized enough to make alliances with other nations, making it a threat to the Romans. Another motivation was money. After the conquest the Romans took control of the gold and salt mines of Dacia, using the proceeds to pay for public works projects back in Rome.

To celebrate this lucrative victory over Dacia the Roman Senate had a column constructed in Trajan’s honor, which leads to …

Trajan’s victory over the Dacians was commemorated / propagandized with Trajan’s Column.

Trajan’s Column

Completed during Trajan’s lifetime in 113 CE, Trajan’s Column is a 98-foot-tall marble column that commemorates / propagandizes Rome’s victory in the Dacian Wars. With an estimated total weight of over 1,000 tons it’s an impressive feat of artistry and engineering. As it spirals upwards it features 2,662 figures (Trajan appearing 58 times among them) and 155 scenes in relief that tell the story of the conquest. National Geographic has an interactive graphic that does an incredible job guiding you up the column, but plaster cast recreations of the relief exist in several museums around the world as well.

The column is also a tower – there is a circular staircase inside that takes you to the top. The top of the column used to (logically) have a statue of Trajan, but the statue went missing sometime in the Middle Ages and today St. Peter stands atop the tower.

The column / tower is also a tomb. After Trajan died in 117 CE his ashes were buried in a chamber at the base of the column. The ashes of Trajan’s wife Plotina were added a few years later. On the exterior of the base, above the doorway to the burial chamber, is an inscription to Trajan. More interesting than what the inscription says is how it says it. The beautiful letterforms of the inscription became inspiration for lettering artists and designers, which leads to …

The letterforms on Trajan’s Column inspired the font Trajan.

Trajan the Typeface

Trajan the typeface was created in 1989 by Carol Twombly for Adobe. She used the very old lettering on Trajan’s Column as inspiration for a very new typeface. The letterforms found on Trajan’s Column are known as Roman square capitals, which are the basis for our uppercase letters. Roman square capital letters were used primarily for engravings and can be found around ancient Roman sites (the Pantheon, the Arch of Titus, etc.).

Starting in the early 1990s, Trajan has appeared on many, many movie posters.

From its debut in 1989 Trajan quickly became a very popular typeface, particularly for movies. Its first movie poster appearance was 1991’s At Play in the Fields of the Lord. In the early ’90s it was the typeface for dramatic films, but it soon spread across genres. Eventually the movie poster/packaging market was so saturated with Trajan that more serious films began to use other typefaces, and Trajan shifted to only really appearing in horror movies, B-movies, and straight-to-video movies. Trajan’s elegant letterforms were being employed to add gravitas to movies that might not be so great.

In less than a decade (less time than Trajan the man ruled the Roman Empire) Trajan the typeface rose and fell in popularity. You still see it from time to time – some new movies use Trajan, some politicians use it much like politicians did a few thousand years ago – but Trajan no longer rules like it once did in the 90s or the 1990s.

A tour to the top of Trajan’s Column.

Learn more about Trajan’s rise & fall of being the serious movie typeface.

the Molly Maguires

The American Irish secret society for labor rights that might not have actually existed.

Late 19th century Pennsylvania was a hotbed of coal mining. The northeastern part of the state was home to dozens of mines extracting anthracite coal (the region is still home to the largest concentrated anthracite deposit in the world). Anthracite coal had become the fuel of choice to heat homes as well as power the American Industrial Revolution. Extracting tens of millions of tons of coal was a dangerous, dirty, and demanding job which fell upon poor immigrants from Europe.

Among these recent immigrants were the Irish. Emigrating in large part to escape the famine of the mid-19th century, many Irish settled in Pennsylvania. Trading one disaster for another, by the 1870s the Irish in the coal mines were risking their lives for poverty wages. The remote locations of many of these mines meant workers and their families were tethered to the mines and were rarely in a position to break the cycle of poverty. Children began their mining careers separating slate from coal as “breaker boys”, eventually graduating to working down in the mines.

Mining coal was an incredibly dangerous job. Between the long-term health problems and the short-term dangers of fires and cave-ins, the risks of the mines were constant – and all endured for poverty wages.

Deaths were common. In September of 1869 a fire at the Avondale Mine killed 110 coal miners. In Schuylkill County 566 miners were killed over a seven-year period. The situation has been summarized as, “Wages were low, working conditions were atrocious, and deaths and serious injuries numbered in the hundreds each year.” The English & Welsh immigrants at the mines tended to be given management positions, which frequently meant that the power struggles between the poor Irish and the English & Welsh landowners back in Ireland were played out again in the coal mines of Pennsylvania.

the Molly Maguires

Amid these conditions some of the Irish miners decided to fight back. One method was through organized labor strikes, but more famously they retaliated through violence. Between 1862 and 1875 fourteen mining officials were murdered. This gave rise to the conspiracy theory that within the regional Ancient Order of Hibernians (AOH) there was a militant group of Irishmen responsible for these organized killings. This group was called the Molly Maguires.

The Molly Maguires had been a secret society in Ireland who used threats & violence to settle disputes. Pennsylvania journalist Benjamin Bannan was the first to use the name Molly Maguires in association with the violence of the area. As the murders began, the name Molly Maguires only became more infamous. That said, while the violence in coal country was real, the Molly Maguires may not have been – there is no evidence that the Molly Maguires in America ever existed. Bannan pinned the violence on the Molly Maguires but he had no proof.


the Day of the Rope

Franklin Gowen, president of the Reading Railroad (which also owned dozens of mines), hired the Pinkerton Agency to infiltrate the Irish miners. The Pinkertons sent agent James McParland to go undercover and find any evidence of murder plots or other crimes. McParland worked undercover for two and a half years, and his information was the primary evidence in the subsequent 1876–1877 trials of nearly 50 alleged Mollies. In a racist miscarriage of justice, the state of Pennsylvania turned over the investigation & prosecution of the alleged Irish criminals to the Reading Railroad company. Company president Gowen even served as one of the prosecutors. No Irish Catholics were chosen as jurors; most jurors were Pennsylvania Dutch, some of whom didn’t speak English. There was very little evidence, and the prosecution’s cases largely hinged on establishing that the defendants were members of the AOH and on the conspiracy theory that the AOH was a front for the unproven Molly Maguires.

By 1879, 20 alleged Molly Maguires had been found guilty and executed (all of whom were Irish Catholics) while 23 more had been sent to prison. Following these executions all supposed Molly Maguire activity ended, not because all of the guilty parties had been executed, but more likely from fear of retaliation by the mining companies.

Added info: of the executed, one particularly curious case is that of Alex Campbell. As legend has it, before being taken to the gallows he protested his innocence and placed his hand on the wall of his prison cell, stating “There is proof of my words. That mark of mine will never be wiped out. It will remain forever to shame the county for hanging an innocent man.” The cell has been cleaned and painted many times, but the handprint can still be seen at the Carbon County Jail in present-day Jim Thorpe, PA.

Reading Railroad president and Molly Maguire prosecutor Franklin Gowen later died by gunshot to the head in 1889. Conspiracy theorists question if it was suicide or retaliation by the Molly Maguires.

One final item: the 1915 Sherlock Holmes story The Valley of Fear was inspired by the Molly Maguires. The story’s second half is about a mining town with a largely Irish fraternal organization which has a secret second organization fighting against the mine bosses on behalf of union laborers. The story even has a Pinkerton agent infiltrating “the Scowrers.”

In Search of History made a great documentary on the Molly Maguires.

The Molly Maguires have become a part of Irish American culture & history thanks in part to both The Dubliners song and the 1970 movie of the same name.

The Mummy’s Curse

The idea that Egyptian tombs are cursed as a means of protection is largely a 20th century creation.

Egyptians began mummifying their dead around 3500 BCE. In all the years of archaeological exploration of Egyptian tombs, very few written or inscribed “curses” have ever been found. Those that have, however, could be thought of as early security systems – attempts to protect the contents of the tomb from grave robbers (both amateur and archaeological). Unfortunately, given centuries of rampant looting of Egyptian graves, it’s safe to say the curses didn’t work. Despite so few curses having ever been found, our modern pop culture is firmly gripped by the undead idea of cursed tombs with mummified Egyptians exacting their revenge from beyond the grave.

The primary reason we think of cursed Egyptian tombs is the 1922 excavation of the tomb of Tutankhamun. George Herbert, 5th Earl of Carnarvon, had financed the search for King Tut’s tomb, which was run by archaeologist Howard Carter. As Tutankhamun’s tomb was discovered, and the magnitude of the discovery was realized, the Egyptian government ensured that all artifacts would stay in Egypt. Unable to sell any of the treasures (to cover his costs … or to make a profit), Lord Carnarvon sold the exclusive rights to the excavation story to the London Times for £5,000 up front as well as 75% of the Times’ profits from sales of the story to other papers. This left every other news outlet high & dry for a story. Enter: the mummy’s curse.

Howard Carter’s 1922 discovery of Tutankhamun’s tomb changed the world (and how we think of Egyptian mummies).

From Beyond

Without access to the largest archaeological find of the age, all other news organizations were left scrambling for another angle. Less than six months after the discovery, on April 5th, 1923, Lord Carnarvon died and the press had their angle. The media began to report on a supposed Egyptian curse that had killed Carnarvon for opening King Tut’s tomb (despite no such curse being written anywhere in the tomb).

Paranormal “experts” crawled out of the woodwork to substantiate the idea of a curse. Archaeologists (especially the ones who were excluded from the tomb) were willing to discuss potential curses, which allowed them to profit from the find. Rumors and claims spread & grew like wildfire. Even Howard Carter let the reports of a curse continue, never publicly denying them, because (like a Scooby-Doo episode) they had the effect of scaring people away from the tomb, allowing him to continue working on the excavation for the next decade in relative peace.

The association of mummies with curses proliferated across pop culture after the discovery of King Tut’s tomb.

I want my mummy

The western fascination with Egypt, and the orientalized idea that it was an exotic land of magic, has existed since at least the Middle Ages. Using ground-up mummies as medicine, or turning them into paint pigment, had long been practiced by Europeans. The 19th century Egyptomania craze popularized Egypt as a setting for fantastical stories of mummies and tombs. The first story featuring a reanimated mummy (a trope most later mummy stories would follow) was 1827’s The Mummy! A Tale of the Twenty-Second Century by Jane Webb. Bram Stoker’s 1903 horror novel The Jewel of Seven Stars also features Egyptian magic and resurrection. However the discovery of King Tut’s tomb did more to popularize the idea of a mummy’s curse than anything else.

In 1932, not long after the 1922 discovery of King Tut’s tomb, the film The Mummy was released, featuring Boris Karloff as a resurrected ancient mummy. This was the beginning of many, many mummy movies (including 1944’s The Mummy’s Curse and 1957’s Pharaoh’s Curse, both of which have a curse right in the title). Each telling of a mummy story wanted to be better or more fantastical than the previous one, so the idea of a mummy’s curse grew. Today the idea of resurrected mummies & curses is a standard part of the horror genre.

Added info: As for any idea that Lord Carnarvon’s death might be attributed to some kind of curse, there is no evidence to support this. While a few people associated with the excavation of the tomb of Tutankhamun died not long after its discovery, nobody died at an unusually young age. An epidemiological study of the people who entered the tomb found that these individuals died on average around 70 years old, which was normal for the early 20th century.

Also, Lord Carnarvon’s home was Highclere Castle, which today is the setting of Downton Abbey.

New England Vampires and Tuberculosis

The effects of tuberculosis led some 19th century New Englanders to believe that vampires were preying on the living.

In the late 18th and much of the 19th century there was a vampire panic in New England. People across New England feared that vampire-like creatures, using some kind of sympathetic magic, were slowly killing their friends & family from inside the grave (as opposed to traditional vampires, who rise from the grave to attack). People would exhume their family members, look for the one who might be a vampire, and take various precautions to stop them. New Englanders might remove & burn the heart of a suspected vampire, turn the skeleton facedown, remove the head, put a brick in the corpse’s mouth, or pin their relative to the ground with a wooden stake, among other methods.

This panic was more than just a few isolated incidents. Henry David Thoreau mentions attending an exhumation in his journal on September 26, 1859. In February of 1793 over 500 people attended the ceremonial burning of the heart, liver, and lungs of supposed vampire Rachel Harris in Manchester, Vermont. After Nancy Young died in 1827 in Rhode Island, her father thought that she might be preying on her still-living little sister Almira. The family exhumed Nancy’s coffin, burned it on a pyre, and stood in the smoke to breathe in the vapors, thinking it would free/cure them of this affliction – it did not work, and Almira and two more of her siblings later died. Ingesting the cremated remains of a suspected vampire, or breathing in the smoke of the cremation pyre, were not uncommon last-resort treatments after traditional medicine had failed.

The 1892 exhuming of suspected vampire Mercy Brown in Exeter, Rhode Island became an international story – Bram Stoker based part of the Lucy character in Dracula on Mercy Brown. With 18 confirmed vampire cases, Rhode Island even became known as the “Vampire Capital of America.” The reason all of this happened was twofold: tuberculosis and decomposition.

The story of Mercy Brown influenced Bram Stoker’s Dracula.

Wasting away

Tuberculosis is an airborne disease that attacks the lungs (among other areas). Left untreated, active tuberculosis kills about half of those infected, and in 2018 it was the ninth leading cause of death worldwide (killing more people than malaria or HIV/AIDS). In 19th century New England tuberculosis was the leading cause of death, killing an estimated 25% of the population.

Tuberculosis can develop over months or even years, slowly eating away at someone. A person with active TB develops a chronic cough as their lung tissue breaks down, their mucus starts to contain blood, and they develop fevers, night sweats, and weight loss. Because of the weight loss, the disease has historically been known as “consumption.” As the infected person wastes away they also develop ashen skin, giving them an overall sickly, drained appearance.

Vampires, or, a lack of scientific understanding

The effect of tuberculosis (the slow draining of life), combined with some of the infected saying their deceased relatives were visiting them (as Almira Young claimed), was enough for some New Englanders to suspect there were vampires at work. Bodies of suspected vampires were exhumed to look for signs of vampirism. Some of the corpses seemed to have grown longer fingernails and longer hair, some were bloated, some had blood in their organs, while others seemed to have not decayed at all. These were surefire signs of a vampire … or were just normal aspects of body decomposition.

As bodies decay they become dehydrated, causing the skin to recede and shrink. This gives the illusion of longer fingernails & hair, as the base of the nails and hair that was once under the skin is now exposed. The bodies that seemed to have not decayed at all were those of people who died in the cold winters of New England (as was the case with Mercy Brown, who had died in January), where the cold slows the decomposition process. These unremarkable signs of decomposition were mistaken as proof of life after death by the untrained eyes of 19th century New Englanders.

The dawn of a new era

The Mercy Brown story brought unwanted attention to New England. It was embarrassing that, while the light bulb was being invented and Henry Ford was building his first car, people were worried about folkloric undead monsters. The vampire panic rose and fell with the tuberculosis epidemic of New England. Over time, with advancements in science and the dissemination of knowledge, belief in vampires faded away.

Added info: porphyria is another disease whose symptoms can resemble vampirism. It’s a liver disease that, for some, can cause sensitivity to sunlight (leading some to only come out at night) as well as sensitivity to garlic.

“Ask a Mortician” goes through the history of the New England vampire panic and the realities of tuberculosis in 19th century New England.

A crash course on tuberculosis.

“Back in my day …”

The idea that “… the kids of today aren’t as good as when I was a kid …”, has been around for thousands of years.

Generation Y, more commonly referred to as “Millennials”, are people born between 1981 and 1996 (though these years vary). Hot-take think pieces and “news” stories like to malign millennials as lazy, entitled, and self-obsessed. The general narrative is that this younger generation is not as disciplined as the hard-working older generations. This is frequently accompanied by a “things were better when I was younger” mindset. While millennials have been recent targets, this kind of criticism is nothing new.

From Hesiod to Baby Boomers

Adults have been complaining about the up-and-coming younger generation for as long as there have been people. One of the earliest examples is by the classical Greek writer Hesiod who, around the 8th century BCE, wrote “I see no hope for the future of our people if they are dependent on the frivolous youth of today, for certainly all youth are reckless beyond words.” A few centuries later Aristotle echoed this idea when he said of younger people, “They think they know everything, and are always quite sure about it.”

The song remains the same

This kind of thinking is reductive and condescending – it says more about the out of touch nature of the people doing the criticizing than the younger generation being criticized. Despite thousands of years of older people complaining about younger people, civilization has somehow managed to evolve & progress.

People don’t change that much from generation to generation and no generation is a cultural monolith. Every generation has hard workers, selfless givers, narcissists, the lazy, the good, the bad, and everything in between. Shakespeare continues to be relevant because the fundamental human condition has changed very little over the centuries.

While the generations change, the way of thinking has not. The Time magazine story about Millennials was so ridiculed it became a meme.

The kids are alright

Myths that millennials eat avocado toast all the time, that they fail to save for retirement, that they’re lazy, that they’re all socialists, etc. have all been debunked. After criticizing and blaming millennials for a variety of society’s problems, baby boomers seemed surprised and insulted by the “audacious”, terse, and somewhat snarky millennial reply of “OK boomer”. Meanwhile these same baby boomers seem to have forgotten that they were once the subject of the very same kinds of insults from the generations older than them.

As for narcissism, younger people of every generation tend to be more narcissistic but become less so as they age – the older people who are currently less narcissistic didn’t start out that way. Our values also change as we age. Despite being on the receiving end of this criticism the younger people of today will become the older people of tomorrow and will inevitably forget what they were like when they were young. They’ll judge younger generations by their present mindsets and not by the attitudes they held back when they were that age. The more things change, the more things stay the same.

Added info: while the sentiment is correct, there is a popular quote that makes the rounds on the internet: “The children now love luxury. They have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise.” It’s frequently attributed to Socrates, or sometimes Plato, but it was actually written by Kenneth John Freeman in 1907.

Fiji Mermaid

The taxidermy oddity that attracted thousands of people to P.T. Barnum’s American Museum.

In 1841 P.T. Barnum opened his American Museum in New York City. For 31 years the museum had been Scudder’s American Museum, which was part science museum, part zoo, part history museum, and part collection of oddities. After Barnum bought it he took these ideas and amped them up to create one of the most popular attractions in America. With around 500,000 items in the collection the museum was both educational and entertaining – it was history and spectacle. Over its 24-year run Barnum’s American Museum had 38 million customers at a time when the population of the US was only around 32 million.

Being a P.T. Barnum enterprise, marketing was a critical tool to its success. He transformed the facade of the building into a giant billboard for the museum itself. He had posters advertising (and exaggerating) the attractions inside. One of the first attractions he marketed, using most of the front of the building to do so, was the Fiji mermaid.

Barnum’s American Museum was one of the most popular attractions in America at the time.

The Little Mermaid

The Fiji mermaid was brought to America in 1842 by Dr. J. Griffin of the British Lyceum of Natural History. It was the mummified remains of a mermaid from the Fiji islands in the South Pacific. Barnum generated interest in the mermaid by sending anonymous letters to various newspapers talking about it. He even cooked up a story that he was trying to convince Dr. Griffin to exhibit the mermaid and that Griffin was reluctant. It was a sensation before it was ever even exhibited to the public.

Barnum negotiated to display the mermaid for one week, but it proved to be so popular that it went on the road, touring southern states. Dr. Griffin gave lectures about mermaids and cited the ancient Greek idea that everything on land had a counterpart in the sea. At a time when new species were being discovered in the remote areas of the world, perhaps a mermaid had finally been found.

Eventually the Fiji mermaid split its time between Barnum’s American Museum and the Boston Museum. Its ultimate fate is unknown, but it was most likely destroyed in either the fire that consumed Barnum’s museum in 1865 or the fire that consumed the Boston Museum in 1880.

The Fiji mermaid has become one of Barnum’s most famous humbugs (i.e. hoaxes). It looked nothing like the beautiful mermaids in the advertisements.

A sucker born every minute

In truth, the “mermaid” was Barnum’s first hoax at his American Museum (his very first hoax overall was when he exhibited Joice Heth, a woman he bought, and claimed she had been George Washington’s former nurse … which she hadn’t been). At about 3 feet long, the mermaid was the taxidermy combination of a monkey torso and the tail of a fish (most likely a salmon). Far from being the beautiful humanoid mermaid seen in Barnum’s advertisements, it was a ghastly animal mashup. The Charleston Courier wrote that “… the Feejee lady is the very incarnation of ugliness.”

Instead of originating in the Fiji islands, the mermaid was actually one of many created by Japanese fishermen. This particular mermaid was bought by the American sea captain Samuel Edes in 1822, whose son sold it to Moses Kimball of Boston in 1842. Kimball then leased the mermaid to Barnum for his museum. As for Dr. J. Griffin, he was actually Barnum’s associate Levi Lyman, who was in on the ruse from the very beginning, pretending to vouch for the mermaid’s authenticity. Also, there’s no such thing as the “British Lyceum of Natural History”. Nothing about the Fiji mermaid was real except the public’s excitement.

Humbug

There is a Barnum-esque blurry gray area between “hoax” and “entertaining joke”. While Barnum liked to categorize things like the Fiji mermaid as “humbugs” (things designed to deceive), he felt they were always in playful fun. Barnum wanted the audience, even when deceived, to still have a good time. He did not like deception at the expense of the public. For example, he spoke out publicly (and testified in court) against spiritual mediums who tricked people out of money, lying to them about communicating with deceased loved ones.

Over the years numerous other Fiji mermaids have made the rounds in museums, curiosity shops, sideshows, and private collections. They’re made from all manner of materials (animal parts, wood, papier-mâché, wire, plastic, etc.). You can find higher-quality ones for sale in shops that specialize in curious objects, but there are also cheaper ones on eBay. You can also learn to build your own.

Added info: The Jenny Haniver is a related taxidermy hoax. It’s a sea animal, frequently a ray or skate, that’s been modified to look like the mummified remains of a demon, angel, basilisk, etc.

Also, P.T. Barnum never said “There’s a sucker born every minute.” It was said by banker David Hannum who had purchased a hoax giant which he charged the public to see.

Barnum Museum curator Adrienne Saint-Pierre discusses the Fiji mermaid.

Learn some tips & tricks to building your own Fiji mermaid.

In the X-Files episode “Humbug” Agent Scully enters a curiosity shop where the Fiji mermaid gets mentioned. The owner of the shop also has a clever humbug of his own in the style of Barnum’s famous signage leading people to the Egress.

Lawns

Beautiful, orderly, ecological problems.

It used to be that, if you owned land, you used it to grow plants for some kind of profit (food, timber, fabric, etc.). Decorative manicured grounds, by contrast, produced no income. To keep a grassy lawn was a sign of wealth – a status symbol showing you had so much money you could use some of your land for pure ornamentation. Beyond being a “waste of space”, you also had to pay people to maintain the lawn, making it even more expensive.

Our modern idea of a meticulously manicured grassy lawn has its roots in 18th century European aristocracy. While earlier palaces featured intensely manicured gardens with topiaries and geometric lines (such as the Palace of Versailles), 18th century English garden design drew inspiration from the pastoral landscapes of Italian paintings. This new style featured wide open spaces that, while manicured, looked more natural. For example, some estates used ha-ha walls as barriers to keep grazing animals away from the house while offering the illusion of an uninterrupted natural view of the grounds.

As for the upkeep, grazing animals were sometimes used to maintain the lawn in the distance (and were a visual addition to the “natural” scene) but the areas closest to the house were tended to by men using hand tools. Even after the invention of the lawn mower in 1830, which helped increase the number of grassy lawns, these trimmed green fields were found primarily around the homes of the wealthy.

Imported Grass

17th century colonists arriving in North America were generally preoccupied with trying to stay alive and didn’t have time for decorative lawns. They were also missing the grass itself. The East Coast lacked the types of grasses necessary to turn into lawns. What’s worse, these missing grasses were also the kinds that best served as food for the colonists’ grazing animals. As such the animals overgrazed the available native plants, eventually turning in desperation to eating poisonous plants (to their detriment).

To solve this problem colonists began to import grass from Europe for their cows, sheep, etc. This is how many of the grasses that are so common in America got here. For example Kentucky bluegrass, one of the most popular grasses in America, is a non-native/invasive species and was imported from Europe.

Suburban America

As settlers spread around North America so too did grass. Throughout the 19th century as people became more established, grassy lawns slowly became a feature of homes and parks. After the Civil War the more prosperous northern states adopted lawns sooner than southern states. Public parks and cemeteries increased the popularity of grassy lawns. Landscape architect Frederick Law Olmsted designed one of the earliest suburbs in 1868 with his plans for Riverside, Illinois. He set the homes back 30ft from the street and placed grassy lawns out front. What really democratized lawns however was the housing boom in the mid 20th-century.

With the 1944 G.I. Bill millions of veterans were able to receive home loans, which helped them buy homes and move to the suburbs. Abe Levitt, who created the Levittowns, said that “A fine lawn makes a frame for a dwelling …”. Millions of homes were suddenly being created with millions of lawns. As so many families became homeowners, lawns became less about economic status and more about cultural conformity. A well-maintained lawn was the sign of a good neighbor, and an unkempt lawn was subversive. Lawn care became big business and articles about lawn care surged in post-war America. With color TV more people could watch professional sports (especially golf) and see what was possible for their own lawns.

The Wasteland

Today there are an estimated 40 million acres of grass in America. Grass is America’s largest crop, all while being (generally) inedible – lawns serve almost no functional purpose other than looking nice. Cutting grass regularly encourages it to spread out, edging out other plants and reducing biodiversity. Interestingly, more affluent homes, which can afford the time & money needed for a more manicured lawn, actually have lower biodiversity than lower-income homes. The nicest looking lawns are, paradoxically, the worst for the environment.

As for carbon emissions, grass is a carbon sink (which is a good thing), meaning it captures carbon dioxide and stores it in its roots. Unfortunately the act of mowing the lawn contributes far more carbon dioxide than the grass captures. Gas-powered lawn equipment produces more air pollution than cars over comparable periods of time (for example, the air pollution of 1 hour of mowing equals around 100 miles of driving). Lawn mowers account for around 5% of America’s air pollution. Having and maintaining a lawn ultimately produces more carbon dioxide than it captures. Further, lawn equipment in America uses around 800 million gallons of gasoline annually, of which about 17 million gallons are spilled and never even used.

Homeowners use 10 times as much pesticide and fertilizer per acre as farmers do, and many of these chemicals find their way into the water supply. Watering these lawns uses 30–60% of urban fresh water – all for a crop that isn’t eaten and just sits there.

Go Native

Alternatives to lawns include trees and other native plants that require less maintenance (fewer gas-powered machines) and improve biodiversity. Native plants are better for butterflies, bees, and other helpful insects. This in turn is better for birds and other animals. Planting native plants, not using pesticides, reducing the size of your grass lawn, etc. creates a healthier and more bird-friendly yard. Break free of the conformist thinking that you must have a green carpet around your house.