Part of the survival strategy of cicadas is to emerge in prime number intervals.
There are thousands of species of cicadas. As nymphs they live most of their lives underground, only to emerge when they are ready to transform into adults, sing, mate, and die. Their time above ground is about a month.
Broadly speaking, cicadas can be divided into two groups:
• Annual cicadas: those with relatively short life cycles, some of which appear every year
• Periodical cicadas: those that live underground for over a decade and only come above ground in synchronized intervals
Periodical cicadas are found only in the eastern areas of North America. Instead of a few here and a few there coming above ground every year, periodical cicadas (divided into 15 geographic broods) appear all together at designated intervals. Their synchronized appearances, every 13 years or every 17 years, are what make them remarkable.
Prime Number Survival
Part of the survival strategy of the periodical cicadas is that they appear all together. Millions to billions of cicadas all emerging in the same short window of time ensures that, while many will be killed by predators, the majority will survive to continue the species. Simply put, there are so many cicadas appearing all at once that predators can’t eat them fast enough – a survival concept known as predator satiation.
Predator satiation is a numbers game. It only works in large numbers relative to the number of predators. For a brood of periodical cicadas this means they have to be synchronized and appear all at the same time; otherwise their numbers might be too low, too many may be eaten, and they could die off.
What a brood of periodical cicadas doesn’t want is another brood appearing at the same time. For one thing, they would be competing for resources. What’s worse, if a 13-year brood and a 17-year brood interbreed, the inner clocks of their offspring may become confused. The result could ruin the synchronized timing of their appearances, which they need for predator satiation. Their survival depends on avoiding other broods of cicadas. Enter prime numbers.
Prime numbers are numbers greater than 1 that are divisible only by themselves and 1. The periodical cicadas of North America appear in prime number intervals of either 13 or 17 years. If you create a list of years and mark every 13th year as well as every 17th year, they rarely overlap. In fact, 13-year cicadas and 17-year cicadas only overlap every 221 years (13 × 17). If they appeared in composite number intervals (4, 6, 8, 9, etc.) they would overlap constantly and most likely die out. Through evolution, the periodical cicadas that used prime numbers have survived.
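The overlap arithmetic can be checked directly – two cycles only coincide at their least common multiple. A minimal Python sketch (the 12/16 composite pairing is just an illustrative contrast):

```python
# Two broods emerging every a and b years coincide at multiples
# of lcm(a, b). Prime cycles push that overlap far into the future.
from math import lcm

def overlap_interval(a: int, b: int) -> int:
    """Years between simultaneous emergences of two broods."""
    return lcm(a, b)

print(overlap_interval(13, 17))  # prime cycles: 221 years
print(overlap_interval(12, 16))  # composite cycles: only 48 years
```

Because 13 and 17 share no factors, their least common multiple is simply their product – the worst possible alignment from a predator’s point of view.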
The exact origin of abracadabra is unknown but what is known is, before its modern usage by stage magicians, it was used as a real magical incantation. The earliest documented instance is the 2nd-century medical text Liber Medicinalis by Serenus Sammonicus. As physician to the Roman emperor Caracalla, Sammonicus prescribed wearing an amulet with the word abracadabra written on it to cure malaria.
Abracadabra’s use in healing magic may have to do with its possible etymologies. One possibility is that it comes from the Hebrew “ebrah k’dabri” or “I will create as I speak”. Or it may have come from “Abraxas”, the mystical word/god from the Gnostic belief system. One language it’s not from is Aramaic (which the internet likes to say it is). The false Aramaic etymology, that it comes from “Abra Kadabra” meaning “May the thing be destroyed”, became a popular internet “factoid” because J.K. Rowling used it as the basis for her “Avada Kedavra” spell in the Harry Potter series (a spell that does not cure malaria … or anything else).
Abracadabra became a popular protective magical word to cure a variety of ills. One application was to write abracadabra out 11 times but each time removing the new last letter, forming a triangle pointing down. This could be written on parchment and worn around the neck, or carved into a pendant of some kind, but the idea was the same – you used the word to summon protective spirits. As you worked your way down, abracadabra would disappear and hopefully so would your illness.
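The shrinking amulet text is easy to reproduce. A small Python sketch (the centering is just one plausible layout for the downward-pointing triangle):

```python
# Write ABRACADABRA, then drop the last letter on each successive
# line, forming a triangle that narrows to a single A.
word = "ABRACADABRA"
for i in range(len(word), 0, -1):
    print(word[:i].center(len(word)))
```

The 11-letter word conveniently yields the 11 lines described above, ending with a lone “A”.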
From Real Magic to Stage “Magic”
Over the millennia, as our scientific knowledge grew, we learned more about medicine and our belief in magic diminished. In general we no longer rely on magic to cure or protect us from the unknown. Our scientific understanding of the world leaves little room for magic, in a similar way to how we no longer have sea monsters on our maps. Magic went from being a highly regarded area of study to fun, entertaining tricks and illusions with rabbits in hats, decks of cards, sleight of hand, magic wands, etc. Similarly, abracadabra went from being a real magic word to being a performative word for stage magicians.
Early adopters of Parisian fashion helped make smallpox inoculations popular.
Inoculation is when you purposefully give someone an “antigenic substance” (a substance that triggers an immune response) to generate antibodies and help develop immunity to a particular disease. Around 1500 CE the Chinese developed a practice of inhaling a powder made from ground up smallpox crusts. By ingesting a less harmful version of the disease their immune systems could learn to fight the real thing. The Ethiopians and the Turks had a similar but different practice. They would make a small incision in the arm and place a piece of smallpox pustule inside, with the same goal of triggering an immune response and hopefully developing immunity.
Lady Mary Wortley Montagu of England saw the Turkish method while her husband was ambassador to the Ottoman Empire. She brought the technique to Western Europe and had her daughter inoculated in 1721. Despite evidence of success, westerners were skeptical of smallpox inoculations. When the Turkish procedure was done incorrectly the patient could get full-blown smallpox which has a fatality rate around 30% (or higher in children). Inoculations were an especially difficult sell in France, until smallpox killed King Louis XV and 10 of his courtiers in 1774.
After the death of Louis XV, a nineteen-year-old Louis XVI was suddenly very motivated to get inoculated (additionally encouraged by his wife, Marie Antoinette, who had previously been inoculated back home in Austria). Soon others in the French royal court chose to follow suit. The royal court getting inoculated helped make the procedure more acceptable, but what really helped was Marie Antoinette’s hair.
To celebrate the king’s inoculation, Antoinette had a special gravity-defying pouf hairstyle constructed: the pouf à l’inoculation. The inoculation pouf featured a rising sun representing the king, an olive tree representing peace, and the rod of Asclepius representing medicine. Soon other women wanted the same trendy hairstyle as the queen, and as the pouf à l’inoculation became popular around Paris so too did smallpox inoculations. An inoculation is a fairly invisible procedure, but a spectacular hairstyle was a walking billboard celebrating that you had been successfully inoculated.
In his 1962 book Diffusion of Innovations, Dr. Everett Rogers theorizes how and why innovative ideas/products are adopted (or rejected). After the initial stage where innovators introduce a new product, the early adopters evaluate if it’s worthwhile. Sometimes called “lighthouse customers”, early adopters serve as messengers & guides, communicating the values of a new product to others. While members of each stage of the innovation adoption lifecycle require their own marketing strategy, a key to the early majority adopting a new product is the approval of the early adopters. Once early adopters give the thumbs up, the early majority accept the new product and success is all but inevitable.
The queen’s hairstyle influenced the royal courtiers, who influenced the bourgeoisie, who in turn influenced the population at large. Smallpox inoculation was an unknown, scary, and seemingly counter-intuitive procedure, but it was made fashionable (desirable even) through early adopters celebrating it. By making medicine a cool status symbol people everywhere wanted it.
Added info: While it’s fairly well known that Marie Antoinette never said “Let them eat cake”, and that “cake” in this case meant a form of bread, she was still unfairly vilified. Overall she seems to have been a decent queen (as monarchs go), but she did live a wildly extravagant lifestyle which certainly made her seem detached from the struggles of the common people.
Pure white cats only make up about 5% of the overall cat population. Of these white cats however, 72% are deaf. In cat genetics, the gene that gives a pure white cat its white fur is also linked to the development of its ears and eyes. This is especially true of white cats with blue eyes.
This is only true of pure white cats, those with entirely white fur. Cats that are mostly white but have colored markings, pointed patterns, etc. don’t count as pure white cats, and there is no connection between their eye color and their hearing. Siamese cats, for instance, are mostly white, but they aren’t pure white cats and so there is no genetic relationship between their blue eyes and their hearing.
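Combining the two stated rates gives a rough sense of how rare deaf white cats are in the overall population – a quick back-of-the-envelope calculation:

```python
# Multiply the share of pure white cats by the deafness rate among
# them to get the share of all cats that are both white and deaf.
white_rate = 0.05        # pure white cats in the overall population
deaf_given_white = 0.72  # deaf, among pure white cats

deaf_white_rate = white_rate * deaf_given_white
print(f"{deaf_white_rate:.1%}")  # 3.6% of all cats
```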
Added info: Interesting side fact, the pigmentation in the coat of a Siamese cat is heat sensitive and changes color based on temperature. The colder parts of their bodies (their extremities: feet, ears, nose, tail) are usually darker while the warmer parts of their bodies are lighter. If you send a Siamese cat outside in the cold winter months they will get darker, and upon returning to a warm house they will get lighter.
Also, in the 1970 Disney film The Aristocats the mother cat Duchess and her kitten daughter Marie beat the odds and are both white cats, with blue eyes, but normal hearing.
How one asymptomatic woman spread typhoid to dozens of people and raised a host of bioethical questions.
Mary Mallon was born in Cookstown, County Tyrone in Ireland in 1869. She emigrated to New York City when she was 15 and worked her way up through the servant ranks to the highly respectable position of cook. Over the years she ran the kitchens & cooked for various families around the city. In the summer of 1906 she was the cook for the Warren family (Charles Warren, banker to the Vanderbilts) as they vacationed in a rental house in the very upscale Oyster Bay, Long Island.
Over the course of that summer, 6 members of the household got sick with typhoid. No one else in Oyster Bay contracted the disease, a disease typically associated with the poor. Concerned for the reputation of the rental house, the owner of the home knew the source of the typhoid had to be found or it would be difficult to ever rent the home again. George Soper, a freelance civil engineer, was hired to investigate the source and he traced it back to the Warren family’s former cook, Mary Mallon.
Typhoid, or more formally typhoid fever, is caused by a form of salmonella (a bacterium) that can spread through tainted water or food that has come into contact with fecal matter. You find it in places with poor hygiene and poor sanitation, which is why it is generally associated with the poor.
New York City in the early 20th century was a much dirtier place than today. The population of the city was doubling every decade. The tenement housing of Manhattan’s Lower East Side was an overcrowded jungle of people and it was common for a family of 10 to live in a 325 square foot apartment. Add to the mix the 150,000 – 200,000 horses of the city, each of which created about 25 pounds of manure a day. It all led to filthy conditions that were ideal for typhoid and other bacterial diseases.
Soper tracked down Mary and documented a trail of typhoid in her wake. Over 10 years Mary had worked for 8 different New York families; 6 of those families contracted typhoid and 1 person died. Despite this evidence Mary was adamant that she never had typhoid and never felt sick. She was partially right.
It turned out that she was a “healthy carrier” of typhoid, someone who had the disease but never really felt sick. She was asymptomatic and went about her life unaware that she even had the disease, let alone that she was spreading it to other people (not unlike asymptomatic carriers of COVID-19).
Eventually she was forced against her will into quarantine by the New York City Health Department. In 1907 she was sent to North Brother Island in the East River which was being used as a quarantine center for people sick with infectious diseases. She remained there for 3 years, during which time her story of forced quarantine made it into the papers where she was dubbed “Typhoid Mary”.
In 1910 she was released from quarantine on the condition that she would never work as a cook again, since she had most likely transmitted typhoid through the food she prepared. She kept to this agreement for a while, working as a laundress, but the pay and working conditions of a laundress were far below those of a cook for a wealthy family. Eventually she slipped away from public health officials and started work as a cook again under assumed names. She was caught working at Sloane Hospital for Women after an outbreak of typhoid infected 25 people and killed 2. She was sent back to North Brother Island where she lived until she died in 1938 at the age of 69 (still carrying typhoid).
Mary Mallon’s legacy is one of bioethical questions. In the early 20th century the science of communicable diseases was in its infancy, and Mary’s suspicion of the New York Health Department was not unusual. She felt fine, so how could she be carrying/spreading a deadly disease?
Her quarantining raises ethical questions of how far the government should go to protect the general public. When weighing an individual’s civil liberties against the health of the public, which is greater? Despite never being convicted of a crime she was imprisoned on North Brother Island for the safety of the public. Was it more ethical to quarantine her the first time or the second time, or at all? Knowing that other people were also asymptomatic carriers of typhoid, why was she kept in isolation for nearly 30 years while others walked free? As a healthy carrier she was an unlucky, innocent victim of a disease, but she also chose to go back to cooking, which she knew might endanger lives. The questions raised by Typhoid Mary are still relevant today.
Added info: There is a good hour-long documentary by PBS, The Most Dangerous Woman in America, on the story of Mary Mallon. You can also find a bootleg copy of the documentary on YouTube.
An empty gas tank allows water condensation to accumulate and potentially damage your engine.
In Winter, Keep Gas In Your Car
When warm air and cold air come into contact with one another they create condensation. This is how storms work. An “empty” gas tank contains more air than gasoline and when that air is warmer than the colder air outside, condensation can build up inside the tank and drip down to mix with the gasoline.
When water and gasoline mix, the water sinks to the bottom. Among other possible effects, if the weather is cold enough the water can freeze in the fuel line and prevent gasoline from getting to the engine. A frozen fuel line will prevent you from starting your vehicle. This is why you’re supposed to keep your gas tank full in the winter. While not a concern in warmer climates where winters are mild, this can be a considerable problem in environments that experience especially cold winters.
Something that helps combat freezing temperatures is winter blend gasoline. Compared to summer blend, winter blend gasoline is cheaper but also worse for the environment. However, winter gasoline’s higher volatility allows it to ignite more easily in colder weather. So if you have frozen water in your fuel line, or it’s too cold for the engine in general, any winter blend gas able to reach the engine should at least start your vehicle more easily. Still, if you experience especially frigid winters, you should always keep gas in your tank.
Added info: In areas of extreme cold, where the temperature can regularly drop to -15° C (5° F), engines can strain to start and engine fluids can become more viscous. In these regions vehicles are frequently equipped with block heaters, aftermarket add-ons that are plugged into an external power source to heat up the engine before starting the car. It is not uncommon in parts of Alaska and northern Canada to see cars with electrical plugs hanging out of their grilles, attached to block heaters.
Through his medical investigation, Dr. John Snow helped solve how cholera is spread and created a legendary data visualization in the process.
With the Industrial Revolution, London’s population grew enormously. People from the countryside moved to the city for work and for a different life. London became the largest city on Earth. Between 1750 and 1850 it’s estimated that London’s population more than doubled, from around 1 million to around 2.3 million people. What grew with it was a civil engineering crisis in how to handle so many people in such close quarters. In short: what to do with the filth? By 1850 modern plumbing had not been extended to all parts of the city, specifically the Soho area. People had cesspools in their basements where they would empty their waste. In other places the sewage was emptied into the River Thames, which was also a source of drinking water.
Modern germ theory states that microscopic organisms are responsible for the spread of disease. Before we understood this, people believed in the miasma theory, which claimed that disease was spread by “bad air”. For centuries people believed that epidemics were spread by dirty air; they had no knowledge of microorganisms. The theory wasn’t entirely misguided. Things that smell bad frequently do, in fact, carry disease. So while “bad air” may be a warning sign that disease is present, it’s not the air itself that causes disease. In mid-19th-century London, miasma theory was the prevailing scientific theory, but some scientists were beginning to doubt its validity.
You Know Something, John Snow
Cholera is spread through tainted water or food that has come into contact with fecal matter. Between 1846 and 1860 the world was in a cholera pandemic, and in 1854 there was an outbreak in the Soho district of London. Nobody knew exactly how cholera spread, but Dr. John Snow had a theory that it wasn’t miasma. A few years earlier, in 1849, he published On the Mode of Communication of Cholera, where he laid out a theory that a germ (which had yet to be identified) was responsible for cholera. He believed that cholera was spread by “…the emptying of sewers into the drinking water of the community.” The 1854 outbreak in Soho gave him a chance to prove his theory.
In the first 7 days of the outbreak 10% of the neighborhood died. Like a medical detective Snow began investigating the addresses of the deaths. He spoke to residents of the area, he asked where they got their water from, he took down notes, he looked at the sources of water for that part of London. The thing that was truly groundbreaking was that he visualized his data. He drew a map of the area, he noted the locations of water sources, and he added black bars at the addresses where deaths had occurred.
Unlike a data table, a data visualization has the ability to quickly & easily show trends. With a glance you can see patterns or outliers. You can tell a visual story with numbers. As Snow’s visualization grew he could see that cholera deaths clustered around one water source in particular: the Broad Street pump. He was able to show that other addresses in the area, which had their own private water sources (such as a local workhouse and a brewery), were mostly spared. The workhouse had 18 deaths, but all of those individuals had separately gone to drink water from the Broad Street pump. This helped disprove the miasma theory because all of the workers should have gotten sick from the same “bad air”, but they didn’t. He took his findings to the local authorities. They found that the Broad Street pump was near a cholera-infected home whose cesspool was leaking into the surrounding soil and infecting the water supply. Authorities removed the handle from the pump and deaths decreased.
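Snow’s reasoning can be sketched in modern terms: assign each death to its closest water source and tally the clusters. The coordinates and the second pump name below are invented purely for illustration:

```python
# Nearest-source clustering: each death is attributed to whichever
# pump is geometrically closest, then the attributions are counted.
from math import dist
from collections import Counter

pumps = {"Broad St": (0.0, 0.0), "Other pump": (5.0, 2.0)}   # made-up coordinates
deaths = [(0.3, 0.1), (0.5, -0.2), (4.8, 2.1), (0.1, 0.4)]   # made-up addresses

nearest = Counter(
    min(pumps, key=lambda name: dist(pumps[name], d)) for d in deaths
)
print(nearest)  # deaths cluster around the Broad St pump
```

The map did this same attribution visually: black bars at death addresses made the clustering around Broad Street obvious at a glance.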
The Visualization of Data
To say that John Snow’s cholera map is legendary is not an exaggeration. Anyone with a passing knowledge of data visualization knows about his map. Modern epidemiologists still talk about his work. Snow’s methodical approach to data collection & data visualization influenced public policy and helped London prepare for the next cholera outbreak. It helped disprove miasma theory and advanced the modern germ theory we still use today. His cholera map helped make John Snow the father of modern epidemiology.
You can see the evolution of Snow’s work in today’s COVID-19 reporting. Contact tracing, the mapping of infections, accounting for local public policies regarding masks, tracking superspreader events – it’s all influenced by Snow’s 1854 cholera map.
Added info: Today there is a replica of the water pump where the old one stood, but Broad Street is now called Broadwick Street. The pump sits just outside of the John Snow pub.
Belief in conspiracy theories comes from a desire to make sense of complex or troubling events. They try to reduce the anxiety and confusion generated by things that are hard to understand and/or don’t fit with one’s world view.
The Jews of Medieval Europe were often accused of a variety of nefarious plots. From being responsible for the death of Jesus, to poisoning water wells during the Black Death, to a sinister association with money (which serves as a foundation for later conspiracy theories), the Jews have been victims of conspiracy theories for thousands of years. But why conspiracy theories are so attractive to so many people is complicated.
Out of Control
Conspiracy theories are a way of making sense of events that are hard to understand. Humans dislike uncertainty, so having an explanation (however flawed) is more attractive than doubt. Uncertainty generates anxiety and stressful times actually increase the number of people turning to conspiracy theories as a way to alleviate their anxiety.
For example, there are numerous conspiracy theories surrounding coronavirus – it was engineered by the Chinese government, or it was engineered by Bill Gates, or it’s being spread by 5G cell phone towers. The 1889 global influenza pandemic (the “Russian flu”) was blamed on electric lights, telegraph poles, and even just electricity in general. What’s old is new again. People are afraid of a deadly virus that isn’t fully understood and so they blame a new technology that they also don’t fully understand.
The world is largely out of our control and major world events can remind us of how little control we have. Conspiracy theories give a feeling of control to people who feel anxious about a situation they can’t control. Many events are the complex result of a confluence of factors, and sometimes things just happen at random. Neither of these makes people feel good. Complexity is not the soundbite people want. Instead it is much more attractive to believe in a fictional, simplistic narrative where there are clearly defined good guys and bad guys and you can blame the bad guys for what’s happening. People like easy-to-understand stories rather than complicated chaos. In having a target to blame, a conspiracy theory believer can take action and have some degree of control rather than being powerless against a complicated abstract concept.
Humans are also pattern recognition machines. Unfortunately we also imagine patterns where there are none. Gamblers and sports fans see streaks and patterns where mathematically there is nothing more than normal chance. People who see non-existent patterns in normal life are more likely to believe in conspiracy theories. In conspiracy theories people construct connections and see patterns where there are none in an attempt to create a story that feels better than uncertainty.
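That streaks arise naturally from pure chance is easy to demonstrate with a quick simulation – flip a fair coin 1,000 times and look for the longest run of identical outcomes:

```python
# Even a perfectly fair coin produces surprisingly long streaks;
# a run of ~10 heads in 1,000 flips is normal, not a "pattern".
import random

random.seed(42)  # fixed seed so the demonstration is repeatable
flips = [random.choice("HT") for _ in range(1000)]

longest = run = 1
for prev, cur in zip(flips, flips[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)

print(longest)  # a long streak emerges from pure randomness
```

A gambler watching that streak live would swear the coin was “hot” – the same machinery that feeds conspiracy thinking.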
On the Inside
While people who believe in conspiracy theories come from all economic levels, genders, political affiliations, and racial backgrounds, there are a few patterns that exist. For one, people who believe in one conspiracy theory are statistically more likely to believe in additional unrelated theories. Also, belief in conspiracy theories is fueled by the anxiety of not understanding why things happen, and the people who are most likely to not understand things are the less educated.
While conspiracy theories range from the small to large, major world events are more likely to be the focus of conspiracy theories because the effects of such events are so impactful. People want big meaningful events to have equally big and meaningful explanations. This is proportionality bias. The JFK assassination is the focus of numerous conspiracy theories, but the attempted assassination of Ronald Reagan is not. Even though both events are similar in nature, for most people nothing really resulted from the failed assassination of Reagan and so a simple explanation was sufficient. For the JFK assassination however, the idea that one deranged person could cause so much chaos wasn’t a big enough answer for such a big event.
Belief Over Facts
Ultimately believing in conspiracy theories is about belief – it is not about facts. People who believe in conspiracy theories have an insular and circular logic that shields them from the real world. Facts that contradict a conspiracy theory are met with suspicion and are thought of as part of the conspiracy. At the same time the absence of proof to support a conspiracy theory can be seen as proof of the conspiracy theory. It’s an echo chamber shielded from reality.
In 2016 Dr. David Grimes created a formula for how long a conspiracy could realistically stay a secret before being exposed to the public. The more people involved, and the more time that passes, the more likely that someone will say something. For example, the moon landing involved around 411,000 NASA employees. As of today it is extremely unlikely that the moon landing was a hoax because it would have meant that almost half a million people were sworn to secrecy and not a single one of them ever let anything slip for decades. Grimes’s formula demonstrates just how unlikely it is for most conspiracy theories to be true. Information wants to be free. But belief in conspiracy theories continues.
Conspiracy theories are contradictorily both known and unknown. The believer has secret knowledge but also lacks any real evidence. It is very unlikely that a conspiracy could be partially leaked without any real evidence being revealed. But believers are not deterred, because conspiracy theories aren’t about facts.
In an age of unprecedented access to information some people have sought emotional refuge in baseless fictional narratives. Conspiracy theories are a symptom, but not the cause, of ignorance. It is easier to prevent a conspiracy theory from taking hold than to change someone’s mind once they believe. For those who already believe, psychologists say it is better to treat the root cause of a believer’s ignorance than to try and dissuade them from a particular conspiracy theory.
So whether it’s the suspicion of witches, the Knights Templar, the Freemasons, the Communist red scare, the Protocols of the Elders of Zion, the Illuminati, crop circles, water fluoridation, Area 51, the Royal Family assassinated Princess Diana, 9/11 was an inside job, chemtrails, Obama wasn’t born in America, QAnon, flat earth theory, the deep state, anti-vaxxers, or that coronavirus is being spread by cell phone towers … knowledge from reliable sources and improving critical thinking skills are the best ways to reduce belief in conspiracy theories.
Humans originally had brown eyes until genetic mutations started making variations. No two eyes are identical, not even your own.
Your eye color & design are as unique as your fingerprints. Several gene variations all contribute to giving each of your eyes a particular design and shade of color (or colors, plural, if you are heterochromatic) that nobody else has. Not even your own two eyes are identical. Originally all humans had dark brown eyes (along with dark brown skin), which helped protect against the harsh rays of the sun. As groups of humans migrated out of Africa and up into Europe, where there are seasons with less sunlight and the land is further from the direct sunlight of the equator, there was no longer a need for so much protection from the sun’s harmful UV light. This is where the first mutation in human eye color took place. Sometime between 6,000 and 10,000 years ago the first blue-eyed person was born, from whom all other blue-eyed people are descended.
Our skin and hair are colored by the brown pigment called melanin. The back of our irises also contains melanin, which gives our eyes color. Melanin is brown, and so brown eyes using brown pigmentation is easy to understand. Light enters the iris, the melanin absorbs some wavelengths of light and reflects back out the wavelengths needed to make the color brown, making brown eyes brown.
You would think then that blue eyes use blue pigmentation, but they don’t. Blue eyes use the same brown pigmentation as brown eyes, just in lesser quantities. The other trick is that, since eyes are three dimensional, blue eyes absorb and scatter wavelengths of light differently than brown eyes. The longer wavelengths (reds & oranges) get broken up inside the eye and only the shorter wavelengths (blues) get reflected back out, making blue eyes look blue. This scattering effect of allowing/blocking certain wavelengths is also what gives the sky its color.
Brown, blue, gray, hazel, green – all eye colors use some amount of brown melanin combined with various ways of scattering & absorbing light to make whatever color the eyes are. Depending on the severity of their condition, humans with albinism can lack the necessary pigmentation to make their irises as opaque as other people’s. Irises that fail to block excessive light from entering the retina can cause a variety of vision problems including extreme sensitivity to bright light.
What does eye color “do”?
As for any potential purpose, eye color doesn’t “do” much. Unlike other genetic traits which evolution embraced because they helped our chances of survival, eye color variations seem to be largely perpetuated through romantic desirability. People find certain eye colors more attractive and so the genes for those colors live on. Your eyesight isn’t any better or worse because of a certain eye color. Light travels through the pupil to the retina, so the color of the iris doesn’t change what you see. This is why cosmetic contact lenses can change your eye color without changing what you are seeing – they are only covering the iris.
There are some minor effects of having different eye colors. Because of how light is scattered and absorbed inside the eye, lighter colored eyes are sometimes more sensitive to bright light, leading some people to squint more and to wear sunglasses more frequently. Driving at night can also be difficult because the glare from oncoming traffic can be harsher. People with light colored eyes are more likely to develop macular degeneration, but people with brown eyes are more likely to develop cataracts. Finally, people with light colored eyes tend to perform better in sports where they control the hand-eye coordinated action, such as bowling, golfing, or pitching. Brown-eyed people tend to be better at sports where they react, such as hitting a ball, boxing, or playing defense.
Added info: Getting reliable statistics on eye colors is difficult. That said, brown eyes are the most common type in the world, at somewhere between 55–79%. Gray eyes seem to be the rarest at less than 1%. The eyes of some babies start out as blue but eventually become green or brown as their eyes develop more melanin. Few blue-eyed babies will have blue eyes as adults.
Liz Taylor was said to have violet eyes, but in reality she had blue eyes with a genetic mutation that gave her double rows of eyelashes, which made her eyes look more purple. David Bowie was known for having two different colored eyes, but this was also an illusion. When he was 15 he got in a fight over a girl and his left eye was damaged, leaving the pupil permanently dilated. This gave the impression that he had one black eye and one blue eye, but in reality both of his eyes were blue.
Leaves change colors in autumn because the temperature drops and the hours of sunlight diminish.
The leaves of deciduous plants change color as summer ends and autumn begins. Cooler temperatures and fewer hours of sunlight trigger chain reactions in how plants operate. When leaves are green it’s because of an abundance of chlorophyll, which absorbs sunlight and produces simple sugars to feed the plant. Autumn’s seasonal changes tell plants to produce less chlorophyll. Autumn also tells plants to start producing special corking cells at the base of each leaf stem. These cells reduce the flow of nutrients (including chlorophyll) into or out of the leaves and eventually completely seal off the leaves from the branches, allowing them to fall off.
So as chlorophyll production decreases, and leaves become sealed off from the rest of the plant, leaves can no longer stay as green as they were. What color they become depends on the plant.
Yellows and Oranges
Chlorophyll is the big green machine; it outweighs all other chemicals present in a leaf. As there is less and less chlorophyll present, the carotenoids that have been present the whole time begin to be visible. These chemicals give leaves different shades of yellow and orange depending on the plant. Carotenoids are also what make carrots orange.
Unlike carotenoids which are present in leaves all year, anthocyanins are pigments specially produced just for autumn. The end of summer triggers the production of anthocyanins which give leaves the deeper colors of reds, purples, and even blacks. Anthocyanins also are what give cranberries, cherries, and blueberries their colors.
Much like a corporation, change takes place from the top down. Higher elevations see leaves change colors earlier than lower elevations. This is because higher elevations reach the cooler temperatures necessary to trigger the change before the same plants in lower elevations which have warmer temperatures.
Change also takes place top down in another way. The top leaves of a plant will typically begin changing color before the bottom leaves. This is because the top leaves are furthest from the roots and, since it takes more effort to send chlorophyll to the top of a plant, the top leaves will start losing their green color first.