1903: First Successful Airplane

The entire flight lasted only twelve seconds, but its consequences would last for the rest of human civilization. On December 17, 1903, just outside Kitty Hawk, North Carolina, Orville and Wilbur Wright completed the first successful sustained and controlled, powered, heavier-than-air flight.

The Wright Brothers’ First Successful Flight

Flight Before the Airplane

First Hot Air Balloon Demonstration at Annonay, France, June 1783

Flight had been attempted long before the Wright Brothers’ success. People have always dreamed of soaring with the birds. Kites were flown in China hundreds of years BCE. The Renaissance artist and inventor Leonardo da Vinci famously sketched, but never constructed, flying machines in his notebooks. The earliest successful attempts at flight were made with lighter-than-air balloons and winged gliders.

The first hot air balloon flight was demonstrated in June 1783 by the French brothers Jacques-Étienne and Joseph-Michel Montgolfier. Three months later another of their balloons carried the first living beings into the air – a sheep, a duck, and a rooster. Then, on November 21, 1783, the first manned hot air balloon flight took place, carrying a doctor and an army officer a distance of six miles over Paris, France.

Gliders were the first heavier-than-air, unpowered flyers. They required the pilot to launch himself into the wind from an elevated location, while the special shape of the wing provided the lift that kept the glider aloft. The first successful gliding flight occurred in 1849, using a design by Sir George Cayley. The most influential figure in gliding was Otto Lilienthal, who from 1891 to 1896 performed over 2,000 successful flights testing different glider designs. His fame inspired many others to experiment with flight, including the Wright Brothers.

Enter the Wright Flyer

In order to achieve flight, a device must produce more lift than its weight. Balloons achieve this by being less dense than the air around them. Gliders use specially shaped wings to glide through the air. Their drawback is that they lack a source of thrust needed for sustained flight.

The Wright Brothers initially experimented with gliders so they could master balance and control while continually improving their design. From 1900 to 1902 they constructed and tested various glider designs at Kitty Hawk. They initially struggled to produce enough lift: their gliders generated only about one third of the lift calculated from the established lift equation. This led the brothers to question the Smeaton coefficient, the value in the equation representing air pressure. They collected their own aerodynamic data and determined Smeaton’s coefficient to be close to 0.0033, well below the accepted value of 0.0054. With more accurate data they were better able to construct reliable designs.
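
To see why the coefficient mattered, here is a small Python sketch of the empirical lift equation of the era, lift = k × S × V² × CL. The wing area, speed, and lift coefficient below are hypothetical stand-ins chosen only to show the effect of changing k; they are not the Wrights’ measured values.

```python
def lift_lbs(k: float, area_sqft: float, speed_mph: float, c_lift: float) -> float:
    """Lift (lbs) from the 19th-century empirical equation L = k * S * V^2 * CL."""
    return k * area_sqft * speed_mph**2 * c_lift

area, speed, c_l = 290.0, 21.0, 0.5   # hypothetical glider parameters

old = lift_lbs(0.0054, area, speed, c_l)  # the long-accepted Smeaton coefficient
new = lift_lbs(0.0033, area, speed, c_l)  # the Wrights' own measured value

print(f"Predicted lift with k=0.0054: {old:.0f} lbs")
print(f"Predicted lift with k=0.0033: {new:.0f} lbs")
print(f"Ratio: {new / old:.2f}")  # ~0.61 -- the corrected coefficient accounts
# for a large share of the lift shortfall the text describes
```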

In 1902, armed with more accurate data, they decided to build one more glider before attempting a powered airplane. Improvements were made to the rudder and control system. Between September and October they made over 500 successful flights, some covering more than 600 feet. They were now ready to construct their airplane.

The airplane had to account for the added weight of its 200-pound propulsion system. This required increasing the wing area to over 500 square feet, along with other improvements in wing design to increase lift. The brothers built their own crude engine out of lightweight aluminum, capable of producing twelve horsepower, and used a chain-and-sprocket transmission system, similar to a bicycle’s, to transmit power from the engine to the propellers.

In late 1903 the brothers returned to Kitty Hawk to test their airplane, which they named the Wright Flyer. They conducted four successful flights on December 17, 1903; the longest covered 852 feet and lasted 59 seconds.

The World Takes Flight

Apollo 11 Space Launch to the Moon
(Credit: Wikimedia Commons)

The success of the Wright Brothers was initially met with much skepticism, but they soon showed the way for many others to follow. In 1909 Louis Blériot flew across the English Channel, a distance of 25 miles. Charles Lindbergh crossed more than 3,000 miles of the Atlantic Ocean in 1927.

The jet engine, first flown in Germany in 1939, was another significant milestone in flight. Jet engines allowed aircraft to fly higher, where the air is thinner and drag is reduced. Flight soon revolutionized the world, in both military and commercial contexts. Air superiority played a key role in the outcome of WWII, and only a few decades later man was walking on the moon. The Voyager program now has two probes beyond the boundary of our solar system. Flight technology will play a pivotal role in humanity’s next frontier: space exploration.

Continue reading more about the exciting history of science!

1820: Discovery of Electromagnetism

Electromagnetism was discovered at the turn of the second decade of the 19th century. It is the branch of physical science in which electricity and magnetism come together. Like many discoveries in the history of science, it was not made in a single stroke of genius but through the additive work of many great thinkers over a vast stretch of time. The effects of this monumental discovery cannot be overstated, especially in today’s technological world. The principles of electromagnetism form the basis for nearly all electronic devices in use today – radar, radio, television, the internet, the personal computer, to name a few. We take these devices, and the fact that they work, for granted, but understanding the events that led to the discovery of the principles they are built on can prove illuminating.

Electromagnetic Wave

Initial Observations of Electricity and Magnetism

Compass Rose

When the electric and magnetic forces were first identified, they were considered separate forces acting independently of one another. Their effects were observed as far back as 800 BCE in Greece with the mining of lodestone, a naturally magnetic mineral. Lodestone was later used in the production of the magnetic compass, which the Chinese Han Dynasty first developed in the second century BCE, an invention that proved to have a profound impact on human civilization. Systematic scientific experiments only began in the late 16th century with the work of William Gilbert. Over the following centuries many more advances in the knowledge of electric and magnetic forces were made by scientists such as Otto von Guericke, Pieter van Musschenbroek, Benjamin Franklin, Joseph Priestley, Alessandro Volta, Luigi Galvani, and many others.

Two Key Discoveries Begin the Process of Unification

The first major hint that they were one force came during a lecture on April 21, 1820, when the Danish physicist Hans Christian Oersted noticed that his compass needle moved in the presence of an electric current. Specifically, he found that if a wire carrying an electric current is placed parallel to the needle, the needle turns at a right angle to the direction of the current. This observation prompted him to continue investigating the relationship. Several months after his lecture, Oersted published a paper demonstrating that an electric current produces a circular magnetic field as it flows through a wire. His paper showed that electricity could produce magnetic effects, but it raised another question: could the opposite happen? Could magnetism induce an electric current?

In 1831 Michael Faraday provided the answer. In a series of experiments he demonstrated that a moving magnetic field can produce an electric current, a process known as electromagnetic induction. An American physicist, Joseph Henry, independently discovered the same effect around the same time; however, Faraday published his results first. Faraday’s and Oersted’s work showed that each force can act on the other, that the relationship works both ways. The discovery of electromagnetism was now beginning to come into focus. To fully synthesize the two forces into one, a mathematical model was needed.

A Mathematical Synthesis of Electricity and Magnetism

By 1862 James Clerk Maxwell had provided the necessary mathematical framework to unite the forces into his unifying Theory of Electromagnetism. Oersted’s and Faraday’s discoveries provided the basis for his theory. Indeed, Faraday’s law of induction became one of Maxwell’s four equations.
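
For reference, here is Maxwell’s synthesis in the compact modern vector notation (a later reformulation, largely due to Oliver Heaviside, rather than the equations as Maxwell originally wrote them):

$$
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0 && \text{(no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} && \text{(Faraday's law of induction)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} && \text{(Ampère–Maxwell law)}
\end{aligned}
$$

In empty space these equations combine into a wave equation whose waves travel at $c = 1/\sqrt{\mu_0 \varepsilon_0}$, a value matching the measured speed of light: the unlikely coincidence discussed below.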

His theory also made some radical predictions for the time that were difficult for most to digest. It suggested that electromagnetic waves travel at the speed of light, a coincidence Maxwell thought highly unlikely. His equations also predicted the existence of other, then-unknown waves traveling at the speed of light. These ideas received little traction in the scientific community at large until 1887, when the German physicist Heinrich Hertz discovered radio waves. The once radical predictions derived from Maxwell’s theory had been verified.

The Electromagnetic Spectrum
(Credit: Creative Commons)

The discovery of electromagnetism changed the course of human civilization. Today it is understood as one of the four fundamental forces in nature, along with the gravitational force, the strong nuclear force, and the weak nuclear force. By understanding and applying its principles, human cultures have sparked a revolution in technology and electronics. The story of its discovery highlights the power of the scientific system of thought. The modern world as we know it would not exist without this insight into an incredible force of nature.

Continue reading more about the exciting history of science!

1860s: Germ Theory of Disease

The Germ Theory of Disease seems like common knowledge to just about everybody today, but this has not always been the case. Throughout most of history the causes of disease, the breakdown of the normal functioning of the human body, have been a mystery. This is not to say that there was a lack of ideas.

A Brief History of Disease Theory

Early medical practices were awash in trial-and-error methods and a mix of science and superstition. One popular belief, developed by the ancient Greeks and held throughout the Middle Ages, was known as humorism. Humorism held that the body contained four humors, linked to the four major bodily fluids and the four elements. An imbalance in the humors was believed to cause disease, and cures revolved around restoring the balance.

Humorism: The Four Humors
(Credit: Tom Lemmens)

Religious and supernatural forces were also strongly believed to be tied to disease: divine retribution might cause individual sickness, or a demonic force might cause epidemics. By the time of the Industrial Revolution, however, the leading theory held that bad air, or “miasma,” spread contagious diseases. This miasma was thought to emanate from rotting, dead organic matter, causing plague, cholera, and various other diseases. People were instructed to avoid things such as decaying vegetation, corpses, and manure.

The invention of the microscope altered the landscape of medical science. Bacteria and other microorganisms were discovered, which fueled centuries of speculation about a germ theory of disease. In the mid-19th century the Hungarian physician Ignaz Semmelweis noticed that puerperal fever could be drastically reduced simply by hand washing. Various publications reported that mortality rates could be reduced from the mid-20-percent range to around 2%. Curiously, these reports were completely ignored by the medical community. From this episode we get the term “Semmelweis reflex”: the tendency to reject new knowledge because it contradicts established norms.

The Triumph of the Germ Theory of Disease

Beginning in the 1860s, the work of Louis Pasteur finally proved the Germ Theory of Disease. Pasteur first began to study the validity of spontaneous generation, the popular idea that living things spontaneously emerge from nonliving matter. He conducted several clever experiments showing that microorganisms cause fermentation. In one experiment he showed that fermentation would not take place in a solution sterilized by heat. He correctly attributed this result to the absence of living microorganisms in the sterilized solution. Through a variety of experiments in fermentation he was able to prove that specific microbes can bring about specific chemical changes.

Louis Pasteur carrying out an experiment

Having gained some notoriety for his work on fermentation, in 1865 the French government asked Pasteur to study two diseases of silkworms that were devastating the French silk industry. He accepted the task, discovered again that microorganisms were the culprits, and saved the French silk industry. These experiments, along with his other work, proved that all living things must have parents.

Anthrax Cells
(Credit: U.S. Army Medical Research Institute of Infectious Diseases)

A decade later Pasteur began to study anthrax. His work on anthrax immunization proved once and for all the validity of the Germ Theory of Disease. In 1881, at a farm southeast of Paris, Pasteur conducted a public experiment to demonstrate his anthrax vaccine. He inoculated twenty-five sheep with weakened anthrax microbes. Two weeks later he injected those sheep, along with an unvaccinated group, with active anthrax. All twenty-five inoculated sheep survived; the rest all perished.

Along with Pasteur, Robert Koch is recognized as having placed the Germ Theory on sound scientific footing. He developed four basic criteria, known as Koch’s postulates, that establish a cause-and-effect relationship between a microbe and a disease. (It should be noted that viruses, identified later, cannot be grown in pure culture, so his postulates do not apply to them.) Koch’s postulates are:

  1. The microbe must be found in the diseased animal, but not the healthy animal
  2. The microbe must be isolated from the diseased animal and grown in a pure culture
  3. When the cultured microbe is introduced to a healthy animal, the animal develops the disease
  4. The microbe must be re-isolated from the experimentally infected animal

An Immediate Impact on Health

The Germ Theory of Disease allowed us to discover exactly what causes particular diseases. Once the theory was accepted and understood, it quickly led to the identification of many specific disease-causing microorganisms, which in turn led to vaccines and cures for many diseases. Preventive steps could be taken as well: personal and hospital hygiene improved, with the aim of reducing the transfer of microbes from one source to another. As a result, life expectancy began to rise at a rate unseen in human history. This is the power of science.

Continue reading more about the exciting history of science!

1911: Atomic Nucleus

The atomic nucleus is the tiny, dense center of the atom. Its surprising discovery was announced to the world by the physicist Ernest Rutherford at a meeting of the Manchester Literary and Philosophical Society in March 1911. Two months later he published a scientific paper reporting his findings.

The Rutherford Experiment

Nuclear Structure of the Atom

Prior to Rutherford’s discovery of the atomic nucleus, the prevailing atomic model was the “plum pudding” model devised by J. J. Thomson, who had discovered the electron in 1897. Thomson proposed an atom consisting of a diffuse cloud of positive charge with the negatively charged electrons scattered within it. However, the model lacked experimental support and suffered from other problems, such as the question of atomic stability and conflicts with observations of atomic behavior.

Rutherford, along with his colleagues at the University of Manchester, set out to investigate the structure of the atom using alpha particles, positively charged particles emitted in one form of radioactive decay. Rutherford’s team fired a beam of alpha particles at a thin sheet of gold foil. To create the beam, Rutherford placed radium inside a lead box with a small pinhole directed at the foil; the lead absorbed most of the alpha radiation except for the narrow beam that escaped through the pinhole. The foil was surrounded by a detector that could locate where the alpha particles ended up after passing through. Rutherford specifically used gold for its malleability: gold can be hammered into incredibly thin layers, which was necessary for the alpha beam to pass through.

Most of the alpha particles passed through the foil as if it were empty space. Occasionally, however, an alpha particle (about 1 in 20,000) was deflected straight back toward the source. This was a highly unexpected result that could not be explained by the plum pudding model. Rutherford concluded that most of the mass of an atom must be concentrated in a tiny, dense region, which he called the nucleus. He proposed that the atom had a central nucleus, where all the positive charge and most of the mass were concentrated, with the negatively charged electrons orbiting it, much as the planets orbit the Sun in the Solar System.

Rutherford’s Gold Foil Experiment
(Credit: Wikimedia Commons)

The Atomic Nucleus and Structure of the Atom

Rutherford’s model of the atom introduced the concept of the nucleus, a significant departure from the plum pudding model. The nucleus consists of positively charged protons and slightly heavier, electrically neutral neutrons. Although Rutherford did not discover the neutron himself, he predicted its existence, and his student James Chadwick discovered it in 1932. Orbiting the nucleus are the negatively charged electrons.

One of the most striking implications of this model is that atoms are mostly empty space. The nucleus has a diameter only about 1/100,000 that of the atom, meaning it fills roughly one part in 10^15 of the atom’s volume, with the electrons occupying the vast region around it. This poses the question: what fills the region between the nucleus and the orbiting electrons? The answer is empty space. This representation challenges our perception of solid matter and underscores the weird, mysterious, and counter-intuitive nature of the atomic realm.

Although Rutherford’s model of the atom was a significant improvement over the plum pudding model, it had limitations of its own. Critically, it could not explain the stability of the electrons’ orbits around the nucleus. According to classical physics, electrons moving in an orbit would radiate energy and quickly, within a tiny fraction of a second, spiral into the nucleus. Obviously, atoms are stable and do not collapse in this way.

The solution to this problem came from Niels Bohr and the quantum mechanical model of the atom. Bohr introduced the concept of quantized orbits, in which electrons can exist only at certain discrete energy levels without radiating energy. According to Bohr’s theory, an electron moving between orbits effectively disappears from one orbit and reappears in the other without traveling through the space between. This resolved much of the stability problem, while reinforcing the confusing and unusual ways that atoms behave at the subatomic scale.
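
To make “discrete energy levels” concrete, here is a short Python sketch using the standard textbook Bohr-model formula for hydrogen, E_n = -13.6 eV / n² (a general result of Bohr’s theory, not a calculation taken from this article):

```python
RYDBERG_EV = 13.6    # hydrogen ground-state binding energy, in eV
HC_EV_NM = 1239.84   # photon energy (eV) times wavelength (nm), ~ h*c

def energy(n: int) -> float:
    """Energy of the n-th allowed Bohr orbit of hydrogen, in eV."""
    return -RYDBERG_EV / n**2

# An electron may sit only at n = 1, 2, 3, ...; a jump between levels
# emits or absorbs a photon carrying exactly the energy difference.
delta_e = energy(3) - energy(2)      # n=3 -> n=2 releases ~1.89 eV
wavelength = HC_EV_NM / delta_e      # ~656 nm: the red Balmer-alpha line

print(f"Photon energy: {delta_e:.2f} eV, wavelength: {wavelength:.0f} nm")
```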

The discovery of the atomic nucleus had profound implications on atomic physics and led to the development of an entirely new field of research, nuclear physics.  It paved the way for the harnessing of nuclear energy, a technology that has the potential to alter the course of human civilization.  

Continue reading more about the exciting history of science!

3400 BCE: Writing

Language is one of the special characteristics that distinguish humans from other animals. It allows us to communicate complex concepts and ideas to other people, undoubtedly providing us with a remarkable evolutionary advantage over other species. Writing is a set of markings used to represent a language. It augments the benefits of language by making it permanent, allowing a message to travel farther and persist through time. This is why the invention of writing often marks the boundary between history and prehistory.

The History of Writing

Cuneiform Tablet

The history of writing systems traces a complicated journey, and much of its detail is lost to time. Written language emerged around 3400 BCE in Sumer, in southern Mesopotamia. The same industrious people also invented number systems and the wheel. Their script is known as cuneiform (from cuneus, Latin for “wedge”) and consisted of pressing wedge-shaped marks into clay tablets. It derived from their proto-writing system of using clay tokens of various shapes as counters for the goods that were produced and exchanged, essentially an accounting system.

The original tokens date back to around 8500 BCE, and the system evolved over the millennia through several stages of increasingly abstract symbolism. The earliest tokens had simple geometric shapes, such as cones, while later tokens took more elaborate forms, such as miniature tools and fruit. Each token shape represented, in one-to-one correspondence, a certain type of good: two cones meant two baskets of grain. No matter what language you spoke, if you understood that a cone token meant a basket of grain, you could account for the transaction. These tokens were most likely records of debt and were stored inside clay envelopes.

This brings us to the original purpose of the Sumerian writing system: accounting, the recording of amounts of grain, numbers of livestock, and various other goods. As the civilization grew in population, the number of debts increased. Since the tokens were stored inside envelopes, their contents could not be quickly known without opening the envelope and counting the tokens. Some accountants solved this problem by pressing marks into the top of the envelope representing the tokens inside. The transition from token to script had begun, and the world’s first writing system emerged. Eventually, clay tablets with markings representing the tokens completely replaced the token system, since the impression of a cone on a tablet carried the same information as the cone token itself.

Egyptian Hieroglyphs, Temple of Kom Ombo
(Credit: Wikimedia Commons)

It took around another 400 years for the Sumerian writing system to make the shift to phonetic signs representing speech. This was a move from a clear one-to-one representation to a more abstract form of representing sounds, and it created a big problem for a society inventing a writing system: everyone has to agree on the symbols or markings used to represent spoken sounds. This agreement took time. Pictorial notations, such as a picture of a bird or a tree, were easiest to agree upon. Eventually consensus was built and writing formats gradually became more formalized, arranging themselves into standardized rows and columns. The full development of the Sumerian writing system took at least 1,000 years.

The Chronological Development of Writing
(Credit: Wikimedia Commons)

It is not certain whether writing originated in a single geographic area (Sumer) and spread throughout the world by cultural diffusion, or whether it was invented independently in a few areas. The discovery of scripts in ancient Mesoamerica certainly seems to indicate that it was invented more than once. In the Old World it is very likely that only the Sumerians and, a few centuries later, the Egyptians independently invented their own writing systems; it is also possible that the Egyptians borrowed the idea from the Sumerians. Nobody knows for sure. The Egyptian writing system is called hieroglyphics (meaning “sacred engravings” in Greek) and is pictorial in form, with about 1,000 distinct characters. It is the most famous and well-known ancient form of writing.

Good Ideas Like Writing Spread, Now This Good Idea Spreads Other Good Ideas

Because of the difficulty of inventing a writing system, it is likely that all writing systems, with the exception of the Egyptian, Chinese, and Mesoamerican ones, were borrowed and adapted from the early Mesopotamian writing system. Writing systems also require a long time to fully develop, probably at least a thousand years. Other rudimentary writing systems may have been invented, but they were either absorbed, abandoned, or replaced owing to the rapid diffusion of the established systems.

A Brief Video on the Spread of Writing Systems Across the Globe

In the 16th century BCE the Canaanites simplified the Sumerian and Egyptian pictographic scripts into an alphabet of 22 consonants. All of our modern alphabets are derived from this script. Eventually the Greeks introduced characters for vowels, establishing the alphabet that Western civilization would use. Once writing spread across the globe, it became the means for spreading other good ideas: a fitting destiny for one of humanity’s most impactful ideas.

Continue reading more about the exciting history of science!

12000 BCE – 4000 BCE: Domestication of Plants

The human domestication of plants was the single most influential event in modern human history. It marks the transition from the nomadic lifestyle to the settled, urban lifestyle. Its impact is summed up by the agricultural revolution, which produced a tremendous spike in food production. That spike led to larger and larger populations and the birth of city-states, marking the dawn of civilization. The domestication of plants, coinciding with the domestication of animals, profoundly changed the course of humanity.

When and Where did the Domestication of Plants Happen?

Cereal Crops

Plant domestication is the alteration of wild plant species into crop plants by humans, a process that can be called artificial selection. The original techniques were likely stumbled upon by accident, and the path to agriculture was certainly a slow and gradual one. The earliest domestication of plants, followed by the transition to agriculture, is best thought of as an evolutionary process rather than an intentional discovery.

We have only fragmentary evidence of its beginnings, since it began thousands of years before writing was invented. The earliest evidence suggests that plant domestication began around 12000 BCE with cereal crops, in the area between the Tigris and Euphrates rivers in the Middle East. Other areas of the globe soon independently domesticated other crops: rice was domesticated in China and maize (corn) in the Americas by around 10000 BCE. Herbs such as coriander, vegetables such as sweet potatoes and lentils, and fruits such as figs and plums were also being domesticated by around the same time.

How Did it all Happen?

The process of plant domestication was complex, slow, and gradual. In a few places it happened independently, but this was a fairly rare occurrence: the most recent evidence suggests agriculture began independently in no more than ten places, though the exact number is still debated owing to incomplete and inconclusive evidence. Mostly it spread to other areas of the globe through cultural diffusion.

The road to the domestication of plants was long and winding, full of cliffs and dead ends. It involved centuries of trial and error and was subject to local climate, geography, and the available plant species. A few notable factors, however, seem to have been important in this evolutionary process.

  1. The decline of wild animal populations – By around 13000 BCE humans were becoming extremely proficient hunters and large game was beginning to thin out. This made hunting increasingly less rewarding and alternative food strategies increasingly more rewarding.
  2. An increasing abundance of wild edible plants due to a change in climate – Around 13000 BCE the Earth began warming, resulting in increased plant life. This made eating plants increasingly more rewarding and gave the peoples in locations with the highest proportions of these plants more opportunities to learn by trial and error.
  3. The cumulative development of food production technologies – In some areas edible plants were so abundant that people could abandon a nomadic way of life and establish permanent settlements. This provided the opportunity to develop food storage, tools, and production methods.
  4. Population growth led to new food production strategies – The abundance of wild plants led to a surge in populations, which in turn demanded new ways to feed people. This creates what is known as an auto-catalytic process: more food supports more people, and more people require still more food.

Completing the Process and Establishing a New Way of Life

Egyptian Agricultural Calendar

When humans began moving around less, they noticed changes in plant life much more readily. Some plants were evidently dropped on the trip back to camp, and it wasn’t long before people noticed new plants growing along these well-worn trails. Garbage dumps of food scraps likewise became breeding grounds for plant growth in the following seasons, as did the spots where seeds had been spat out.

Soon the connection was made between planting seeds and the growth of crops. Over time this process was refined and improved, and new species of plants were tested. Some species were more easily domesticated than others, and these were not distributed evenly across the world, which is why some societies invented agriculture and others didn’t. This new process of controlling nature to grow food soon allowed societies to grow populations so large that only agriculture could support them. Life started revolving around agriculture. These larger societies, with their greater populations, were able to conquer or assimilate their neighbors, spreading the domestication of plants through cultural diffusion. The age of the hunter-gatherer was ending and the rise of civilization had begun!

Continue reading more about the exciting history of science!

14000 BCE – 4000 BCE: Domestication of Animals

Domestication is the process of selective breeding for human use. The domestication of animals began with the now-lovable dog by at least around 14000 BCE, and possibly thousands of years earlier. As so often happens with much of prehistory, the archeological record is simply unclear as to the exact time and location of the dog’s domestication. It may have happened as early as 40000 BCE, and it may have happened several times independently. What is clear is that the domestication of the dog did happen, followed by the goat, pig, sheep, cattle, cat, chicken, horse, and a few other important and well-known animals.

The domestication of animals, along with the domestication of plants, played a key role in the agricultural revolution and in the beginnings of civilization. Aside from the dog, animal domestication happened slightly later than plant domestication, since enormous quantities of plant food were needed to feed the animals. Domesticated animals provided humans with several benefits of enormous value and were essential on humanity’s path toward urban civilization.

Painting Depicting Beasts of Burden
(Credit: Winnifred Neeler, Royal Ontario Museum)

An Increase in Food Production

Grazing Sheep and Cattle
(Credit: www.agupdate.com)

Prior to the domestication of animals, all food provided by animals had to be obtained by hunting. This changed after domestication: each of the domesticated animals could be used for meat in times of food scarcity or after an unsuccessful hunt. But providing meat was not their only addition to food production. Cattle, sheep, and goats also provided a steady supply of milk and other dairy products, and once farming became widespread, draft animals such as cattle, oxen, and water buffalo supplied an unprecedented addition of muscle power.

The increase in food production, from first domesticated plants and then animals, resulted in radical changes to the human condition. A sustainable and predictable source of food led to a rapid increase in population density. People were able to abandon their nomadic hunter-gatherer lifestyle and establish permanent settlements. The dawn of civilization was underway.

Additional Uses of Domesticated Animals

Egyptian Bronze Statue of a Cat, University of Pennsylvania Museum of Archaeology
(Credit: Mary Harrsch, Wikimedia Commons)

In addition to the increase in food production, domesticated animals provided a variety of further benefits to humans. Around 4000 BCE the horse was domesticated, allowing significant improvements in transportation. A person on horseback could cover double or more the distance in a day that a person on foot could. The horse was eventually applied to combat, conferring superiority in warfare on those civilizations able to use it successfully.

Many other animals also provided significant benefits to human civilizations. Today most people adore cats as cute and cuddly house companions, but in ancient Egypt cats were revered for their pest control, their ability to hunt venomous snakes, scorpions, and rodents; an unusually high number of statues and paintings were dedicated to cats in that culture. Hides of a variety of animals were used for clothing, storage, or shelter, and sheep were prized for wool that could be spun into clothing, rugs, and a variety of luxury goods.

A Rare Combination of Traits

Not all animals can be domesticated.  Of the world’s roughly 150 large, wild, terrestrial, herbivorous mammals – the ideal candidates for animal domestication – only 14 have been domesticated.  This indicates that there is a specific mix of traits an animal must possess in order to be successfully domesticated.  These traits are:

  1. An efficient diet – Herbivores are much more efficient to raise than carnivores. The conversion of feed biomass into the consumer’s biomass is typically only around 10 percent: to raise a 1,000 lb cow you have to grow 10,000 lbs of corn. Large carnivores would be extremely difficult and costly to domesticate because it would take 100,000 lbs of corn to produce the 10,000 lbs of herbivore needed to feed a 1,000 lb carnivore (see the sketch after this list). The herbivore must also not be a finicky eater.
  2. A quick growth rate – Some herbivores, such as elephants, take decades to reach their full adult size. Cattle, on the other hand, can reach 1,000 to 2,000 lbs by age three.
  3. A willingness to breed in captivity – Animals such as the cheetah refuse to breed in captivity. In the cheetah’s case this is due to a lengthy and elaborate courtship ritual that cannot take place in a cage.
  4. A friendly disposition towards humans – Large, vicious animals like the grizzly bear will instinctively maul humans, making it suicidal to try to domesticate them.
  5. A tendency to stay calm – Nervous species that fight or flee when they perceive danger are difficult to domesticate.
  6. A manageable social and herding structure – The ideal candidate lives in herds with a well-developed dominance hierarchy and overlapping home ranges. This rules out solitary animals, which are not instinctively submissive.
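
The sketch promised in point 1: a back-of-the-envelope Python version of the “10 percent rule” arithmetic. The 10 percent conversion efficiency is a rough ecological rule of thumb, not a precise figure.

```python
def feed_required(target_lbs: float, trophic_steps: int,
                  efficiency: float = 0.10) -> float:
    """Plant biomass (lbs) needed to raise target_lbs of an animal
    sitting trophic_steps levels above the plants in the food chain."""
    return target_lbs / efficiency**trophic_steps

# Herbivore: one step above the plants; carnivore: two steps above.
print(f"{feed_required(1_000, 1):,.0f} lbs of corn for a 1,000 lb cow")
print(f"{feed_required(1_000, 2):,.0f} lbs of corn for a 1,000 lb carnivore")
```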

Continue reading more about the exciting history of science!

1763: Bayes’ Theorem

Bayes’ theorem is a fundamental concept in probability theory.  It was formulated in 1763 by the English statistician and Presbyterian minister, Reverend Thomas Bayes.

History of Bayes Theorem

Thomas Bayes was born in London in 1702 and studied at the University of Edinburgh.  During his time at Edinburgh he was exposed to some of the leading mathematicians and philosophers of his time.  He was elected as a Fellow of the Royal Society where he may have served as a mediator of intellectual debates. He later returned to London to become a minister, but he continued to pursue an interest in mathematics, specifically in probability theory. 

Heading for Bayes’ Doctrine of Chances (1764)

Bayes wrote his theorem to address the question of how to revise beliefs in the light of new evidence. More interestingly, it appears he likely also wrote it as a mathematical means to defend Christianity and to counter an argument made by David Hume in his 1748 essay Of Miracles, from his book An Enquiry Concerning Human Understanding. In that essay, Hume made the case for dismissing miracles, such as the resurrection of Christ, on the grounds of probability: in effect he argued that a miracle (a violation of the laws of nature) was always more improbable than the possibility that the miracle was inaccurately reported. While there is no direct evidence that Bayes’s sole motivation was to refute Hume’s essay, there is good circumstantial evidence that this was at least part of it, given the details surrounding the events of his later life and the eventual publication of his work.

Whatever the real motivation for his work may have been, Bayes’s essay was published two years after his death, after his friend Richard Price brought it to the attention of the Royal Society, where it was read on December 23, 1763. It was published the following year in the Philosophical Transactions, the journal of the Royal Society, and as a separate offprint. The now-famous essay was titled An Essay towards solving a Problem in the Doctrine of Chances. It should be noted that in 1767 Price published a book titled Four Dissertations, in which he explicitly took on the work of Hume and challenged his probabilistic arguments in Of Miracles. He used Bayes’s results in an attempt to show that Hume failed to recognize the significance of multiple independent witnesses to a miracle, and that the accumulation of even imperfect evidence could overcome the statistical improbability of an event.

As sometimes happens in the history of science, the theorem was initially largely forgotten, until it was independently rediscovered by the brilliant French mathematician Pierre-Simon Laplace in 1774. The theorem describes the conditional probability of an event: the probability of a hypothesis given that some event has happened.

The Goal of Getting Closer and Closer to the Truth

Bayes’ Theorem involves beginning with an educated guess, called a prior probability, and then revising that prediction as new evidence comes in. As the new evidence is considered, the probability is updated, giving you the posterior probability. Bayes’ Theorem thus provides a useful way of thinking by approximation, getting closer and closer to the truth as we accumulate new and relevant evidence. This is an important point, because in nearly all situations we are working with incomplete information.
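
In standard modern notation, the theorem reads:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where P(H) is the prior probability of hypothesis H, P(E | H) is the probability of seeing evidence E if H is true, P(E) is the overall probability of the evidence, and P(H | E) is the posterior probability of H once E is taken into account.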

Formula of Bayes’ Theorem

A Bayesian way of thinking requires us to constantly update our probabilities as new evidence becomes available. This revision does not happen just once; it can happen continually. We may never know the truth with 100% certainty (we can never be 100% certain the sun will rise tomorrow, for example), but with Bayesian thinking we can be 99.999999% sure, which tells us we are getting very close to the truth and gives us a high degree of confidence in the proposition. Bayes’ theorem helped revolutionize probability theory by formalizing the idea of conditional probability, probability conditioned on evidence. If you have an extraordinary hypothesis, it should require extraordinary evidence to convince you that it’s true.
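
As a concrete illustration of this updating, here is a minimal Python sketch for a diagnostic-test scenario. The disease rate and test accuracies are invented numbers chosen only to show the mechanics:

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical: 1% of people have the disease; the test catches 99% of
# true cases but also gives false positives 5% of the time.
posterior = update(prior=0.01, p_e_given_h=0.99, p_e_given_not_h=0.05)
print(f"After one positive test:  {posterior:.3f}")   # ~0.167

# The posterior becomes the prior for the next (independent) test --
# this is the continual revision described above.
posterior = update(posterior, 0.99, 0.05)
print(f"After two positive tests: {posterior:.3f}")   # ~0.798
```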

Practical Uses of Bayes Theorem

Bayes’ Theorem has relevance in any avenue of life because it is a form of probabilistic thinking, and everything you do, and everything that happens to you, is probabilistic in nature. The theorem’s flexibility and versatility make it possible to reach both life and business decisions under conditions of uncertainty. Here are a few examples of Bayesian methods used in the real world. In biology they are used for medical diagnosis, genetics, and modeling the spread of infectious diseases. In computer science they are used in speech recognition, search engine algorithms, spam filtering, and weather forecasting. The practical applications are almost limitless. Ultimately, it is a learning process, with more observations and evidence leading to greater certainty. Let’s take a look at one interesting application of Bayes’ Theorem in a real-world setting.

The theorem was used to help crack the Nazi Enigma code during WWII. Enigma was an encryption machine the Germans used to send secure messages, and its effectiveness lay in a cipher system that changed daily. Alan Turing, the brilliant British mathematician, used Bayes’ Theorem to whittle down an almost infinite number of possible decryptions by focusing on the messages most likely to be transmitted. For example, messages from German U-boats were most likely to contain information about the weather or Allied shipping. These strong priors greatly reduced the number of possible decryptions to be checked and sped up the time to crack the code. Eventually he and his staff built a machine known as the Bombe, which ultimately cracked the German Enigma code. The use of Bayes’ theorem in cracking Enigma was a monumental breakthrough for the Allies, as it gave them access to critical information about German military operations, a significant strategic advantage that played a key role in their eventual victory.

Bayes’ theorem continues to impact statistics and society to this day. In recognition of Bayes’ contribution to the development of probability theory, the journal Bayesian Analysis was established in 2006 as a peer-reviewed academic journal dedicated to Bayesian statistics. Additionally, the Thomas Bayes Award is awarded every two years by the Royal Statistical Society to recognize outstanding contributions to the field of Bayesian statistics. The continuing relevance of Bayes’ theorem is a testament to the enduring legacy of Thomas Bayes and his contribution to the field of probability theory.

Continue reading more about the exciting history of science!

1905: Special Relativity

Special Relativity is a theory proposed by Albert Einstein that attempts to explain the relationship between space, time and motion.  Einstein outlined his proposal in a scientific paper titled “On the Electrodynamics of Moving Bodies” published on September 26th, 1905. 

Approaching the Limits of Classical Physics

Einstein’s famous E=mc^2 equation shows the equivalence of mass and energy

Isaac Newton’s laws of motion were a wildly successful explanation of the physics of motion and the dominant force in the scientific understanding of physics for over two centuries. By the turn of the 20th century, however, some critical shortcomings in the explanatory power of Newton’s famous laws were becoming evident. One of the major phenomena classical physics couldn’t explain was the behavior of light. Classical physics assumes that space and time are absolute, and that the speed of light simply adds to or subtracts from the speed of its source. However, experimental evidence, particularly the Michelson-Morley experiment of 1887, which demonstrated that the measured speed of light in a vacuum is independent of the Earth’s motion around the Sun, suggested that this is not the case. The experiment indicated that the speed of light is constant for all observers, regardless of the speed at which they are traveling.

Working out of a Swiss patent office, the 26-year-old Einstein solved these problems with his Theory of Special Relativity and, later, his Theory of General Relativity. Special relativity describes motion at a constant speed in a straight line (a special case of motion, hence the name), while general relativity describes accelerating motion. Interestingly, his famous equation E=mc^2 did not actually appear in the original paper but was added essentially as an addendum a few months later.

Einstein’s Theory of Special Relativity is based on two principles:

  1. The laws of physics are the same in all inertial frames of reference
  2. The speed of light is the same in all inertial frames of reference (186,000 miles/second)

This implies that an observer at rest measures light traveling at 186,000 miles/second, while another observer traveling at 180,000 miles/second relative to the first also measures light traveling at 186,000 miles/second. From these two principles several startling conclusions can be worked out (see the numerical sketch after this list). These conclusions are, of course, relative to an observer at “rest.”

  1. Time slows as speed increases
  2. Mass increases as speed increases
  3. Mass and energy are equivalent, as exemplified by Einstein’s famous equation E=mc^2
  4. The length of an object contracts as speed increases
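
The promised sketch: the effects above are all governed by the Lorentz factor, the standard quantity from special relativity. The Python below reuses the 180,000 miles/second speed from the example in the text; the clock and object values are arbitrary illustrations.

```python
# Lorentz factor: gamma = 1 / sqrt(1 - v^2 / c^2)
import math

C = 186_000.0  # speed of light in miles per second, as quoted in the text

def gamma(v: float) -> float:
    """Lorentz factor for an object moving at speed v (same units as C)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 180_000.0        # the fast-moving observer from the example above
g = gamma(v)         # ~3.97

print(f"gamma = {g:.2f}")
# Time dilation: one second aboard the moving ship stretches to ~3.97
# seconds as measured by the observer at rest.
print(f"1 s on the ship = {g:.2f} s at rest")
# Length contraction: a 100 ft object shrinks along its direction of
# motion as seen from rest.
print(f"100 ft contracts to {100 / g:.1f} ft")
```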

Testing the Theory and Acceptance

As unlikely as these conclusions seem, they have been verified repeatedly in many experiments. They appear counter-intuitive because the effects only become perceptible at near-light speeds. Perhaps this is why acceptance of special relativity was not immediate, and the theory met with harsh criticism from many scientists: the idea that measurements of time and length could change with velocity conflicted with everyday experience. However, the aesthetic beauty of its mathematics attracted early interest among some scientists, prompting further inquiry.

The tide of acceptance began to turn as the predictive power of relativity was confirmed by experiment. The most famous example was the 1919 solar eclipse expedition, led by Sir Arthur Eddington, in which starlight was shown to bend around the mass of the sun. This experiment tested general relativity, but its success helped bolster the acceptance of its counterpart, special relativity.

Experimental confirmation also came from the field of atomic physics, beginning with the nuclear reactions studied after James Chadwick discovered the neutron in 1932. Subsequent experiments involving nuclear fission and fusion continued to confirm the validity of E=mc^2.

Another famous validation of special relativity was the 1971 time dilation experiment using high-precision atomic clocks flown on commercial airliners. By comparing the time measured by the clocks on the airplanes with that of clocks on the ground, experimenters confirmed that the flown clocks diverged from the stationary ones by the amounts relativity predicts. Further experiments using particle accelerators and high-energy physics have provided additional validation by demonstrating the mass increase and time dilation of particles moving at relativistic speeds.

By the beginning of the 21st century special relativity had been fully accepted and integrated into the fabric of modern physics. It plays a crucial role in many technologies, including GPS, particle accelerators, and nuclear energy.

Continue reading more about the exciting history of science!

1977: Voyager Program

The Voyager Program represents an ambitious undertaking in exploring the boundaries of the Solar System, and beyond.  The program consists of two spacecraft, Voyager 1 and Voyager 2, launched by NASA in 1977 in order to probe the four outer planets of the Solar System as well as their moons, rings, and magnetospheres.

Background and Objectives of the Voyager Space Program

The primary objective of the Voyager program was a comprehensive study of the outer reaches of our Solar System. The mission was made possible by a bit of good luck: a rare geometric alignment of the four outer planets, occurring roughly once every 175 years, allowed a single mission, known as the Grand Tour, to fly by all four planets with relative ease.

Originally, the four-planet mission was deemed too expensive and difficult, and the program was funded only to conduct studies of Jupiter and Saturn and their moons. It was known, however, that a fly-by of all four planets was possible. In preparation for the mission, over 10,000 trajectories were studied before two were selected that allowed close fly-bys of Jupiter and Saturn; the trajectory chosen for Voyager 2 also preserved the option of continuing on to Uranus and Neptune.

The Different Instruments of the Voyager Spacecraft
(Credit: Nasa.gov)

The two Voyager spacecraft are identical, each equipped with instruments for a variety of experiments, including television cameras, infrared and ultraviolet sensors, magnetometers, and plasma detectors, among others.

In addition to its instruments, each Voyager spacecraft carries an interesting extra item called the Golden Record, a 12-inch gold-plated copper disk designed to be playable on a standard phonograph turntable. It was conceived as a kind of time capsule, intended to communicate the story of humanity to any extraterrestrial civilization that might come across it. The Golden Record contains a variety of sounds and images intended to portray the diversity of culture on Earth. This includes:

  • greetings in 55 languages, including both common and lesser-known languages
  • a collection of music from different cultures and eras, including Bach, Beethoven, Peruvian panpipes and drums, Australian Aboriginal songs, and more
  • a variety of natural sounds, such as birds, wind, thunder, and ocean waves, along with human-made sounds, such as laughter and a baby’s cry
  • various images, such as human anatomy and DNA, plants, animals, and landscapes, and the Solar System with its planets
  • a “Sounds of Earth” interstellar message, featuring a message from President Jimmy Carter and a spoken introduction by Carl Sagan

The Golden Record from the Voyager Space Mission

A committee chaired by the astronomer Carl Sagan was responsible for selecting the content put on the record.  The value of the Golden Record is, in Sagan’s own words:

“Billions of years from now our sun, then a distended red giant star, will have reduced Earth to a charred cinder. But the Voyager record will still be largely intact, in some other remote region of the Milky Way galaxy, preserving a murmur of an ancient civilization that once flourished — perhaps before moving on to greater deeds and other worlds — on the distant planet Earth.”

Carl Sagan

The Launch, Voyage and Discoveries

Voyager 2 was launched on August 20, 1977 from the NASA Kennedy Space Center at Cape Canaveral, Florida, sixteen days before Voyager 1. The year 1977 offered the rare alignment of Jupiter, Saturn, Uranus, and Neptune that allowed Voyager 2 to fly by all four planets. Voyager 1 was on a slightly different trajectory and flew by only Jupiter, Saturn, and Saturn’s largest moon, Titan.

Voyager Space Probe

Voyager 2’s fly-bys of Jupiter and Saturn produced some important discoveries, providing detailed, close-up images of both planets and their moons. While much was learned about each planet, one discovery for each is worth noting. At Jupiter, Voyager 2 revealed new information about the Great Red Spot, including its size and structure (a complex storm with a diameter greater than Earth’s!), its dynamics, and its interaction with the surrounding atmosphere. At Saturn, Voyager 2 revealed new information about the rings, including their structure (close-up images showed the rings to be made up of countless individual particles), their dynamics, and various other features.

After the success of the Jupiter and Saturn fly-bys, NASA extended funding for Voyager 2 to continue on to Uranus and finally Neptune. Today both spacecraft are leaving the Solar System while continuing to transmit data back to Earth.

Continue reading more about the exciting history of science!