1866: Laws of Inheritance

The laws of inheritance are a set of fundamental principles that govern the transmission of genetic traits from one generation to the next. Their discovery and understanding have changed our view of life while having a profound impact on a diverse range of fields such as medicine, agriculture, and biotechnology. The ideas behind the laws of inheritance, the theory of evolution by natural selection, and population genetics have formed what scientists call the modern synthesis, a cornerstone of modern biology.

Gregor Mendel and the Pea Plant Experiments

For most of history, people's understanding of inheritance came from anecdotal evidence and observations of certain traits being passed down from parents to offspring. It wasn't until the mid-19th century that the Augustinian monk Gregor Mendel conducted his now-famous experiments with pea plants and established the principles of heredity. Prior to Mendel's experiments, the prevailing theory of inheritance suggested a blending of traits and characteristics from both parents in their offspring.

Gregor Mendel's pea plant experiment
(Credit: Encyclopaedia Britannica)

In 1866 Mendel published Experiments in Plant Hybridization, which explained his pea plant experiments and the resulting laws of inheritance. His work was first read to the Natural History Society of Brünn and then published in the Proceedings of the Natural History Society of Brünn.

During the years 1856 to 1863, Mendel cultivated over 28,000 plants and tested for seven specific traits:

  • Pea shape (round or wrinkled)
  • Pea color (green or yellow)
  • Pod shape (constricted or inflated)
  • Pod color (green or yellow)
  • Flower color (purple or white)
  • Plant size (tall or dwarf)
  • Position of flowers (axial or terminal)

The results of his careful experimentation allowed Mendel to formulate some general laws of inheritance. His three laws of inheritance are:

  • Law of Segregation – allele pairs (alternative forms of a gene) segregate during gamete (sex cell: sperm or egg) formation. Stated differently: each organism inherits two alleles for each trait, but only one of these alleles, selected at random, is passed on when each gamete is produced.
  • Law of Independent Assortment – allele pairs separate independently during the formation of gametes.
  • Law of Dominance – when two alleles of a pair are different, one will be dominant while the other will be recessive.
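The first two laws can be illustrated with a short simulation (a sketch with illustrative function names, not Mendel's own method): crossing two heterozygous (Aa) plants, where A is dominant, yields the classic 3:1 ratio of dominant to recessive phenotypes that Mendel counted in his pea crosses.

```python
import random

def gamete(parent):
    # Law of Segregation: each gamete randomly receives exactly one
    # of the parent's two alleles.
    return random.choice(parent)

def cross(p1, p2, n=10_000):
    # Fraction of offspring showing the dominant phenotype
    # (i.e., carrying at least one 'A' allele).
    dominant = sum(1 for _ in range(n)
                   if 'A' in gamete(p1) + gamete(p2))
    return dominant / n

random.seed(1)
print(cross('Aa', 'Aa'))  # ≈ 0.75: three dominant phenotypes for every recessive
```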

Mendel's laws of inheritance suggested a particulate inheritance of traits, in which traits are passed from one generation to the next in discrete packets. As already noted, this differed from the most popular theory at the time, which held that characteristics blend from one generation to the next.

Unfortunately for the progress of science, Mendel's work went largely unnoticed and was forgotten during his lifetime. This was for a few reasons. First, he lived in relative isolation at the Augustinian St. Thomas's Abbey, in what is now the Czech Republic, and did not have a network of scientific colleagues. He published his work in a relatively obscure scientific journal and did not have the means to promote his findings. His work, in a sense, was also ahead of its time: the scientific community was focused on other areas of study during his lifetime, and the concept of discrete hereditary units (now called genes) did not fit the prevailing scientific paradigm. Lastly, Mendel did little follow-up work and soon shifted his attention to administrative and educational duties within the abbey. It wasn't until the turn of the 20th century that his work was rediscovered and popularized independently by three scientists – Hugo de Vries, Carl Correns, and Erich von Tschermak.

A Journey into Genetics

Mendel’s laws of inheritance laid the groundwork for the 20th century field of genetics.  The field of genetics is the study of heredity that incorporates the structure and function of genes as the mechanism of biological inheritance.  

The emergence of molecular genetics began to take shape after it was discovered that the mechanism of hereditary transfer was contained in nucleic acids.  The race was on to discover the mechanism by which nucleic acids transferred the hereditary material.  The final breakthrough culminated with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, as it provided the definitive explanation for how genetic information is encoded and transmitted within living organisms.  

The field of genetics continues to advance into the 21st century at a blistering pace. In addition to unraveling the fundamental principles of life, scientists are now able to exploit the mechanics of genes and are learning novel ways to edit them to cure disease. As of late 2023, the United States Food and Drug Administration (FDA) and medical regulators in the United Kingdom have approved the world's first gene-editing treatment for sickle cell disease, using a gene-editing tool called CRISPR. CRISPR technology has the potential to revolutionize genetics and related fields through its precise genome-editing capabilities, potentially leading to yet another milestone in the history of science!

Continue reading more about the exciting history of science!

1859: On the Origin of Species

In 1859 Charles Darwin published his landmark book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. This book marks the beginning of modern evolutionary biology. Darwin used the phrase "descent with modification" to describe the process of evolution.

The Evolution of the Idea of Evolution

Prior to Darwin, evolution was a subject of much speculation, uncertainty, and debate. The idea that living organisms change over time was controversial for a variety of reasons. The idea is not self-evident because of the immensely long timescales over which evolution operates. More controversial was its direct contradiction of the teachings of Christianity, where the Bible claims that God created species in their current form. However, by the 18th century some evidence began turning up in support of the idea that organisms have changed over time.

The Swedish botanist Carl Linnaeus developed a system for naming and classifying living organisms. His work provided illuminating insights and a more comprehensive understanding of the relationships between different species.

The Evolutionary Tree of Life
(Credit: Leonard Eisenberg)

In the early 19th century, the French naturalist Jean-Baptiste Lamarck proposed an early version of a theory of evolution. During this time the topic of evolution was of interest and debate among some naturalists. His theory centered on two main principles: the law of use and disuse, and the inheritance of acquired characteristics. Lamarck's theory, although no longer considered a valid mechanism for evolutionary change, was influential in shaping early evolutionary thought. His ideas sparked further interest, discussion, and debate on the topic.

The Development of Darwin’s Idea

Charles Darwin developed his theory of evolution by natural selection over the course of many years through careful observation, research, and study. Darwin had his first insights into evolution on his famous voyage aboard the HMS Beagle from 1831 to 1836. During this voyage he collected a huge number of specimens and made many important observations, especially in South America and the Galapagos Islands.

This map traces the route of Charles Darwin's voyage on the HMS Beagle from 1831-1836.

The Galapagos Islands provided the location for some of Darwin's most crucial observations. While traveling to the different islands he noticed slight variations among similar species from one island to another. For example, the shapes of the beaks of finches differed depending on their diet and environment. After returning home from his voyage, Darwin continued to reflect and do research while further developing his theory.

Darwin was also heavily influenced by Thomas Malthus, a British economist who wrote about population growth. Malthus proposed that populations tend to grow exponentially, while food production grows at a much slower rate, leading to competition and a struggle for survival. Darwin expanded on this idea, leading to his concept of natural selection. He spent over two decades refining his ideas before finally publishing his theory in On the Origin of Species in 1859.
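Malthus's argument is quantitative at heart: a population multiplying by a fixed factor each generation must eventually outstrip a food supply that grows by only a fixed amount. A minimal sketch with illustrative starting numbers (the function name and values are purely for demonstration):

```python
def generations_until_shortfall(population, food, factor=2, increment=50):
    # Returns the first generation at which a geometrically growing
    # population exceeds an arithmetically growing food supply,
    # along with the final population and food values.
    generation = 0
    while population <= food:
        generation += 1
        population *= factor   # population grows geometrically
        food += increment      # food supply grows arithmetically
    return generation, population, food

print(generations_until_shortfall(10, 100))  # → (6, 640, 400)
```

However the starting numbers are chosen, the geometric curve always overtakes the arithmetic one eventually; that inevitable shortfall is the struggle for survival Darwin seized upon.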

The Origin of Species: Darwin’s Theory of Evolution by Natural Selection

On the Origin of Species by Means of Natural Selection

Darwin's book begins with the topic of variation under domestication, or what some call artificial selection. While the natural environment or habitat of a species selects which traits most help an organism survive and reproduce, humans have the ability to select for specific traits in domesticated plants and animals. Gradual changes, such as selecting for speed in an English racehorse, accumulate over time and produce noticeable differences from older ancestors. Darwin makes clear early on that heredity is key to explaining variation.

The book then progresses into a discussion of natural selection. He discusses the struggle for existence and the limits to population increase as forces pressuring species, crediting Thomas Malthus for his contributions to this idea. He explains his main principle of evolution by natural selection, which can be summed up as the cumulative adaptation to environmental pressures. Very gradually, over long stretches of time, species become more and more adapted to the environment in which they live.

Darwin provides a preponderance of evidence for his theory. He uses Charles Lyell's work on geology to show similarities between uniformitarianism and biological evolution. He links classification schemes to his idea of descent with modification. He discusses "rudimentary organs," or vestiges – leftover organs that no longer have any function. He shows how the geographical distribution of plants and animals falls into a pattern that supports his theory. It is because of his careful collection of evidence that his work became so popular and his ideas so quickly accepted. To this day, Darwin's theory of evolution is considered the cornerstone of biology and essential to understanding the diversity of life on this planet.


1830: Principles of Geology

Principles of Geology was a groundbreaking work that promoted and popularized James Hutton’s concept of uniformitarianism.  It was published by Charles Lyell in three volumes from 1830 to 1833. 

The State of Geology Prior to Lyell

Principles of Geology

Prior to the work of Charles Lyell, the field of geology was still in its infancy. The field as a scientific discipline for the most part lacked a systematic approach. The scientific method of observation and deduction was rarely applied. In addition to the lack of structure, it was heavily influenced by religious interpretations of the Bible and various speculative theories lacking significant evidence to back them.

There were, however, a few notable ideas. In the early 19th century, catastrophism was the leading geological explanation of how the Earth's features were formed. Catastrophism is the idea that sudden, quick, and violent events shaped the Earth's features. It is easy to see how catastrophism can be aligned with religious narratives, such as the biblical story of Noah's flood. Many people, including a few scientists of the day, also believed that the Earth was only a few thousand years old. As with catastrophism, some of this belief was based on religious texts.

A major reason for many of these speculative theories was that the scientific discipline of geology was just beginning and did not have well-established principles or methods. There was a poor understanding of geological formations and how to interpret them. The same went for fossils. The concept of extinction was new and not even widely accepted, so the significance of fossils in reconstructing Earth's history was not fully recognized.

Into this mix of confusion and speculative ideas came the concept of uniformitarianism. Uniformitarianism is the idea that the physical features of the Earth were transformed by slow, gradual forces, such as erosion and sedimentation, which are still at work today. This idea was first championed by James Hutton, but Charles Lyell was soon to take the baton and run with it.

Principles of Geology

Lyell's Principles of Geology had a profound impact on the science of geology. The work provided a framework for understanding the Earth's past based on observable natural processes. One of its more controversial ideas at the time was its strong argument for the antiquity of the Earth. This idea was met with particularly strong resistance among religious groups of the day; however, the accumulation of scientific evidence eventually confirmed Lyell's assertions.

In volume one of Principles of Geology, Lyell offers evidence and lays out his argument for uniformitarianism. In volume two, Lyell extends this principle to organic processes. The third volume is largely a synthesis of geology, in which he defines four periods of the Tertiary. Here are a few summarized points, taken from all three volumes.

  • Uniformitarianism: Lyell argued that the geological processes observed in the present are the same as those that have operated throughout Earth’s history.
  • Gradualism: Lyell argued that geological change occurs gradually, over long periods of time rather than by sudden and dramatic events.
  • Stratigraphy: Lyell recognized the importance of studying rock layers or strata to understand the sequence of events in Earth’s history.
  • Geological Time Scale: Lyell recognized that Earth’s history is extremely long and he divided it into distinct periods based on the fossils found in rock layers.
  • Erosion and Uplift: Lyell explained how the geological processes of erosion and uplift can gradually shape the surface of the Earth. He highlighted the role of natural forces, such as wind, water, and ice, in shaping various features of the Earth, such as wearing down mountains and carving valleys.
  • Volcanism and Earthquakes: Lyell examined the evidence for volcanoes and earthquakes and provided explanations for their occurrence.

Impact on Charles Darwin

Principles of Geology volume two was one of the few books that Charles Darwin took with him on his famous HMS Beagle voyage, which lasted from 1831 to 1836. Reading this book seeded ideas in Darwin's mind that eventually led to his theory of evolution by natural selection. The motto of the book was "the present is the key to the past," and Darwin took this idea and naturally extended it to biology. The concept of deep time, that the Earth was many millions of years old, had a profound impact on Darwin's thinking. Gradual changes over long periods of time later became central to his theory of evolution. It wasn't only Lyell's ideas that influenced Darwin, but also his behavior and approach to science. Lyell's emphasis on careful observation and reliance on evidence also shaped Darwin's scientific approach.


1850s: Laws of Thermodynamics

Laws of Thermodynamics

The laws of thermodynamics are some of the most fundamental laws of physics. They describe the way matter and energy move through the universe. So what is thermodynamics? Thermodynamics is the study of heat (thermo) and motion (dynamics), and their relationship to energy. These laws were not discovered in a single stroke of genius by one individual. Instead, the accumulated knowledge of heat, motion, and energy that led to the discovery and formulation of these fundamental laws built up gradually and was the result of the work of many individuals. Put another way, the laws of thermodynamics emerged over time as science progressed through the 18th and 19th centuries. Today there are four laws of thermodynamics – the first, second, third, and zeroth laws.

Discovering the Nature of Heat

People have known how to make things hot since they first tamed fire, but for most of history the true nature of heat was not understood. The concept of heat is a fundamental aspect of understanding how the universe works. A serious investigation of the nature of heat began during the Industrial Revolution. The massive economic gains of the revolution provided the impetus for the discovery of these laws, with steam power being the hot topic of the day. People wanted to understand how steam could drive an engine and how to improve its efficiency.

Count Rumford supervising a cannon boring experiment
(Credit: Sheila Terry)

The breakthrough came from Benjamin Thompson, also known as Count Rumford. While Count Rumford did not directly formulate the laws of thermodynamics, he provided the experimental data that formed the basis for the laws. He did this through a series of experiments involving the boring of cannons. He observed that a cannon's temperature would rise as it was bored, and that this rise in temperature was proportional to the amount of work done in the boring process. He also observed that heat could be generated indefinitely, which contradicted the competing caloric theory of heat.

Despite Rumford's experiments, the caloric theory survived for nearly another half century, until the German physician Julius Mayer became the next person to relate mechanical work to heat quantitatively. Mayer's work did not receive much attention. Shortly after, the British scientist James Joule performed additional experiments on the generation of heat by friction, electricity, and magnetism. The unit of energy, the joule, is named in honor of his work. The eventual realization that heat is a form of energy was a critical step in the development of the laws of thermodynamics.

“Nothing in life is certain except death, taxes and the second law of thermodynamics.” – Seth Lloyd

Establishing the Laws of Thermodynamics

In 1824 a French military engineer named Sadi Carnot expressed some of the ideas that would become the second law. This contribution came from his publication Reflections on the Motive Power of Fire, in which he analyzed the operation of idealized heat engines and introduced the concept of the Carnot cycle.
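Carnot's idealized engine implies a hard ceiling on efficiency, today written as η = 1 − T_cold/T_hot with temperatures in kelvin (the modern formula postdates his essay). A quick illustration:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    # Maximum fraction of input heat that any engine operating between
    # these two temperatures (in kelvin) can convert into work.
    return 1.0 - t_cold_k / t_hot_k

# A boiler at 100 °C (373 K) rejecting heat to surroundings at 25 °C (298 K):
print(round(carnot_efficiency(373.0, 298.0), 3))  # → 0.201, about 20%
```

No real engine reaches this bound, and early steam engines fell far below it, which is why improving efficiency was such a pressing practical question.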

It took a few decades for Carnot's insights into the nature of heat to be formalized as the laws of thermodynamics. In the 1850s, Rudolf Clausius and William Thomson fully formulated what became known as the first two laws. It was Clausius who introduced the concept of entropy. The third law was formulated in the first decade of the 20th century by Walther Nernst. The zeroth law came last and was formulated by Ralph Fowler in the 1920s.

The zeroth law was discovered after the first three but named the zeroth law as it provides a frame of reference for the first three laws.  The laws of thermodynamics are as follows. 

  1. Zeroth Law – If two thermodynamic systems are each in thermal equilibrium with a third one, they are in thermal equilibrium with each other.
  2. First Law – This is also known as the law of conservation of energy.  It states that in a closed system the total energy remains the same and can only be converted from one form to another.  Therefore energy cannot be created or destroyed.
  3. Second Law – The entropy of an isolated system not in equilibrium will always increase over time, approaching a maximum value at equilibrium.  The second law dictates the arrow of time.  
  4. Third Law – As temperature approaches absolute zero, the entropy of a system approaches a constant value.

The formulation of the laws of thermodynamics was a pivotal moment in scientific history. It revolutionized our understanding of heat and energy, and also laid the groundwork for many technological advancements from the Industrial Revolution to the present day.


1834: The Electric Motor

An early electric motor

The invention of the electric motor marked a pivotal moment in engineering history. An electric motor is a device that uses electricity to create a mechanical force. There were earlier prototypes of electric motors, but they were very weak and more of a spectacle than working motors capable of producing useful work.

The invention of the electric motor can be said to have first occurred in May 1834, by Moritz Jacobi, although later that year Thomas Davenport independently created an electric motor as well. The Jacobi motor was capable of lifting weights of around eleven pounds at a speed of one foot per second – roughly fifteen watts of mechanical power.
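The quoted figure can be sanity-checked with a quick unit conversion, since mechanical power is force times speed:

```python
LBF_TO_N = 4.44822   # newtons per pound-force
FT_TO_M = 0.3048     # metres per foot

force_n = 11 * LBF_TO_N        # lifting an eleven-pound weight
speed_m_s = 1 * FT_TO_M        # at one foot per second
power_w = force_n * speed_m_s  # power (W) = force (N) x speed (m/s)

print(round(power_w, 1))  # → 14.9, consistent with "roughly fifteen watts"
```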

Early Contributors and Origins of the Electric Motor

Several inventions and discoveries were essential precursors to building an electric motor. As with most modern technological inventions, a synthesis of many ideas, developed by different people over a period of time, was required. The roots can be traced back to the 18th and 19th centuries, when scientists were beginning to understand and unlock the power of electricity.

The history of the invention of the electric motor began with the discovery of electricity itself, thanks to experiments such as Benjamin Franklin’s famous kite experiment.  Once this phenomenon was discovered the next steps required a knowledge of storing electricity and then harnessing it to create motion. 

The battery, invented by Alessandro Volta in 1799, allowed for the storage of electrical energy and provided a continuous electric current. Then in 1820 Hans Christian Ørsted observed for the first time a mechanical movement caused by an electric current. During a lecture he noticed that a compass needle was deflected from north when an electric current was nearby. This was an important step in confirming the relationship between electricity and magnetism. Later that year in France, André-Marie Ampère showed that the effect could be made stronger by coiling a wire. In 1825 William Sturgeon invented the first electromagnet by wrapping a coil of wire around an iron core.

These experiments and discoveries, taken together, showed that electricity could be used to produce magnetism. Then, in 1831, Michael Faraday showed the reverse was also true – that magnetism could be used to produce electricity. Faraday's work on electromagnetic induction in the early 1830s truly laid the foundation for the creation of electric motors. Building on Faraday's work in the 1830s was the American blacksmith and inventor Thomas Davenport. He created a small electric motor by using a battery, a magnet, and a wire coil. His creation was one of the earliest practical electric motors and demonstrated the potential for using electricity as a source of mechanical power.

An original Alternating Current (AC) Tesla Induction Motor, on display at the British Museum in London

Thanks to these important discoveries, electric motors began being invented all over the world. Soon other scientists and inventors made further advancements in electric motor technology. Each motor is designed and works differently, but they all use the principles of electromagnetism and the power of the electromagnetic field. By the 1870s and 1880s, practical and efficient electric motors began to be widely developed and used. One of the most notable inventors of this period was Nikola Tesla, who made significant contributions to the development of alternating current (AC) motors. In 1888, he patented a design for a polyphase AC induction motor, which utilized two or more alternating currents out of phase with each other. This design allowed for efficient and reliable conversion of electrical energy into mechanical energy. Tesla's work revolutionized the field of electrical engineering and played a crucial role in the widespread adoption of electric motors.

Impact on Society

The invention of the electric motor and its subsequent advancements have had an enormous impact on modern society. Transportation and industrialization have been two of the most affected areas. The electric motor played a crucial role in the Industrial Revolution, powering machinery and enabling mass production of goods. It eventually replaced steam engines as the dominant form of mechanical power in transportation, paving the way for vehicles such as cars, buses, and trains.

More recently, electric motors have found their way into an assortment of household appliances that have become essential to modern life. They power refrigerators, washing machines, dishwashers, air conditioners, and many other devices. They are indispensable in renewable energy systems such as wind turbines and hydroelectric generators. Electric motors are at the core of automation and robotics, serving as essential components in conveyor belts, assembly-line systems, and other industrial automation applications. The electric motor has had a transformative effect on society, and to this day continues to drive technological advancements.


1799: The Battery

The battery has revolutionized the way we live, producing a reliable and portable power source for a wide range of products.  The concept of batteries has a rich history that spans centuries, possibly even millennia.  As far back as the 1st century AD, civilizations may have been experimenting with battery concepts, with artifacts discovered around Baghdad that resemble battery-like devices.  However, the true purpose of these devices remains controversial.

Alessandro Volta and the Birth of the Battery

A Voltaic Pile

At the turn of the 19th century, electricity was becoming an increasingly popular topic of study. People were finding various ways to produce or store electric charge, but there was as yet no way to produce a continuous flow of electricity.

In 1799 the Italian physicist Alessandro Volta solved this problem, first reporting his findings in a letter to the Royal Society in 1800. Volta stacked zinc, copper, and brine-soaked paper in layer after layer; each zinc-copper-paper unit is sometimes referred to as a voltaic cell. Attaching a wire to both ends produced an electric current, with additional layers creating a stronger current. These layers stacked on top of each other created what became known as the voltaic pile. Volta realized that somehow the pile of metal disks was producing the current, an effect called an electromotive force.

In 1810 Humphry Davy showed that it was the chemical reaction between the two metals (the electrodes) and the liquid solution (the electrolyte) that produced the current. Many different metals and solutions can be used to create an electromotive force.

The voltaic pile marked a significant milestone in the history of electricity.  It was the first true electric battery, setting the stage for the development of modern batteries.

Unleashing the Power of Chemical Energy

A battery, then, is a chemical means of generating electricity. It works through a "redox" reaction, in which reduction (one substance gaining electrons) and oxidation (another substance losing electrons) occur simultaneously. To perform the redox reaction, most batteries consist of two electrodes – an anode (negative electrode) and a cathode (positive electrode) – separated by an electrolyte, a conductive material. The electrodes are typically made of two different materials, and the electrolyte allows ions to flow between them while preventing direct contact. When the battery is connected to an external circuit, the redox reaction occurs within the battery, producing a flow of electrons from the anode to the cathode and creating an electrical current that can be used to power devices.
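The voltage a single cell can deliver follows from the electrode potentials of its two materials, and cells stacked in series (as in the voltaic pile) add their voltages. A sketch using the textbook standard reduction potentials for zinc and copper (idealized standard-state values; Volta's brine-soaked pile delivered somewhat less per cell):

```python
# Standard reduction potentials in volts, vs. the standard hydrogen electrode
E_ZN = -0.76   # Zn2+ + 2e- -> Zn  (anode: zinc is oxidized, losing electrons)
E_CU = +0.34   # Cu2+ + 2e- -> Cu  (cathode: copper ions are reduced)

def cell_voltage(e_cathode, e_anode, cells_in_series=1):
    # E_cell = E_cathode - E_anode; series cells sum their voltages.
    return (e_cathode - e_anode) * cells_in_series

print(round(cell_voltage(E_CU, E_ZN), 2))      # → 1.1 (volts, one cell)
print(round(cell_voltage(E_CU, E_ZN, 20), 1))  # → 22.0 (a 20-layer pile)
```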

Volta's battery is called a wet battery. While it did produce a controlled, continuous flow of electric current, it was not very practical. There were significant limitations in its capacity, size, and portability. The battery was a large and cumbersome device and had a low energy density compared to modern batteries. What it was, however, was a groundbreaking invention that paved the way for further development of battery technology. It was not until the dry battery was invented, replacing the liquid electrolyte with a paste, that batteries became portable and practical.

The Rise of Practical Batteries

The battery has undergone significant changes in the past two centuries since the invention of the voltaic cell.  Here are a few notable improvements:

  • Daniell Cell: In 1836, the British scientist John Frederic Daniell invented a new cell using copper sulfate and zinc sulfate, separated by a porous pot, to create a longer-lasting source of power.
  • Lead-Acid Batteries: In 1859 the French physicist Gaston Plante invented the first rechargeable battery.
  • Leclanche Cell: In 1866, the French engineer Georges Leclanche patented a new kind of battery – a predecessor to the modern dry cell battery.  It used a zinc anode and a manganese dioxide cathode wrapped in a porous material, dipped in a jar of ammonium chloride.
  • Nickel-Cadmium Batteries: In 1899 the Swedish inventor Waldemar Jungner invented the first alkaline battery.
  • Lithium-ion Batteries: Lithium is the metal with the greatest electrochemical potential. These batteries did not come to market until the 1970s; they offer the highest energy densities and can hold their charge for the longest period of any battery type.

A Modern Electric Vehicle Battery System
(Credit: aec.org)

Peering into the future, batteries will play an increasingly important role in society. Emerging technologies such as artificial intelligence (AI), renewable energy storage, advanced portable devices, and electric vehicles (EVs) will need powerful batteries to function conveniently. As a result, many of these industries are investing heavily in battery technology. Improved battery technology will enable the widespread adoption of many of these technologies, removing their limitations and making them more appealing to consumers.


1796: The First Vaccination

Humanity has achieved countless medical breakthroughs over the centuries, yet few have had as profound and lasting an impact as the invention of the first vaccine. A vaccination is the process of administering a biological preparation called a vaccine to stimulate the immune system and protect individuals from infectious diseases. The primary purpose of the vaccine is to mimic the infection without causing the disease, although sometimes mild symptoms will occur for a brief period of time. This primes the immune system to recognize and respond effectively if the person is later exposed to the actual infection.

This remarkable achievement was first performed by the English physician Edward Jenner in 1796. Despite recent controversies over vaccinations, this medical breakthrough has led to the prevention of many diseases and has undoubtedly saved countless lives.

Edward Jenner and the Smallpox Threat

The first vaccination was developed against smallpox, a disease that had plagued humanity for thousands of years. This highly contagious and often fatal disease caused high fever, severe skin rashes, and the formation of fluid-filled blisters on the skin. Smallpox had a mortality rate of up to 30%. Outbreaks were common, leading to the loss of millions of lives. Edward Jenner, an English physician and scientist, made his revolutionary discovery late in the 18th century when he developed a vaccine for smallpox.

Edward Jenner Administering the First Vaccination

Earlier in the 18th century it had been observed that people who contracted cowpox, a much more benign disease, became immune to smallpox.  Jenner himself had noticed that milkmaids who contracted and subsequently recovered from cowpox did not appear to contract smallpox.  These observations led him to hypothesize that cowpox infection somehow protected against smallpox.  In 1796 Jenner tested his hypothesis.  He took cowpox material from Sarah Nelmes, a milkmaid, and injected it into the arm of an eight-year-old boy, James Phipps.  The boy became sick for a few days but soon recovered.  Two months later he was exposed to smallpox and showed immunity to the disease, which lasted for the rest of his life.  It was the proof Jenner needed.  He had successfully developed the world's first vaccination, a word derived from the Latin vacca, meaning cow.

Two years later Jenner published An Inquiry into the Causes and Effects of the Variolae Vacciniae, which described Phipps's vaccination as well as twenty-two related cases.  The publication generated much interest once other practitioners reproduced the vaccinations.  Jenner's work laid the foundation for the science of immunology, and over the following decades advancements in microbiology and immunology led to vaccines for many other diseases, including polio, influenza, measles, mumps, rubella, and HPV.

Most recently the COVID-19 pandemic, caused by the SARS-CoV-2 virus, highlighted the need for vaccines.  In a remarkably short time multiple vaccines were developed and authorized for emergency use to combat the pandemic.  Governments launched vaccination campaigns globally to control the spread of the disease and reduce its impact on public health.

A Long-Lasting Global Impact

Jenner’s discovery of the vaccination was nothing short of revolutionary. Within a few years vaccinations spread around the world and were being endorsed by governments. As early as 1801 Russia supported the use of vaccinations, and in 1802 Massachusetts became the first state to actively support their use. Today vaccinations provide a variety of public health benefits.  These benefits include:

  1. Disease Prevention:  The primary purpose of vaccines is disease prevention.  They work by stimulating the immune system to recognize and fight specific pathogens, reducing the likelihood of infection.
  2. Reduced Morbidity and Mortality:  Vaccines reduce the incidence of diseases, hospitalizations, and deaths.  Additionally, after a large enough portion of the population is vaccinated, herd immunity is achieved, protecting even those who have not been vaccinated.
  3. Elimination of Diseases: Vaccinations have played a paramount role in the elimination or near elimination of many diseases, beginning with smallpox.  In 1980, the world was officially declared free from this deadly disease.  Polio is another disease on the verge of elimination.  
  4. Various Economic Benefits:  By preventing illnesses, vaccines reduce healthcare costs associated with treating infectious diseases.  They also minimize lost productivity due to illness in the workplace.
  5. Prevention of Outbreaks:  Vaccines are critical in preventing outbreaks of infectious diseases.  
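The herd immunity mentioned in the list above follows from simple arithmetic. In a commonly used simplified model (the values and function name below are illustrative, not from the text), a disease with basic reproduction number R0 can no longer sustain an outbreak once the immune fraction of the population exceeds 1 − 1/R0:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread,
    under the simplified threshold model 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # R0 <= 1: each case infects fewer than one other person
    return 1.0 - 1.0 / r0

# A highly contagious disease like measles (R0 often cited around 12-18)
# demands a far higher immunization rate than a less contagious one:
print(f"R0 = 15 -> {herd_immunity_threshold(15.0):.0%} immune needed")
print(f"R0 = 3  -> {herd_immunity_threshold(3.0):.0%} immune needed")
```

This is why vaccination campaigns against highly contagious diseases aim for near-universal coverage.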

It is because of these and numerous other benefits that vaccines are considered one of the most successful and cost-effective public health measures.

Continue reading more about the exciting history of science!

1897: Electron

An atom showing its protons, neutrons, and electrons.
Nuclear Structure of the Atom

Atomic theory was proposed by John Dalton in the early 19th century. Dalton claimed that all atoms of the same element are identical, while atoms of different elements vary in size and mass. His theory held that atoms were indivisible particles, the smallest building blocks of matter. By the middle of the 19th century, however, a growing body of experimental evidence began to challenge this notion. As the century drew to a close, some scientists speculated that atoms might be composed of more fundamental units, and by the late 19th century convincing experimental evidence began to emerge in support of this hypothesis.  The discovery of the electron was the first in a series of discoveries, spanning a few decades, that identified the major subatomic particles of the atom.

J. J. Thomson’s Experiments

This experimental evidence came during the years 1894-1899, when J. J. Thomson conducted research with cathode ray tubes, the same technology that played a critical role in the discovery of X-rays and in work that led to the discovery of radioactivity.  Cathode rays are currents of electricity observed inside a high-vacuum tube.  When electrodes are connected to each end of the tube and a voltage is supplied, a beam of particles flows from the negatively charged electrode (the cathode) to the positively charged electrode (the anode).  In a lecture to the Royal Institution on April 30, 1897, Thomson suggested that these beams were made of smaller, more fundamental units of the atom.  He termed them ‘corpuscles’, but the name never stuck, and they were eventually given the name we are familiar with today: electrons.

J. J. Thomson’s cathode ray tube used to discover the electron
(Credit: Donald Gillies)

J. J. Thomson performed several experiments whose conclusions supported his hypothesis.  First, in 1894, Thomson established that cathode rays were not a form of electromagnetic radiation, the prevailing assumption at the time, by showing that they move much more slowly than light. Soon after, he deflected the rays with charged plates: the beams bent away from the negatively charged plate and toward the positively charged one, showing that they were streams of negatively charged particles.  In another experiment he used magnets to deflect the beams, which allowed him to determine their mass-to-charge ratio.  He approximated their mass at roughly 1/2000th that of a hydrogen atom, an extraordinarily small mass, indicating that they must be only a part of an atom.  Lastly, he showed that these particles are present in different types of atoms.
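The mass-to-charge measurement described above can be sketched with the crossed-field method associated with Thomson's work: balance the electric and magnetic deflections to find the beam speed, then use the magnetic deflection alone to find the charge-to-mass ratio. The field strengths and radius below are illustrative assumptions, not Thomson's actual measurements:

```python
# Illustrative (assumed) experimental values:
E = 2.0e4   # electric field strength between the plates, V/m
B = 1.0e-3  # magnetic flux density, T
r = 0.114   # radius of curvature of the beam under B alone, m

# Step 1: with E and B adjusted until the beam passes undeflected,
# the electric and magnetic forces balance: qE = qvB, so v = E / B.
v = E / B

# Step 2: with the electric field off, the beam bends into a circle
# of radius r = m*v / (q*B), giving q/m = v / (r*B) = E / (r*B**2).
q_over_m = E / (r * B**2)

print(f"beam speed v = {v:.3e} m/s")
print(f"q/m = {q_over_m:.3e} C/kg")  # accepted e/m is about 1.76e11 C/kg
```

Note how q/m emerges without ever knowing the charge or mass individually; comparing it to the known value for a hydrogen ion is what showed the particles were far lighter than any atom.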

Diagram of a cathode ray tube

The revelation that atoms are made of smaller constituent units revolutionized how scientists viewed the atomic world and spurred research on subatomic particles.  Soon after, the atomic nucleus was discovered and the field of nuclear physics was born.  Thomson went on to create one of the first models of the atom, known as the plum pudding model.  He knew that atoms have an overall neutral charge, so his model depicted the negatively charged electrons floating in a “soup” of positive charge.  It was a good first attempt at modeling the atom but was soon discarded in favor of Ernest Rutherford’s nuclear model, based on the results of his gold foil experiment.

Impact and Legacy

The discovery of the electron had profound effects on both theoretical and applied science.

The discovery of the electron helped usher in the era of atomic physics and give birth to the entirely new field of quantum mechanics.   The two fields are closely related: both describe the behavior of particles at the atomic and subatomic level, and both rely on an understanding of atomic structure, of which electrons are a key component.

The discovery of the electron also had a fundamental impact on applied science, as it laid the foundation for the development of electronics, a technology that would revolutionize our world.  Electrons, being charge carriers, are the fundamental working units of electronic components such as capacitors, diodes, resistors, and transistors.  They are at work in all of the familiar electronic devices such as televisions, smartphones, and computers and have made possible the digital transformation of our civilization. In addition to electronics, electrons are involved in atomic spectroscopy, the study of the interaction between light and matter. By studying the energy levels and transitions of electrons, atomic physicists can identify elements, determine their properties, and study their behavior under various conditions. Spectroscopy is the method astronomers use to determine the temperature, chemical composition, luminosity, and other characteristics of distant stars across the universe.

Continue reading more about the exciting history of science!

1730: The Marine Chronometer

A Marine Chronometer

Amazing as it may seem to people living in the 21st century, reliably determining one's longitude at sea remained impossible well into the 18th century.

Maritime travel and trade were rapidly expanding in the years leading up to the 18th century.  The discovery of the North and South American continents by European explorers meant transoceanic voyages were being made for the first time.  A majority of time at sea was now spent out of sight of any landmass, making accurate navigation far more difficult.

The principal unresolved problem on these transoceanic voyages was finding a reliable longitudinal position while at sea.  This was not possible until 1730, when John Harrison invented the marine chronometer, a timepiece capable of keeping the accurate time of a known, fixed location.

The Longitude Problem

To determine a location one needs to know both its latitude and longitude.  Latitude could easily be measured using the sun or the stars.  Longitude was more difficult: it had to be calculated by comparing two accurate times, one at a reference of known longitude (a prime meridian) and the other at the ship's current location.  A little math works out the rest.  The Earth makes one full rotation per day (360º of longitude) and therefore turns through one degree of longitude in 1/360th of a day, or every four minutes.  Work out the time difference between your location and the prime meridian, and you know your degrees of longitude from it.
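The arithmetic above reduces to a one-line conversion: the Earth turns 15 degrees of longitude per hour (one degree per four minutes). A minimal sketch, with an illustrative function name of my own choosing:

```python
def longitude_from_times(local_hours: float, reference_hours: float) -> float:
    """Degrees of longitude east (positive) or west (negative) of the
    reference meridian, given local solar time and the chronometer's
    time at the reference meridian (both as hours, 0-24)."""
    diff_hours = local_hours - reference_hours
    # Local time ahead of the reference means the ship is east of it.
    return diff_hours * 15.0  # 360 degrees / 24 hours

# Local noon (12:00) while the chronometer reads 16:00 at the reference:
# the ship is 4 hours behind the reference, i.e. 60 degrees west.
print(longitude_from_times(12.0, 16.0))  # -60.0
```

The navigator found local noon by observing the sun; the chronometer supplied the reference time, and this conversion gave the longitude.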

The trick to determining longitude at sea therefore required an accurate timekeeping device that could work aboard a ship.  However, the only accurate timekeeping devices of the era used a pendulum, which swayed as the ship rocked at sea.

Early methods of estimating longitude proved inaccurate, sometimes with deadly consequences.  The most common technique was called dead reckoning.  Beginning from a known starting location, the captain simply estimated the longitude based on factors such as current, wind speed, and direction of travel.  The result was at best a close approximation, with compounding errors decreasing the accuracy over time and distance.

A series of naval disasters, most notably a 1707 wreck of four British warships that cost over 1,500 lives, prompted the British government into action.  The Longitude Act of 1714 provided an incentive to solve the problem by offering a longitude prize ranging from £10,000 to £20,000 to anyone who could provide a simple and practical method of determining a ship’s longitude at sea to within half a degree.  Four years later the Academie de Paris offered a similar prize.  The race was on to solve the problem.

John Harrison and the Marine Chronometer

Born in Yorkshire and a carpenter by profession, John Harrison invented a mechanized timekeeping machine, which he called a chronometer, that solved this problem.  The device was, in a sense, the world's first global positioning device.  In 1730 he began working on his first prototype, the H1.  The project took him over five years, and he presented the device to the Board of Longitude in 1736.  He was granted a sea trial, the first given by the board, and the device performed well.  He was awarded a small grant for further development, and Harrison set to work on his H2 device.

A series of Marine Chronometer devices made by John Harrison – H1 to H4

Over several years Harrison further refined his sea clock in the form of the H2 and H3 devices, but he came to realize the design was fundamentally flawed.  He shifted gears, moving from a sea clock to a sea watch, having realized that some watches could keep time as accurately as his larger sea clocks while being far more practical for sailing.  This led to his most famous device, the H4, which kept nearly perfect time and was around five inches in diameter.

Harrison’s H4 was essentially a large pocket watch that was wound daily and possessed a 30-hour power reserve. The main technological breakthrough of all his devices was a spring-driven mechanism that replaced the pendulum.  The smaller watch allowed a higher frequency of the balance, making it more accurate than his clocks. Various combinations of metals were used in the watch to overcome the deleterious effects of humidity and temperature change.  The watch took six years to construct; it was completed in 1759 and tested in 1761, passing with remarkable accuracy.  It lost a mere five seconds on an 81-day voyage to the West Indies and back.  After some wrangling, Harrison was able to receive his prize from the British government for the design.
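The five-second figure can be translated into a navigational error using the one-degree-per-four-minutes rule from the longitude problem. A quick back-of-the-envelope check (the distance figure assumes the equator, where one degree of longitude spans about 60 nautical miles):

```python
seconds_lost = 5.0
minutes_lost = seconds_lost / 60.0

# The Earth turns one degree of longitude every 4 minutes.
degrees_error = minutes_lost / 4.0

# One degree of longitude at the equator is roughly 60 nautical miles.
nautical_miles_error = degrees_error * 60.0

print(f"{degrees_error:.4f} degrees, about {nautical_miles_error:.2f} nmi at the equator")
```

An error on the order of a single nautical mile after nearly three months at sea was spectacular by the standards of the day, and comfortably within the half-degree tolerance of the Longitude Act.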

Continue reading more about the exciting history of science!

1712: The Newcomen Engine

In 1712 Thomas Newcomen unknowingly ushered the world into the Industrial Revolution when he built an “atmospheric” engine to pump water from a coal mine near Dudley Castle in England.  The Newcomen engine, as it came to be known, can thus be considered one of the most influential inventions in all of history.

A Novel Solution to a New Industrial Problem

The Newcomen Engine

The demand for coal was steadily increasing in the early 18th century, and coal miners had to dig deeper and deeper into the ground to gather it. Flooding of these ever-deeper mines became such a problem that manual and horse-powered pumping was becoming inadequate. At the dawn of the Industrial Revolution, an industrial machine was needed to solve the problem.

An ironmonger named Thomas Newcomen solved the problem by combining ideas from various precursor engines. About a decade earlier an English inventor named Thomas Savery had patented a steam-powered pump, which was not technically an engine because it had no moving parts. At around the same time the French physicist Denis Papin was conducting experiments with steam cylinders and pistons. Newcomen combined these ideas and in 1712 invented the world’s first practical fuel-burning engine.

The Newcomen engine was a large, lumbering, and inefficient machine that did its work not by the power of steam but by the force of atmospheric pressure.  The discovery of the vacuum in the prior century had demonstrated the power of atmospheric pressure, and this principle was put to work in the Newcomen engine. The engine was a complex device built on rather simple principles. Its basic method of operation was as follows. A boiler produced steam, which was fed into a cylinder; the steam was then condensed by cold water, creating a partial vacuum inside the cylinder.  Atmospheric pressure, now greater than the pressure inside the cylinder, forced the piston downward, pulling the pump rod upward and lifting water out of the mine.  The boiler then produced more steam to push the piston back up, more cold water was introduced into the cylinder, and the cycle repeated, around twelve times per minute.

Searching for Improvements in Energy Efficiency

What the Newcomen engine possessed in revolutionary status it lacked in efficiency.  The engine was highly inefficient and originally used only in coal mines, where fuel was abundant and close at hand.  Despite its engineering drawbacks, it proved quite valuable: over 75 were built during Thomas Newcomen’s lifetime and over 1,000 were in use by the end of the century.  They quickly spread across most of Europe and to America. The engine's persistent problem remained its inefficiency, which made it difficult to operate in areas where coal was expensive or in short supply.

Still, the Newcomen engine remained largely unchanged and in wide use for most of the 18th century. It finally saw an efficiency improvement later in the century from James Watt. In 1764 Watt was repairing a Newcomen engine when he became fixated on the amount of coal it consumed because it wasted so much heat. After mulling over a solution for some time, he realized that much of the inefficiency came from the repeated heating and cooling of the cylinder. He concluded that it would be far more efficient to keep the cylinder above the boiling point at all times and to use a separate condenser for cooling. In 1769 he acquired a patent for his new design and soon after entered into a partnership with Matthew Boulton. The Boulton & Watt steam engines quickly became the best in the world. Watt later designed a double-acting steam engine that admitted steam at both ends of the cylinder, providing power on both the up and down strokes. This was now a true steam engine.

Impact of the Newcomen Engine

The invention of the Newcomen engine marked the beginning of the Industrial Revolution. Before it, most power was supplied by natural sources such as wind, water, and human and animal muscle. The Newcomen engine and the subsequent improvements made by James Watt paved the way for steam-powered transportation in the form of boats and railroads. Electricity and the internal combustion engine would eventually displace steam in transportation during the 20th century, but even today we still get most of our electricity from steam-powered turbines.

Continue reading more about the exciting history of science!