Energy and the Human Journey: Where We Have Been; Where We Can Go

By Wade Frazier


Version 1.2, published May 2015.  Version 1.0 published September 2014.

Note to Readers: This essay is more easily navigated with a browser other than Internet Explorer, such as Firefox.  This essay contains internal links within itself and to other essays on my website, with external links largely to Wikipedia and scientific papers.  I have published this essay in other formats: .pdf format (10.7 megabytes) and .pdf format without visible links (the closest experience to reading a book), to honor different methods of digesting this essay, but this html version comprises the online textbook that I intended this essay to be.


Acronyms Used in This Essay

Summary and Purpose

This Essay’s Tables and Timelines

Energy and the Industrialized World

The Toolset of Mainstream Science

The Orthodox Framework and its Limitations

Energy and Chemistry

Timelines of Energy, Geology, and Early Life

The Formation and Early Development of the Sun and Earth

Early Life on Earth

The Cryogenian Ice Age and the Rise of Complex Life

Speciation, Extinction, and Mass Extinctions

The Cambrian Explosion

Complex Life Colonizes Land

Making Coal, the Rise of Reptiles, and the Greatest Extinction Ever

The Reign of Dinosaurs

The Age of Mammals

Mid-Essay Reflection

The Path to Humanity

Tables of Key Events in the Human Journey

Humanity’s First Epochal Event(s?): Growing our Brains and Controlling Fire

Humanity’s Second Epochal Event: The Super-Predator Revolution

Humanity’s Third Epochal Event: The Domestication Revolution

Epochal Event 3.5 – The Rise of Europe

Humanity’s Fourth Epochal Event: The Industrial Revolution

Epochal Event 4.5 – The Rise of Oil and Electricity

The Postwar Boom, Peak Oil, and the Decline of Industrial Civilization

What Running out of Energy Looks Like

My Adventures and Those of My Fellow Travelers

Humanity’s Fifth Epochal Event: Free Energy and an Abundance-Based Political Economy

The Sixth Mass Extinction or the Fifth Epochal Event? 

What Has Not Worked So Far, and What Might





This essay is dedicated to the memory of Mr. Professor and Brian, two great men whom it was an immense privilege to know and who spent their lives in a quest for healing this world.  I miss them.


Acronyms Used in This Essay

A number of acronyms in this essay are not commonly used and at least one is unique to my work.

They are:

BYA – Billion Years Ago

MYA – Million Years Ago

KYA – Thousand Years Ago

PPM – Parts Per Million

FE – Free Energy

GC – Global Controller

EROI – Energy Return on Investment

UP – The Universal People

LUCA – Last Universal Common Ancestor

ATP - Adenosine Triphosphate

GOE – Great Oxygenation Event

BIF – Banded Iron Formation

ROS – Reactive Oxygen Species

PETM - Paleocene-Eocene Thermal Maximum


Summary and Purpose

Chapter summary:

I was born in 1958.  NASA recruited my father to work in Mission Control during the Space Race, and I was trained from childhood to be a scientist.  My first professional mentor invented as Nikola Tesla did, and among his many inventions was an engine hailed by a federal study as the world’s most promising alternative to the internal combustion engine.  In 1974, as that engine created a stir in the USA’s federal government, I began dreaming of changing the energy industry.  In that same year, I had my cultural and mystical awakenings.  During my second year of college, I had my first existential crisis, and a paranormal event changed my studies from science to business.  I still held my energy dreams, however, and in 1986, eight years after that first paranormal event, I had a second one that suddenly caused me to move up the coast from Los Angeles to Seattle, where I landed in the middle of what is arguably the greatest attempt yet made to bring alternative energy to the American marketplace.  The company sold the best heating system that has ever been on the world market, and it installed that system for free in customers’ homes by using the most ingenious marketing plan that I ever saw.  That effort was killed by the local electric industry, which saw our technology as a threat to its revenues and profits, and my wild ride began.  The owner of the Seattle business left the state to rebuild his effort.  I followed him to Boston and soon became his partner.  My partner's experiences in Seattle radicalized him.  I use "radical" in its original "going to the root" sense: radicals seek a fundamental understanding of events (they aim for the root instead of hacking at branches), although in my partner's case the radicalization was more economic than political.  He would never see the energy industry the same way again after his radicalization (also called "awakening") in Seattle, but he had more radicalization ahead of him.

The day after I arrived in Boston, we began to pursue what is today called free energy, or new energy, which is abundant and harmlessly produced energy generated with almost no operating cost.  Today's so-called free energy is usually generated by harnessing the zero-point field, but not always, and our original effort was not trying to harness it.  We attracted the interest of a legendary and shadowy group while we were in Boston.  They offered $10 million for the rights to our fledgling technology.  I have called that group the Global Controllers and others have different terms for them.  However, they are not the focus of my writings and efforts.  I regard them as a symptom of our collective malaise, not a cause.  Our fate is in our hands, not theirs.  Our efforts also caused great commotion within New England’s electric industry and attracted attempts by the local authorities to destroy our business.  They were probably trying to protect their economic turf and were not consciously acting on the Global Controllers’ behalf, which was probably also the case in Seattle.

In 1987, we moved our business to Ventura, California, where I had been raised, before the sledgehammer in Boston could fall on us.  We moved because I had connected us with technologies and talent that made our free energy ideas potentially feasible.  Our public awareness efforts became highly successful and we were building free energy prototypes.  In early 1988, our efforts were targeted by the local authorities, again at the behest of energy interests, both local and global.  In a surprise raid in which the authorities blatantly stole our technical materials, mere weeks after those same authorities assured us that we were not doing anything illegal, my radicalization began.  A few months later, my partner was offered about $1 billion to cease our operations by that shadowy global group; the CIA delivered that offer.  Soon after my partner refused their offer, he was arrested and held on a million-dollar bail, and our nightmare began.  The turning point of my life was when I became the defense’s key witness and the prosecution made faces at me while I was on the witness stand as they tried to intimidate me.  It helped inspire me to sacrifice my life in an attempt to free my partner.  Incredibly, my gesture worked, in the greatest miracle that I ever witnessed.  I helped free my partner, but my life had been ruined by the events of 1988, and in 1990 I left Ventura and never returned.  I had been radicalized ("awakened"), and I then spent the next several years seeking understanding of what I had lived through and why the world worked starkly differently from how I was taught that it did.  I began the study and writing that culminated in publishing my first website in 1996, which was also when I briefly rejoined my former partner after he was released from prison, after the courts fraudulently placed him there and prison officials repeatedly put him in position to be murdered.
The Global Controllers then raised their game to new, sophisticated levels and I nearly went to prison.

As I discovered the hard way, contrary to my business school indoctrination, there is little that resembles a free market in the USA, particularly in its energy industry, and there has never been a truly free market, a real democracy, a free press, an objective history, a purely pursued scientific method, or any other imaginary constructs that our dominant institutions promote.  They may all be worthy ideals, but none has existed in the real world.  Regarding free markets in the energy industry, reality has effectively been inverted, as the world’s greatest effort of organized suppression prevents alternative energy technology of any significance from public awareness and use. 

Soon after I moved from Ventura, I met a former astronaut who was hired by NASA with a Mars mission in mind and was investigating the free energy field.  We eventually became colleagues and co-founded a non-profit organization intended to raise public awareness of new energy.  A few days after we began planning the organization’s first conference in 2004, the first speaker that we recruited for our conference was murdered, and my astronaut colleague immediately and understandably moved to South America, where he spent the rest of his life.  In the spring of 2013, I spent a few days with my former free energy partner and, like my astronaut colleague, he had also been run out of the USA after mounting an effort around high-MPG carburetor technology.  The federal government attacked soon after a legendary figure in the oil industry contacted my partner, who also attracted the attention of the sitting US president.  Every American president since Ronald Reagan knew my partner by name, but they proved to be rather low-ranking in the global power structure.

My astronaut colleague investigated the UFO phenomenon early in his adventures on the frontiers of science and nearly lost his life immediately after refusing an "offer" to perform classified UFO research for the American military.  It became evident that the UFO and free energy issues were conjoined.  A faction of the global elite demonstrated some of their exotic and sequestered technologies to a close fellow traveler, which included free energy and antigravity technologies.  My astronaut colleague was involved with the same free energy inventor as some of my associates; that inventor built a solid-state free energy prototype that not only produced a million times the energy that went into it, but also produced antigravity effects.  I eventually understood the larger context of our efforts and encountered numerous fellow travelers; they reported similar experiences, of having their technologies seized or otherwise suppressed, of being incarcerated and/or surviving murder attempts, and other outrages inflicted by global elites as they maintained their tyrannical grip over the world economy and, hence, humanity.  It was no conspiracy theory, but what my fellow travelers and I learned at great personal cost, which was regularly fatal.

I continued to study and write and became my astronaut colleague’s biographer.  My former partner is the Indiana Jones of the free energy field, but I eventually realized that while it was awe-inspiring to witness his efforts, one man with a whip and fedora cannot save humanity from itself.  I eventually took a different path from both my partner and astronaut colleague, and one fruit of that direction is this essay.  Not only was the public largely indifferent to what we were attempting, but those attracted to our efforts usually either came for the spectacle or were opportunists who betrayed us at the first opportunity.  As we weathered attacks from the local, state, national, and global power structures, such treacherous opportunities abounded.  I witnessed dozens of attempts by my partner’s associates to steal his companies from him (1, 2, 3, 4), and my astronaut colleague was twice ejected from organizations that he founded, by the very people that he invited to help him.  During my radicalizing years with my partner, I learned that personal integrity is the world’s scarcest commodity, and it is the primary reason why humanity is in this predicament.  The antics of the global elites are of minor importance; the enemy is us.

I eventually realized that there were not enough heroes on Earth to get free energy over the hump of humanity’s inertia and organized suppression.  Soon after I completed my present website in 2002, one of R. Buckminster Fuller’s pupils called my writings “comprehensivist” and I did not know what he meant.  I then read some of Fuller’s work and saw the point.  My writings since then have been more consciously comprehensivist (also called “generalist”) in nature. 

This essay is intended to draw a comprehensive picture of life on Earth, the human journey, and energy's role.  The references that support this essay are usually to works written for non-scientists or those of modest academic achievement so that non-scientists can study the same works without needing specialized scientific training.  I am trying to help form a comprehensive awareness in a tiny fraction of the global population.  Between 5,000 and 7,000 people is my goal.  My hope is that the energy issue can become that tiny fraction's focus.  Properly educated, that group might be able to help catalyze an energy effort that can overcome the obstacles.  That envisioned group may help humanity in many ways, but my primary goal is manifesting those technologies in the public sphere in a way that nobody risks life or livelihood.  I have seen too many wrecked and prematurely ended lives (1, 2) and plan to avoid those fates, for both myself and the group’s members.

Here is a brief summary of this essay.  Ever since life first appeared more than three billion years ago and about a billion years after the Sun and Earth formed, organisms have continually invented more effective methods to acquire, preserve, and use energy.  Complex life appeared after three billion years of evolution and, pound-for-pound, it used energy 100,000 times as fast as the Sun produced it.  The story of life on Earth has been one of evolutionary events impacted by geophysical and geochemical processes, and in turn influencing them.  During the eon of complex life that began more than 500 million years ago, there have been many brief golden ages of relative energy abundance for some fortunate species, soon followed by increased energy competition, a relatively stable struggle for energy, and then mass extinction events cleared biomes and set the stage for another golden age by organisms adapted to the new environments.  Those newly dominant organisms were often marginal or unremarkable members of their ecosystems before the mass extinction.  That pattern has characterized the journey of complex life over the past several hundred million years.  Intelligence began increasing among some animals, which provided them with a competitive advantage.

The oldest stone tools yet discovered are about 3.3-3.4 million years old, likely made by australopiths, which may have led to the appearance of the Homo genus.  About 2.6 million years ago, when our current ice age began, our ancestors began making Oldowan culture stone tools, which was soon followed by the control of fire, and the human journey’s First Epochal Event(s?) transpired.  The human evolutionary line’s brain then grew dramatically.  About two million years later, the human line evolved to the point where behaviorally modern humans appeared, left Africa, and conquered all inhabitable continents.  Their expansion was fueled by driving most of Earth’s large animals to extinction.  That Second Epochal Event was also the beginning of the Sixth Mass Extinction.  After all the easy meat was extinct and the brief Golden Age of the Hunter-Gatherer ended, population pressures led to the Third Epochal Event: domesticating plants and animals.  That event led to civilization, and many features of the human journey often argued to be human nature, such as slavery and the subjugation of women, were merely artifacts of the energy regime and societal structure of agrarian civilizations.  Early civilizations were never stable; their energy practices were largely based on deforestation and agriculture, usually on the deforested soils, and such civilizations primarily collapsed due to their unsustainable energy production methods. 

As the Old World’s civilizations continually rose and fell, Europe's peoples rediscovered ancient teachings that contained the first stirrings of a scientific approach.  Europeans used energy technologies from that ancient period, borrowed novel energy practices from other Old World civilizations, and achieved the technological feat of turning the world’s oceans into a low-energy transportation lane.  Europeans thereby began conquering the world.  During that conquest, one imperial contender turned to fossil fuels after its woodlands were depleted by early industrialization.  England soon industrialized by using coal and initiated humanity’s Fourth Epochal Event.  England quickly became Earth’s dominant imperial power, riding on the power of coal.  As Europeans conquered Earth, elites, who first appeared with the first civilizations, could begin thinking in global terms for the first time, and a global power structure began developing.  As we learned the hard way, that power structure is very real, but almost nobody on Earth has a balanced and mature perspective regarding it, as people either deny its existence or obsess about it, seeing it as the root of our problems when it is really only a side-effect of humanity’s current stage of political-economic evolution, which has always been based on its level of energy usage.

Today, industrialized humanity is almost wholly dependent on the energy provided by hydrocarbon fuels that were created by geological processes operating on the remains of organisms, and humanity is mining and burning those hydrocarbon deposits about a million times as fast as they were created.  We are reaching peak extraction rates but, more importantly, we have already discovered all of the easily acquired hydrocarbons.  We are currently seeking and mining Earth’s remaining hydrocarbon deposits, which are of poor energetic quality.  It is merely the latest instance of humanity's depleting its energy resources, in which the dregs are mined after the easily acquired energy is consumed.  The megafauna extinctions created the energy crisis that led to domestication and civilization, the energy crisis of early industrialization led to using hydrocarbon energy, and the energy crisis of 1973-1974 attracted my fellow travelers and me to alternative energy.  However, far more often over the course of the human journey, depleting energy resources led to population collapses and even local extinctions of humans in remote locations.  Expanding and collapsing populations have characterized rising and falling polities during the past several thousand years, ever since the first civilizations appeared.

Today, humanity dominates Earth and is not only depleting its primary energy resources at prodigious rates, but it is also driving species to extinction at a rate that rivals the greatest mass extinctions in Earth’s history.  Humans may cause Earth’s greatest mass extinction, which may take humanity with it.  Today, humanity stands on the brink of the abyss, and almost nobody seems to know or care.  Humanity is a tunnel-visioned, egocentric species, and almost all people are only concerned about their immediate self-interest and are oblivious to what lies ahead.  Not all humans are so blind, and biologists and climate scientists, among others intimately familiar with the impacts of global civilization, are terrified by what humanity is inflicting on Earth.  Also, those who realize that we are quickly coming to the Hydrocarbon Age’s end are beating the drums of doom, and I cannot blame them.  We are in a “race of the catastrophes” scenario, and several manmade trends threaten our future existence.

Even the ultra-elites who run Earth from the shadows readily see how their game of chicken with Earth may turn out.  Their more extreme members advocate terraforming Mars as their ultimate survival enclave if their games of power and control make Earth uninhabitable.  But the saner members, who may now be a majority of that global cabal, favor the dissemination of those sequestered technologies.  I am nearly certain that members of that disenchanted faction are those who gave my close friend an underground technology demonstration and who would quietly cheer our efforts when I worked with my former partner.  They may also be subtly supporting my current efforts, of which this essay comprises a key component, but I have not heard from them and am not counting on them to save the day or help my efforts garner success.  It is time for humanity to reach the level of collective sentience and integrity required to manifest humanity’s Fifth Epochal Event, which will initiate the Free Energy Epoch.  Humanity can then live, for the first time, in an epoch of true and sustainable abundance.  It could also halt the Sixth Mass Extinction and humanity could turn Earth into something resembling heaven.  With the Fifth Epochal Event, humanity will become a space-faring species, and a future will beckon that nobody on Earth today can truly imagine, just as nobody on Earth could predict how the previous Epochal Events transformed the human journey (1, 2, 3, 4).

Also, each Epochal Event was initiated by a small group of people, perhaps even by one person for the earliest events, and even the Industrial Revolution and its attendant Scientific Revolution had few fathers.  However, I came to realize that there is probably nobody else on Earth like my former partner, and even Indiana Jones cannot save the world by himself.  With the strategy that I finally developed, I do not look for heroes because I know that there are not enough currently walking Earth.  I am attempting something far more modest.  The greatest triumph of the ultra-elites running Earth today is making free energy technology and the resulting epoch of abundance unimaginable, and all of today’s dominant ideologies assume scarcity in the foundation of their frameworks, which is largely why my former partner and my astronaut colleague were voices in the wilderness and like ducks in a shooting gallery that did not know where the next shot would come from.  The most damaging shots were usually fired by their “allies,” right into their backs, which nobody could have convinced me of in 1985.  But after watching similar scenarios play out dozens of times, I finally had to admit the obvious, and my partner admitted it to me in 2013.

I noticed several crippling weaknesses in all alternative energy efforts that I was involved with or witnessed.  Most importantly, when my partner mounted his efforts, people participated primarily to serve their self-interest.  While the pursuit of mutual self-interest is the very definition of politics, self-interested people were easily defeated by organized suppression, although the efforts usually self-destructed before suppression efforts became intense.  Another deficiency in all mass free energy efforts was that most participants were scientifically illiterate and did not see much beyond the possibility of reducing their energy bills or becoming rich and famous.  Once the effort was destroyed (and they always are, if they have any promise), the participants left the alternative energy field.  Also, many lives were wrecked as each effort was defeated, so almost nobody was able or willing to try again.  Every time that my partner rebuilt his efforts, it was primarily with new people; few individuals lasted for more than one attempt.

I realize that almost nobody on Earth today can pass the integrity tests that my fellow travelers were subjected to, and I do not ask that of anybody whom I will attempt to recruit into my upcoming effort.  It will be a non-heroic approach, of “merely” achieving enough heart-centered sentience and awareness that a world of free energy and abundance can be imagined by a sizeable group who will not stay quiet about it, but who will also not be proselytizing.  If they can truly understand this essay’s message, they will probably not know anybody else in their daily lives who can.

Those recruits will simply be singing a song of practical abundance that will attract those who have been listening for that song for their entire lives.  Once enough people know the song by heart and can sing it, and have attracted a large enough audience that can approach the free energy issue in a way that risks nobody’s life and will not be easy for the provocateurs and the effort’s “allies” to wreck, then it will be time to take action, but in a way never tried before.

That is my plan, and this essay is intended to form the foundation of my efforts to educate and amass the “choir” that will sing the abundance song.  I am looking for singers, not soldiers, and the choir will primarily sing here.  My approach takes the lamb’s path, not the warrior’s.  That “choir” may help only a little or it may help a lot, but it will not harm anybody.  This effort could be called trying the enlightenment path to free energy, an abundance-based global political economy, and a healed humanity and planet.  I believe that the key is approaching the issue as creators instead of victims, from a place of love instead of fear.  Those goals may seem grandiose to the uninitiated, and people in this field regularly succumb to a messiah complex and harbor other delusions of grandeur, but I also know that those aspirations are attainable if only a tiny fraction of humanity can help initiate that Fifth Epochal Event, just like the previous Epochal Events.  This essay is designed to begin the training process.  Learning this material will be a formidable undertaking.  This material is not designed for those looking for quick and easy answers, but is intended to help my readers attain the levels of understanding that I think are necessary for assisting with this epochal undertaking.


This Essay’s Tables and Timelines

In order to make this essay easier to understand, I created some tables and timelines, and they are:

Timeline of Significant Energy Events in Earth's and Life's History

Abbreviated Geologic Time Scale

Timeline of Earth’s Major Ice Ages

Timeline of Earth’s Major and Minor Mass Extinction Events

Early Earth Timeline before the Eon of Complex Life

Timeline of Key Biological Innovations in the Eon of Complex Life

Timeline of Humanity’s Evolutionary Heritage

Human Event Timeline Until Europe Began Conquering Humanity

Human Event Timeline Since Europe Began Conquering Humanity

Table of Humanity’s Epochs


Energy and the Industrialized World

There are greater contrasts in humanity’s collective standard of living than ever before.  As of 2014, Bill Gates had topped the list of the world’s richest people in nearly every one of the previous 20 years.  In 2000, his net worth was about $100 billion, or about the same as the collective wealth of the poorest hundred million Americans or the poorest half of humanity.  Although Gates and other high-technology billionaires can live surprisingly egalitarian lifestyles, for one person to possess the same level of wealth as billions of people collectively is a recent phenomenon.  In 2014, about 30,000 children died each day because of their impoverished conditions.

Ever since I was thrust into an urban hell soon after graduating from college, I have been a student of wealth, poverty, and humanity’s problems.  My teenage dreams of changing humanity’s energy paradigm have had a lifelong impact.  It took me many years to gain a comprehensive understanding of how energy literally runs the world and always has.  A good demonstration of that fact is to consider the average day of an average American professional, who is a member of history’s most privileged large demographic group and lives in Earth’s most industrialized nation.  A typical day in my life during the winter before I wrote this essay can serve as an example.

When I worked 12-hour days and longer during that winter, which was the busiest time of my year, I often fasted and needed less sleep, so I often awoke before 5:00 A.M.  In 2014 as I write this, I live in a fairly large house.  When I fast, my body generates less heat, so I feel cold rather easily; I wear thermal underwear under my work attire and have other strategies for staying warm, especially in the winter.  I programmed our furnace to begin operation soon before I awakened, so that my day started in a warm environment.  I also have a space heater in my home office, so that the rest of the house can stay cold while I work in warmth.

That winter, my first tasks when arising were turning on my computer and drinking a glass of orange juice, which raised my blood sugar.  After some hours of reading about world events, answering emails, and working on my writings, I took a hot shower, dressed, and walked to a bus stop.  I read a book while awaiting the bus that took me to downtown Bellevue, where I worked in a high-rise office building for an Internet company. 

When I arrived at my office, I turned on my lights and computer.  When I was eating, I put the food that I brought to work in a refrigerator under my desk.  During my work day, I interacted with many people in my air-conditioned, high-technology office environment.  My cellular telephone was never far away.  The view from my office window of the Cascade Mountains was pleasant.  My computer interfaced with our distant data centers and the world at large via the Internet.  When my workday was finished, I rode the bus home.  In the winter, the furnace was programmed to stop running when my wife and I left for work and to come on shortly before we arrived home, so we never came home to a cold house.  In the evening, we might watch a movie on a DVD on our wide-screen plasma TV.  When I am not fasting, I usually eat dinner, with the food in my refrigerator usually purchased at a cooperative grocery store that has an enormous produce section, with food grown locally and imported from as far away as New Zealand, China, and Israel.[1]  We have a high-tech kitchen, with a “smart” stove, refrigerator, and other appliances.

When I resumed my career in 2003, I became an early riser and consequently went to bed by 9:00 PM on most nights, and often read fantasy literature before I turned out the lights and snuggled into bed (with two comforters in the winter to keep us warm as we sleep).

That was a typical winter’s day in early 2013.  During that day, around 80 times the calories that fueled my body were burned to support my activities.  Those dying children often succumbed to hunger and diseases of poverty, and the daily energy that supported their lives was less than 1% of what I enjoyed that day.  How did energy serve my daily activities?  How did that disparity between the dying children and me come to be?  This essay will address those questions.
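The “around 80 times” comparison is simple energy arithmetic: total daily energy consumption divided by dietary energy.  The sketch below uses round numbers that are my own assumptions (a typical adult diet and an approximate US per-capita primary energy figure), not measurements from the essay; these particular assumptions give a ratio of roughly 100, in the same ballpark as the essay’s figure:

```python
# Rough per-capita energy arithmetic (illustrative round numbers, not measured data).
FOOD_KCAL_PER_DAY = 2000          # typical adult dietary energy (assumed)
KCAL_TO_KWH = 1.163e-3            # 1 kilocalorie = 0.001163 kilowatt-hours

food_kwh = FOOD_KCAL_PER_DAY * KCAL_TO_KWH   # about 2.3 kWh/day of food energy
total_kwh = 230.0                 # assumed US per-capita primary energy use, kWh/day

ratio = total_kwh / food_kwh
print(f"food: {food_kwh:.1f} kWh/day, total: {total_kwh:.0f} kWh/day, ratio: {ratio:.0f}x")
```

The exact multiple depends heavily on which energy flows are counted and for whom, which is why estimates of this kind range from tens to well over a hundred.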


The Toolset of Mainstream Science

Humanity is Earth's leading tool-using species, and our tools made us.  Twigs, sticks, bones, and other organic materials were undoubtedly used as tools by our protohuman ancestors, but the only tools to survive for millions of years to be studied today are made of stone; the oldest discovered so far are about 3.3-to-3.4 million years old.[2]  Humanity’s tools have become increasingly sophisticated since then.  The Industrial Revolution was accompanied by the Scientific Revolution, and the synergy between scientific and technological advances has been essential and impressive, even leaving aside the many technologies and related theories that have been developed and sequestered in the above-top-secret world.

The history of science is deeply entwined with the state of technology.  Improving technology allowed for increasingly sophisticated experiments, and advances in science spurred technological innovation.  While many scientific practices and outcomes have been evil, such as vivisection and nuclear weapons, many others have not been destructive to humans or other organisms.  The 20th century saw great leaps in technological and scientific advancement.  My grandfather lived in a sod hut as a child, his son helped send men to the moon, and his grandson pursued world-changing energy technologies and still does.  Relativity and quantum theory ended the era of classical physics and, with their increasingly sophisticated toolset, scientists began to investigate phenomena at galactic and subatomic scales.  Space-based telescopes, electron microscopes, mass spectrometers, atomic clocks, supercolliders, computers, robots that land on distant moons and planets, and other tools allowed for explorations and experiments that were not possible in earlier times.

Intense organized suppression has existed in situations in which scientific and technological advances can threaten economic empires, but many areas of science are not seen as threatening, and reconstructing Earth’s distant past and the journey of life on Earth is one of those nonthreatening areas.  I have never heard of a classified fossil site or a Precambrian specialist being threatened or bought out in order to keep him/her silent.  There is more controversy with human remains and artifacts, but I am skeptical of popular works that argue for technologically advanced ancient civilizations and related notions.  Something closer to “pure science” can be practiced regarding those ancient events without the threat of repercussions or the enticements of riches and Nobel Prizes.  Much of this essay’s subject matter deals with areas in which the distortions of political-economic racketeering have been muted and the theories and tools have been relatively unrestricted.

Mass spectrometers measure the mass of atoms and molecules, and they have become increasingly refined since they were first invented in the early 20th century.  Today, samples that can only be seen with microscopes can be tested, and masses can be measured down to a billionth of a gram.[3]  An element is defined by the number of protons in its atoms’ nuclei, but the number of neutrons can vary, and each nuclear variation of an element is called an isotope.  Unstable isotopes decay into other elements, and the decay products are called “daughter isotopes.”  Scientific investigations have determined that radioactive decay rates are quite stable and are primarily governed by the dynamics within a decaying atom.  The dates determined by radioactive dating have been correlated to other observed processes, and the data has become increasingly robust over the years.[4]
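The arithmetic behind radiometric dating is simple once a decay rate is known.  Below is a minimal sketch in Python (the half-life is the commonly cited value for potassium-40, but the sample ratio is invented, and real analyses must also correct for any initial daughter product and for branching decays):

```python
import math

def radiometric_age(daughter_to_parent: float, half_life_years: float) -> float:
    """Age implied by a measured daughter/parent isotope ratio.

    From the decay law N(t) = N0 * exp(-lambda * t), and assuming no
    daughter product was present at formation, D/P = exp(lambda * t) - 1,
    so t = ln(1 + D/P) / lambda, where lambda = ln(2) / half-life.
    """
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent) / decay_constant

# A sample with equal amounts of daughter and parent is one half-life old.
# Potassium-40's half-life is about 1.25 billion years:
age = radiometric_age(1.0, 1.25e9)
print(f"{age:.3e} years")  # about 1.25e9 years
```

Note that a ratio measurement, not a count of elapsed years, is all that the method requires, which is why mass spectrometers made such dates possible.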

The ability to weigh various isotopes, at increasing levels of precision, with mass spectrometers has provided a gold mine of data.  Scientists are continually inventing new methods and ways to use them, new questions are asked and answered, and some examples of methods and findings follow.

Carbon has two primary stable isotopes: carbon-12 and carbon-13.  Carbon-14 is the famous unstable isotope used for dating recently deceased life forms, but testing carbon’s stable isotopes has also yielded invaluable information.  Carbon is the backbone of all of life’s structures, and life processes often prefer carbon-12, which is lighter than carbon-13 and hence takes less energy to manipulate.  Scientists have been able to test rocks in which the “fossils” are nothing more than smears and determine that those smears resulted from life processes, because the ratio of carbon-12 to carbon-13 in the smears is higher than it would be if life had not been involved.[5]  This has also helped date the earliest life forms.  Life’s preference for lighter isotopes is evident for other key elements such as sulfur and nitrogen, and scientists regularly make use of that preference in their investigations.[6]
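Geochemists report such preferences in “delta” notation: the per-mil (parts-per-thousand) deviation of a sample’s carbon-13/carbon-12 ratio from a reference standard.  A minimal sketch in Python (the standard ratio is the commonly cited VPDB value; the sample ratios are invented for illustration):

```python
VPDB_RATIO = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta_c13(sample_ratio: float) -> float:
    """delta-13C in per mil: negative values mean the sample is
    depleted in the heavy isotope relative to the standard."""
    return (sample_ratio / VPDB_RATIO - 1.0) * 1000.0

# Photosynthesis preferentially fixes the lighter carbon-12, so
# biogenic carbon is depleted in carbon-13, typically by roughly
# 20-30 per mil, while inorganic marine carbonate sits near 0.
print(delta_c13(0.0112372))  # 0.0 (the standard itself)
print(delta_c13(0.0109563))  # about -25: a plausibly biogenic signature
```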

The hydrological cycle circulates water through Earth’s oceans, atmosphere, and land.  The energy of sunlight drives it, and that sunlight is primarily captured at the surface of water bodies, the oceans in particular.  The hydrological cycle’s patterns have changed over the eons as Earth’s continental configurations and surface temperature have changed.  Today’s global weather system generally begins with sunlight hitting the atmosphere, and the equator’s air receives the most direct radiation and becomes warmest.  That air rises and cools, which reduces the water vapor that it can hold, so the moisture falls as rain.  That is why tropical rainforests are near the equator.  The rising equatorial air pushes dry air toward the poles at high altitude, and at about 30° latitude that air cools and sinks to the ground.  That dry air not only brings no precipitation, but it absorbs moisture from the land it hits, which forms the world’s great deserts.  The high pressure at the ground at 30° latitude pushes air back toward the tropics, and Earth’s rotation bends those winds in the northern and southern hemispheres to create the trade winds, which pick up moisture as they approach the equator.  The pole-ward sides of the mid-latitudes’ dry temperate regions also have low pressures and wet climates, and dry high-pressure zones exist at the poles.  As clouds pass over land, mountains force them upward and they lose their moisture as precipitation.[7]  As that water makes its way back to the oceans to start the cycle again, it provides the freshwater for all land-based ecosystems.  Below is a diagram of those dynamics.  (Source: Wikimedia Commons)

A water molecule containing oxygen-16 (the most common oxygen isotope) will be lighter than a water molecule containing oxygen-18 (both are stable isotopes), so it takes less energy to evaporate an oxygen-16 water molecule than an oxygen-18 water molecule.  Also, after evaporation, oxygen-18 water will tend to fall back to Earth more quickly than oxygen-16 water will, because it is heavier.  As a consequence, air over Earth’s poles will be enriched in oxygen-16 – the colder Earth’s surface temperature, the less oxygen-18 will evaporate and be carried to the poles – and scientists have used this enrichment to reconstruct a record of ocean temperatures.  Also, the oxygen-isotope ratio in fossil shellfish (as their life processes prefer the lighter oxygen isotope) has been used to help determine ancient temperatures.  During an ice age, because proportionally more oxygen-16 is retained in ice sheets and does not flow back to the oceans, the ocean’s surface becomes enriched in oxygen-18 and that difference can be discerned in fossil shells.  Sediments are usually laid down in annual layers, and in some places, such as the Cariaco Basin off of Venezuela's coast, undisturbed sediments have been retrieved and analyzed, which has helped determine when ice sheets advanced and retreated during the present ice age.[8]
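As one illustration of how shell chemistry becomes a thermometer, the classic Epstein carbonate paleotemperature equation relates the oxygen-18 enrichment of fossil calcite, relative to the water it grew in, to the growth temperature.  The coefficients below are the commonly quoted 1953 calibration, and the sample values are invented:

```python
def carbonate_temperature_c(delta_calcite: float, delta_water: float) -> float:
    """Approximate growth temperature (deg C) from the classic Epstein
    et al. calibration: T = 16.5 - 4.3*d + 0.14*d**2, where d is the
    calcite delta-18O minus the seawater delta-18O (per mil)."""
    d = delta_calcite - delta_water
    return 16.5 - 4.3 * d + 0.14 * d ** 2

# Heavier shells mean colder water.  During an ice age, oxygen-16 is
# locked up in ice sheets, so seawater (and shells) grow richer in
# oxygen-18:
print(carbonate_temperature_c(0.0, 0.0))  # 16.5 deg C
print(carbonate_temperature_c(2.0, 0.0))  # about 8.5 deg C: colder water
```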

Mass spectrometers have been invaluable for assigning dates to various rocks and sedimentary layers, as radioactive isotopes and their daughter isotopes are tested, including uranium-lead, potassium-argon, carbon-14, and many other tests.[9]  Also, the ratios of elements in a sample can be determined, which can tell where it originated.  Many hypotheses and theories have arisen, fallen, and been called into question or modified by the data derived from those increasingly sophisticated methods, and a few examples should suffice to give an idea of what is being discovered.

The moon rocks retrieved by Apollo astronauts are still being tested as new experiments and hypotheses are devised.  In 2012, a study was published that tested moon rocks for their ratio of titanium-50 to titanium-47 (both stable isotopes), and it called into question the hypothesis that the Moon was formed by a planetary collision more than four billion years ago: the lunar ratio was so much like Earth’s that very little of the hypothesized colliding body could have become part of the Moon.  The collision hypothesis will probably survive, but it may end up significantly different from today’s version.  Meteorites have been dated as well as moon rocks; their ages confirm the age of Earth that geologists have derived, and meteorite dates provide more evidence that our solar system probably developed from an accretion disk.

In the Western Hemisphere, the Anasazi and Mayan civilization collapses of around a thousand years ago, and the Mississippian civilization collapse of 500 years ago, have elicited a great deal of investigation.  From New Age ideas that the Anasazi and Mayan peoples “ascended” to the Eurocentric conceit that the Mississippian culture was European in origin, many speculations arose that have been disproven by the evidence.  It is now known that the Anasazi and Mayan collapses were influenced by epic droughts, but drought was only the proximate cause.  The ultimate cause was that those civilizations were not energetically sustainable, and the unsustainable Mississippian culture was in decline long before Europeans invaded North America.  The Anasazi used logs to build the dwellings that today are famous ruins.  Scientists have used strontium ratios in the wood to determine where the logs came from, as well as dating the wood with tree-ring analysis and analyzing pack rat middens, and a sobering picture emerged.  The region was already arid, but agriculture and deforestation desertified the region around Chaco Canyon, which was the heart of Anasazi civilization.  By the time Anasazi civilization collapsed, the people of Chaco Canyon were hauling in timber from mountains more than 70 kilometers away (the strontium ratios could trace each log to the particular mountain that it came from).  When the epic droughts delivered their final blows, Anasazi civilization collapsed into a morass of starvation, warfare, and cannibalism, and the forest has yet to begin to recover, nearly a millennium later.[10]

Another major advance happened in the late 20th century: the ability to analyze DNA.  DNA’s double-helical structure was discovered in 1953.  In 1973, the first amino acid sequence for a gene was determined.  In 2003, the entire human genome was sequenced.  Sequencing the chimpanzee genome was accomplished in 2005, the orangutan genome in 2011, and the gorilla genome in 2012.  The comparisons of human and great ape DNA have yielded many insights, but the science of DNA analysis is still young.  What has yielded far more immediately relevant information has been studying human DNA.  The genetic bases of many diseases have been identified.  Hundreds of falsely convicted Americans, including nearly 20 from death row, have been released from prison after DNA evidence proved their innocence.  Human DNA testing has also provided startling insights into humanity’s past.  For instance, it appears that after the ice sheets receded 16,000 to 13,000 years ago, humans repopulated Europe, and for all of Europe’s bloody history over the millennia since then, there have not really been mass population replacements by invasion, migration, genocide, and the like.  Europeans just endlessly fought each other and honed the talents that helped them conquer humanity.  There were migrations of Fertile Crescent agriculturalists into Europe, but other than hunter-gatherers being displaced or absorbed by the more numerous agriculturalists, there do not appear to have been many population replacements.  In 2010, a study suggested that male farmers from the Fertile Crescent founded the paternal line for most European men as they mated with the local women.  DNA testing has demonstrated that all of today’s humans are descended from a founder population of about five-to-ten thousand people, a few hundred of whom left Africa around 60-to-50 thousand years ago and conquered Earth.
The Neanderthal genome has been sequenced, as well as genomes of other extinct species, and for a brief, exuberant moment, some scientists thought that they could recover dinosaur DNA, Jurassic-Park-style.  Although dinosaur DNA is unrecoverable, organic dinosaur remains have been recovered, and even some proteins have been sequenced, which probably no scientist believed possible in the 1980s.[11]

Since 1992, scientists have discovered planets in other star systems by using a variety of methods that reflect their improving toolset, especially space-based telescopes.  Before those discoveries, there was controversy over whether planets were rare phenomena, but scientists now accept that planets are typical members of star systems.  Extraterrestrial civilizations are probably visiting Earth, so planets hosting intelligent life may not be all that rare.

Those interrelated and often mutually reinforcing lines of evidence have made many scientific findings difficult to deny.  The ever-advancing scientific toolset, the ingenuity of the scientists who develop and use it, and particularly the multidisciplinary approach that scientists and scholars increasingly take have made for radical changes in how we view the past.  Those radical changes will not end any time soon, and what follows will certainly be modified by new discoveries and interpretations, but I have tried to stay largely within the prevailing findings, hypotheses, and theories, while also poking somewhat into the fringes and leading edges.  Any mistakes in fact or interpretation in what follows are mine.


The Orthodox Framework and its Limitations

  Chapter summary:

In the West, the conception of the physical universe and humanity’s ability to manipulate it has changed remarkably in the past few thousand years, which is a tiny fraction of humanity’s journey on Earth.  Thousands of years ago, the Greek philosophers Democritus and Leucippus argued that the universe was composed of atoms and the void, and Pythagoras taught that Earth orbited the Sun.[12]  Greeks also invented the watermill during the same era.  Hundreds of years later, a Greek mathematician and engineer, Heron of Alexandria, described the first steam engine and windmill and is typically credited as the inventor, but the actual inventors are lost to history.  Western science and technology did not significantly advance for the next millennium, however, until ancient Greek writings were reintroduced to the West via captured Islamic libraries.  The reintroduction of Greek teachings, and the pursuit of their energy technologies, ultimately led to the Industrial and Scientific revolutions.

Scientific practice is ideally a process of theory and experimentation that can lead to new theories.  Today’s scientific process has three general aspects, and it developed from a method proposed by John Herschel, which Charles Darwin used to formulate his theory of evolution.[13]  First, facts are adduced.  Facts are phenomena that everybody can agree on, ideally produced under controlled experimental conditions that can be reproduced by other experimenters.  Hypotheses are then proposed to account for the facts by using inductive and abductive logic.  The hypotheses are usually concerned with how the universe works, whether in star formation or evolution.  If a hypothesis survives the fact-gathering process – often by predicting facts that later experiments verify – then the hypothesis may graduate to the status of a theory.[14]  Scientific theories ideally can be falsified, which means that they can be proven erroneous.  The principle of hypothesis and falsification is primarily what distinguishes science from other modes of inquiry.

It has been a typical dynamic for hypotheses and theories to be relegated to oblivion without getting a fair hearing, as the pioneer dies in obscurity or is martyred, only to be vindicated many years later.  The man who first explained the dynamics behind the aurora borealis, Kristian Birkeland, died in obscurity in 1917, with his work attacked and dismissed.  It was not until Hannes Alfvén won the Nobel Prize in 1970 that Birkeland’s work was finally vindicated.  Endosymbiotic theory, the widely accepted theory of how mitochondria, chloroplasts, and other organelles came to be, was first proposed in 1905, quickly dismissed, and not revived until the late 1960s.

When a new hypothesis appears, particularly a radical one, even when there is no lone pioneer suffering martyrdom, the old guard usually attacks it, and all too often the situation degenerates into bitter feuds between armed camps, as with the rise of the asteroid-impact hypothesis for the dinosaurs’ demise.[15]  To a degree, those withering attacks are how science is supposed to work.  Doubt, not faith, is the guiding principle of science.[16]  Until a scientist’s bright idea is tested against the real world, it is just a bright idea.  Only hypotheses that have survived numerous attempts to falsify them graduate to becoming theories.  It can be argued that the “attack mode” that science has adopted toward new hypotheses has formed a structural bias, so that all scientific pioneers will be attacked by their peers; it is simply the nature of the profession.  Only scientists who can weather the attacks from their peers will survive long enough to see their hypotheses receive a fair hearing.  That “shark tank” environment, particularly with lucrative prizes and tenured academic berths awaiting the winners, has arguably set back science’s progress considerably.

Given what I know has been suppressed by private interests, often with governmental assistance, mainstream science is largely irrelevant regarding many important issues that could theoretically be within its purview.  Paradoxically, scientists can also fall for fashionable theories and get on bandwagons.[17]  Scientific practice is subject to human foibles, just as all human endeavors are.  There can be self-reinforcing bias, in that the prevailing hypotheses can determine which facts are adduced, so that potential facts escape inquiry.  That is particularly true when entire lines of inquiry are forbidden by organized suppression and the excesses of the national security state, and by the indoctrination that scientists, like all people, are subject to.

Early in the 20th century, radical theories were proposed that remade scientists’ view of the universe.  Along with relativity and quantum theory, a primary pillar of today’s physics is the notion that everything in the universe is a form of energy, as summarized by Einstein’s equation: E = mc².  Although the notion is still challenged in unorthodox corners, today’s prevailing hypothesis is that the universe came into existence in an instant called the Big Bang, stars are the energy centers of the observable universe, and nuclear fusion powers them.  When the Big Bang supposedly happened, there was no matter, only energy.  Only when the universe had sufficiently expanded and cooled, less than a second after the Big Bang, did matter begin to appear; matter is considered to consist of relatively low-energy states.[18]  This essay hews fairly closely to today’s orthodox perspective for much of its length.  However, there will be limitations, and some of them follow.

In the early days of science, it had a quasi-religious stature among its practitioners, and 19th-century scientists were prone to calling their hypotheses and theories “laws,” often appending their names to the “laws” as soon as possible, like imperialist “explorers” of the era appending European names to landmarks that they encountered during their conquests.  Brian O’Leary, one of the two people to whose memory this essay is dedicated, was a former astronaut, Ivy League professor, and political activist who explored the frontiers of science.  He stated that there are no “laws” of physics, only theories, but the term “law” is lodged deeply in the scientific lexicon, although by the 20th century scientists had stopped calling new hypotheses and theories laws.  Modest scientists readily admit that the so-called “laws” of science are not the “laws” of the universe, but rather human ideas about what those laws might be, if there are any laws at all.  As Einstein and his colleagues readily admitted, the corpus of scientific fact and theory barely says anything at all about how the universe works.  Sometimes, paradigms shift and scientists see the universe with fresh eyes.  The ideals and realities of scientific practice are often at odds.  Ironically, when scientists reach virtual unanimity on a theory, it can be a sign that the theory is about to radically change, and many if not most scientists will go to their graves believing the theory that they were originally taught, no matter how much evidence weighs against it.

A key tension in mainstream science has long been the conflict between specialists and the generalists and multidisciplinarians.  The specialist’s motto might be, “The devil is in the details.”  Deductive reasoning is their specialty and reductionist principles often guide their investigations, in which breaking down phenomena into their most basic components is the goal.  The generalist’s motto might be, “I seem to see a pattern here.”  Generalists often use inductive reasoning and tend to think holistically, usually in terms of systems, and they recognize emergent properties arising from higher levels of systems complexity, which can be something new and not necessarily inherent in lower levels of complexity or predictable by analyzing those lower levels.  New hypotheses often come from generalists and their inductive reasoning, and the best of them usually have some flash of insight that leads them to their breakthroughs, which is called intuition or the Creative Moment.  I found that it is a close cousin to psychic ability, if not the same thing. 

Specialists are often those on the ground, getting their hands dirty and doing the detailed work that forms the bedrock of scientific practice.  Without their efforts, science as we know it would not exist.  However, mainstream science has long suffered from the tunnel vision that overspecialization encourages, and R. Buckminster Fuller thought that the epidemic overspecialization and naïveté of mainstream scientists in his time was a ruling class tactic to keep scientists controlled and unable to see the forest for the trees.[19]  That has been slowly changing in my lifetime, so that collaborative efforts are drawing from multiple disciplines and achieving synthetic views that were not feasible in earlier times, and patterns are newly recognized that were invisible in a scientific world filled with isolated specialists.  Many paradigmatic breakthroughs in science and technology were made by non-professionals, specialists working outside of their field of professional expertise, and generalists traversing disciplinary boundaries.[20]  Scientific training today attempts to prevent that overspecialized tunnel vision, and today’s practicing scientists ideally get deep into the details and then pull back and try to see context, connections, and patterns.  A comprehensivist tries to understand the details well enough to refrain from making unwarranted generalizations while also striving for that big-picture awareness.  There are also top-down and bottom-up ways to approach analyses; each can provide critical insight, and scientists and other analysts often try to use both.[21]

Another key set of tensions exists among theorists, empiricists, and inventors.  Theorists attempt to account for scientific data and ideally predict data yet to be adduced, which tests the validity of their hypotheses and theories.  Empiricists often produce that scientific data.  Inventors create new technologies and techniques.  Albert Einstein is the quintessential theorist; he never performed experiments relating to his theories but accounted for experimental results and predicted new ones.  Michelson and Morley, who performed the experiment whose results various scientists wrestled with for a generation before Einstein proposed his special theory of relativity, never suspected that their experiment would lead to the theories that it did.  The most important experiments in science’s history were often those that produced unexpected results, and they were usually called failures.  Einstein’s general theory of relativity had almost no experimental evidence when he proposed it (it explained Mercury’s orbit, but that was the only evidence at the time), but it has been confirmed numerous times since then.  Einstein expected that his theories would eventually be falsified by experimental evidence, but that the best parts of them would survive in the new theories.

The Wright brothers were typical inventors.  Before they flew, theorists said that heavier-than-air powered flight was “impossible,” mainstream science ignored or ridiculed them for five years after they first flew, and the Smithsonian Institution tried for generations to deny the Wright brothers their rightful precedence.  The theorists were spectacularly wrong, the empiricists had abandoned their primary principle of observation, and it was up to inventors to finally open their eyes and minds, years after the public witnessed the new technologies working.  Brian O’Leary told me that the scientific establishment’s collective blindness and denial are worse in the early 21st century than they were in the Wright brothers’ time.

I have encountered numerous technologies that theorists denounce as “impossible” and empiricists ignore as if they did not exist, while the inventors are not exactly sure why their inventions work, only that they do.  Such inventions often threaten to upend the very foundations of scientific disciplines, which is primarily why they have been ignored as they have been, when they are not actively suppressed.[22]  When such breakthroughs threatened the dominance of the industrial/professional rackets, the risks could become deadly.

The findings of mainstream science can be particularly persuasive when lines of evidence from numerous disciplines independently converge, which has become increasingly common as scientific investigations have become more interdisciplinary.  DNA testing is clearly showing descent relationships, and ghost ancestors are being reconstructed via genetic testing.  Numerous dating methods are used today, and more are regularly invented.  Typically, a new technique will emerge from obscurity, often pioneered by a lonely scientist.  For instance, dendrochronology, the reading of tree rings, was developed as a dating science through the dogged efforts of an astronomer who labored in obscurity for many years.  He was a fortunate pioneer; when he died after nearly 70 years of effort, he had lived to see dendrochronology become a widely accepted dating method.  Eventually, a new method can break past the inertia and active suppression, sometimes even when the breakthrough threatens powerful interests.  Then the newly accepted method can be seen as a panacea for all manner of seemingly insoluble problems, in the euphoric, bandwagon phase.  Yesterday’s heresy can become tomorrow’s dogma.  Then early victories may not seem as triumphant as previously hailed, and a “morning after” period of sobering up arrives.  The history of science is filled with fads that faded to oblivion, sometimes quickly, while advances that survived the withering attacks are eventually seen in a more mature light, in which their utility is acknowledged along with their limitations.  DNA and molecular clock analyses have largely passed through those phases in recent years.  In the 1980s, the idea of room-temperature superconductors had its brief, frenzied day in the sun when high-temperature superconductors were discovered.  Cold fusion had a similar trajectory, although the effect seems real, and MIT manipulated its data to try to make the effect vanish.
A scientist who spoke out against MIT’s apparent fraud was murdered years later, at the same time as a series of events that I was close to, which may have been related.  After the bolide-impact hypothesis broke through a taboo that had lasted for more than a century, some scientists tried explaining all mass extinctions with bolide impacts.  Today, the bolide event that ended the dinosaurs’ reign is the only impact event widely accepted as responsible for a mass extinction, and even that event is still under siege by scientists who propose other dynamics for the dinosaurs’ extinction.

In the dating sciences, all of the tests have had their issues and refinements.  The equipment has become more sophisticated, problems have been resolved, and precision has been enhanced.  While there are continuing controversies, dating techniques have advanced just as many other processes have over the history of science and technology.  In 2014, dates determined for fossils and artifacts are generally accepted with confidence only when several different samples are independently tested, and by different kinds of tests if possible.  When thermoluminescence, carbon-14, and other tests produce similar dates, and when they agree with stratigraphic evidence, paleomagnetic evidence, current measurements of hotspot migration rates across tectonic plates, and the genetic and other evidence introduced in the past generation, those converging lines of evidence produce an increasingly robust picture of not only what happened, but when.
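When several independent methods date the same find, and their errors are roughly independent and Gaussian, the estimates can be pooled with an inverse-variance weighted mean, a standard statistical device; the dates below are invented for illustration:

```python
import math

def combine_dates(estimates):
    """Inverse-variance weighted mean of independent (age, sigma) pairs.
    Returns the pooled age and its combined 1-sigma uncertainty."""
    weights = [1.0 / sigma ** 2 for _, sigma in estimates]
    pooled = sum(w * age for w, (age, _) in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / math.sqrt(sum(weights))

# Hypothetical dates for one artifact, in years before present:
estimates = [
    (41_000, 1_500),  # carbon-14
    (43_000, 3_000),  # thermoluminescence
    (40_500, 2_000),  # stratigraphic correlation
]
age, sigma = combine_dates(estimates)
print(round(age), round(sigma))  # the pooled date is tighter than any single test
```

The design choice is the key point: precise tests dominate the average, and every added independent test shrinks the combined uncertainty, which is why converging methods are so persuasive.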

In the 1990s, I found the dating issue enthralling and saw it assailed by fringe theorists, and by catastrophists in particular.  A couple of decades later, I reached the understanding that, like all sciences, dating has its limitations, and that enthusiasm for a new technique can become a little too exuberant, but dating techniques and technologies have greatly improved in my lifetime.  Dating the Cambrian Period’s beginning to 541 million years ago, in increments of 100,000 years, may seem a conceit – thinking that scientists can place that event with such precision – but over the years my doubts have diminished.  When moon rocks and meteorites can be tested, and the findings support not only the age of Earth previously determined by myriad methods, but also the prevailing theories of the solar system’s and Moon’s formation, call me impressed.  Controversies will persist over various finds and methods, and scientific fraud certainly occurs, but taken as a whole, those converging lines of independently tested evidence make it increasingly unlikely that the entire enterprise is a mass farce, delusion, or conspiracy, as many from the fringes continue to argue.  There is still a Flat Earth Society, and it is not a parody.  I have looked into fringe claims for many years and few of them have proven valid; even if many more were valid, their potential importance to the human journey was often minor to trifling.  As the story that this essay tells comes closer to today’s humanity, orthodox controversies become more heated and fringe claims proliferate.

Quite often, the pioneers of science and technology receive no credit at all, not even posthumous vindication, as others steal their work and become rich and famous.  If private and governmental interests do not suppress the data and theory, as is regularly achieved regarding alternative energy and other disruptive technologies, the data will usually win eventually.  But the data does not always win.  The expedient but misleading tale of Louis Pasteur’s triumph in explaining the origins of life, which microbiology students are still taught in college, is an example of false credit attributed to a figure who may also have marched the discipline off in a wrong direction, from which it has yet to recover.  Another problem has been fabricated “discoveries” that became uncritically accepted by the mainstream, while science’s ideal “skepticism” completely disappeared, as powerful interests promoted industrial waste as “medicine,” for instance, as was done with fluoridation.  It was also done with tobacco smoking, and medical authorities even promoted asbestos cigarette filters, in one of many “believe it or not” episodes in the history of science and medicine.  Mercury was sold as “medicine” until my lifetime, and is still found in vaccines, for which the very theoretical and empirical foundation seems pretty shaky.  Lead received a similar clean bill of health from industrially funded laboratories whose conflicts of interest were surreal, and the public was completely unaware of who was really managing such public health issues, and why.  Similar situations exist today.

Perhaps the most significant challenge to mainstream science is the fact that numerous advanced technologies already exist on Earth, including free energy and antigravity technologies, but they are actively kept from public awareness and use.  They and other exotic technologies developed in the above-top-secret world operate on principles that make the physics textbooks resemble cave drawings.

Although some scientists have challenged Carnot’s Second Law of Thermodynamics and even the First, tapping the zero-point field, as some fellow travelers did, does not violate the “laws of physics” at all; it is merely harnessing an energy source that mainstream science does not recognize, even though its greatest minds have posited its existence.  For that reason, my astronaut colleague called such energy “New Energy,” and we co-founded an organization in 2003 with that name.  However, when my partner and I began to build a business in 1987 around “New Energy,” he called it “free electricity” in ads, and we used the term “free energy” before we knew anything about the field or our professional ancestors.  I used the term “free energy” for many years before I heard the term “new energy,” and I will probably always use “free energy” (“FE”), largely because I grew up with it and it is still commonly used in the field.  My partner's shared savings programs were also the closest thing to truly “free” energy that has ever been on the world market.

Thousands of scientists and inventors have independently pursued FE technologies, but all such efforts, if they had promise or garnered any success, have been suppressed by a clandestine and well-funded effort of global magnitude.  However, this essay will lay most of that aside until near the essay’s end, other than to note that one of Einstein’s protégés, David Bohm, theorized that space is anything but empty.[23]  Einstein also stated that his general theory of relativity resurrected the idea of an ether that his special theory of relativity supposedly rendered obsolete.[24]  According to Bohm’s computation, the energy existing in “empty space” is unimaginably vast, as one cubic centimeter of it contains more energy than is contained in all the mass of the known universe.  One of Fuller’s pupils not only subscribed to the notion that “empty” space is not empty, but he helped build technologies that harnessed that energy source, and his life’s story, like my former partner’s, is hard to believe, but has impressive evidence for its validity.  According to him, the recently discovered Higgs boson is part of an effort to “rebrand” what has been called the zero-point field and other names over the years, which is the field that FE technology often harnesses.[25]  I have encountered dozens of instances of scientists with theories that challenge the Standard Model of particle physics, and their primary upshot is a “new” energy source, which is often called zero-point energy.[26]  But, black projects[27] and “leading edge” theory aside (theory that is far older than I am), technologies have been publicly available for many years whose operation upends some of science’s oldest theories.[28]  “White science” (AKA "Establishment" or "mainstream" science) has great defects, especially when its pursuit conflicts with deeply entrenched economic and political interests.

Although the greatest physicists were arguably mystical in their orientation, they rarely explored the nature of consciousness in the way that modern human potential efforts have.  When I was 16 years old, it was demonstrated to me, very dramatically, that everybody inherently possesses psychic abilities, which falsifies today's materialistic theories of consciousness.  Millions of people had similar experiences during the last decades of the 20th century when performing such exercises.  They are usually life-changing events and available to nearly anybody who devotes the time to experiencing them, but a politically active arm of mainstream science, known as organized “skepticism,” has waged a holy war against such evidence for longer than I have been alive.  The scientific establishment’s warriors often denigrate such phenomena as “pseudoscience,” which is a term that they greatly abuse when attacking ideas and phenomena outside of their ability to investigate or that conflict with their materialistic assumptions.  Far too often, when scientists discuss materialism, they compare it to organized religion, particularly its fundamentalist strains, as if those are the only two alternatives, when they are on opposite ends of a spectrum in one way and two sides of the same coin in others.  Ironically, organized skepticism largely consists of anti-scientists who try to deny that such abilities of consciousness are even worthy of scientific investigation.  That they defend materialism with flawed logic, dishonesty, and dirty tricks is one thing, but all too often, as I performed the studies that led to this essay, I saw mainstream scientists trust the “skeptics” for their pronouncements on the validity of “paranormal” phenomena.  That would be like asking a Wall Street executive in the 1950s what his opinion of communism was.

I was also regularly dismayed by orthodox scientific and academic works that dealt with the human brain, consciousness, human nature, UFOs, FE technology, and the like, in which the authors accepted declassified government documents at face value (as in not wondering what else remained classified, for starters) or looked no further than 19th-century investigations.[29]  Direct personal experience is far more valuable than all of the experimental evidence that can be amassed; there is no substitute for it, as that is where knowledge comes from.  Armchair scientists who accept the skeptics' word for it have taken the easy way out and rely on highly unreliable "investigators" to tell them about the nature of reality.  They consequently do not have informed opinions, or perhaps more accurately, they have disinformed opinions.  The holy warriors’ efforts aside, the scientific data is impressive regarding what has been called “psi” and other terms, which clearly demonstrated abilities of consciousness that are still denied and neglected by mainstream science.[30]  Brian O'Leary advocated scientific testing of paranormal phenomena, but he was a voice in the wilderness.

Not all mainstream scientists relegate consciousness to a mere byproduct of chemistry.  John von Neumann’s interpretation of quantum mechanics is that consciousness is required for the wavefunctions that describe fields at the subatomic level to collapse into observable particles.[31]  He was not the only scientist whose theories required consciousness to exist in order for the physical universe to become observable.  The greatest physicists knew that materialism was a doctrine built on unprovable assumptions, which amounts to a faith.[32]  It can be quite revealing when mainstream scientists deal with phenomena that challenge the tenets of their faith.  Forthcoming quantum physicists regard the controversy over the implications of quantum theory as “our skeleton in the closet.”[33]  To the end of his life, Einstein was very uncomfortable with the implications of quantum theory, and his disquiet was ahead of its time.[34]  French physicist Alain Aspect performed a state-of-the-art test of Bell’s inequality, helping to establish the reality of quantum entanglement, which Einstein derided to his grave as “spooky action at a distance.”  When they met and Aspect proposed the experiment, John Bell’s first question was, “Do you have tenure?”[35]  That paradox at the heart of quantum physics was avoided by the Copenhagen interpretation, which focuses on getting the right answers for quantum predictions and avoids the implications for reality that the quantum enigma presents.[36]  Einstein and Schrödinger were not satisfied with a framework that made accurate predictions but avoided grappling with what was really happening.

White science still has almost nothing to say about the nature of consciousness.  However, Black Science (covert, largely privatized, and the same province where that advanced technology is sequestered) is somewhat familiar with the nature of consciousness and considers it to be far more than a byproduct of chemistry.  The assumption that the entire universe is a manifestation of consciousness is not only unassailable by White Science, but is probably a foundational assumption of Black Science and mystics.

The battle between materialists and religious orders over the years, in which materialist evolutionists grapple with creationists and intelligent design proponents, seems to be a feud between two fundamentalist camps.  Nowhere in such battles are the abilities or wisdom of accomplished mystics found.  The nature and role of consciousness, both in this dimension and beyond it, are likely far too subtle to be profitably engaged by the level of debate that predominates today.  Scientists such as Einstein were awestruck by the evident intelligence behind the universe’s design, but that did not mean that they believed in a God with a flowing beard.  As this essay will explore later, those issues are not merely fodder for idle philosophical pursuit, but at their root lies the crux of the current conundrum that humanity finds itself in, as we race toward our self-destruction.

White Science does not really know what energy is; it can only describe its measurable effects.[37]  At its root, there are two primary components of our universe: energy and consciousness.  Our universe may have begun as pure energy (and even if it did not, all matter appears to be comprised of energy), and consciousness may be required for our universe to exist at all, which may be part of the quantum paradox.  Energy and matter may be manifestations of consciousness, and large brains could be simply more refined “transducers” for more complex consciousness to manifest in physical reality.  In summary, everything physical is made of energy and our consciousness is all that we know, but the greatest physicists admitted that the nature of consciousness is not something that today’s science is equipped to study.  There is evidence that evolution is not purely the province of chance mutations, but that organisms can affect their evolution at the genetic level.[38]

The greatest scientists readily admitted that the theories and data of physics, that hardest of the hard sciences, drew highly limited descriptions of reality, and those scientists were usually, to one extent or another, mystics.  If textbook science falls far short of explaining reality, what can be said within its framework that is useful?  Plenty.  Our industrialized world is based on textbook science and feats such as putting men on the Moon were performed within the parameters of textbook science.  With the waning of overspecialization and overreliance on reductionism in the last decades of the 20th century, multidisciplinary works have proliferated and will tend to dominate the references for this essay.  I have found them not only very helpful for my own understanding, but they are appropriate references for a generalist essay.  I have also avoided scientific terminology when feasible.  For example, I use “seafloor” instead of “benthic,” and if a non-specialized term will suffice for a scientific concept, I will often use it.

The mainstream theory is that matter consists of elementary particles (which are all forms of energy), and their interaction with the Higgs field is responsible for all mass.  Almost all mass in the known universe consists of protons in hydrogen atoms, and those protons are in turn comprised of quarks, and electrons and neutrinos are the other first generation fundamental particles.  Protons have a positive electric charge, electrons a negative charge, and neutrinos no net charge.  The simplest atom consists of one proton in the nucleus and one electron in “orbit” around it, which is the most common hydrogen atom.  Today, mainstream science recognizes four forces in the universe: gravity, electromagnetism, and the strong and weak forces in an atom’s nucleus.  Gravity attracts matter to matter, and is thought to be responsible for the formation of stars, planets, and galaxies.  But the universe seems to be built from processes, not objects.

The Standard Model of particle physics is complex, but the preceding presentation is largely adequate for this essay’s purpose, while it can be helpful to be aware that the physics behind FE and antigravity technologies will probably render the Standard Model obsolete.  If FE, antigravity, and related technologies finally come in from the shadows, the elusive Unified Field may come with them, and the Unified Field might well be consciousness, which will help unite the scientist and the mystic, and that field may be divine in nature.  But that understanding is not necessary to relate the story that White Science tells today of how Earth developed from its initial state to today’s, when complex life is under siege by an ape that quickly spread across the planet like a cancer once it achieved the requisite intelligence, social organization, and technological prowess.

With the above limitations acknowledged, this essay will explore the earthly journeys of life and humanity, and energy’s role in them. 


Energy and Chemistry

  Chapter summary:

This chapter presents several energy and chemistry concepts essential to this essay.  Even though scientists do not really know what energy is (they do not know what light or gravity are, either), energy is perhaps best seen as motion, whether it is a photon flying through space, the "orbit" of an electron around an atom's nucleus or of Earth around the Sun, an object falling to Earth, a river flowing toward the ocean, air moving through Earth's atmosphere, rising and falling tides, or blood moving through a heart.

In their dance around an atom’s nucleus, electrons exist in “shells.”  The most stable electron configuration exists when the electrons fill the shells and each electron is paired with another, and each electron spins in the opposite direction of its partner.  The classical view of an electron had an electron orbiting the nucleus much in the same way that Earth orbits the Sun, but quantum theory presents a different picture, in which an electron is a wave that only appears to be a particle when it is observed.  Even then, a hydrogen electron’s orbit as presented by quantum theory does not look much different from the classical image, and the classical view largely suffices for this essay in presenting the energetic aspects of the electrons’ properties.

When one electron shell is filled, electrons begin to fill shells farther from the nucleus.  For the simplest atoms it works that way, but for larger atoms, particularly those of metallic elements, electrons fill shells in more complex fashion and electrons begin to fill subshells not necessarily in the shell closest to the nucleus.  When an electron is unpaired or in an unfilled shell, it can be a valence electron, which can form bonds with other atoms.  In most circumstances, only unpaired electrons form bonds with other atoms.  Electron bonds between atoms provide the basis for chemistry and life on Earth.
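The simple shell-filling described above can be sketched in a few lines of Python.  This is a toy model of my own, assuming the 2n² shell-capacity rule and strictly sequential filling, so it only holds through argon (Z = 18), after which the subshell complications mentioned above take over:

```python
# Toy shell-filling model: electrons per shell for a given atomic number.
# Assumes shell n holds up to 2*n^2 electrons and shells fill in order,
# which is only valid through argon (Z = 18).
def shell_configuration(atomic_number):
    shells = []
    remaining = atomic_number
    n = 1
    while remaining > 0:
        capacity = 2 * n ** 2          # shell n holds up to 2n^2 electrons
        shells.append(min(remaining, capacity))
        remaining -= shells[-1]
        n += 1
    return shells

print(shell_configuration(1))   # hydrogen: [1] - one unpaired electron
print(shell_configuration(2))   # helium:   [2] - filled shell, inert
print(shell_configuration(8))   # oxygen:   [2, 6] - six of eight outer slots filled
```

Hydrogen's lone unpaired electron and oxygen's two vacancies, which drive the bonding discussed below, fall straight out of this simple counting.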

For that simplest element, hydrogen, its lone electron has an affinity to pair up with another electron, and that smallest shell contains two electrons.  Hydrogen is never found in its monoatomic state in nature, but is bonded to other elements, as that lone electron finds another one to pair with, which also fills that simplest shell.  In its pure state in nature, hydrogen is found paired with itself and forms a diatomic molecule.  In chemistry notation, it is presented as H2.  The most common hydrogen combination with another element on Earth is with oxygen (“O” in chemistry notation), which forms water and is presented as H2O.  Oxygen has two unpaired electrons in its electron shell (its outer shell has eight positions for electrons, with six of them filled), and oxygen’s electrons pair with electrons in other atoms with a “hunger” that is only surpassed by fluorine, which is the most reactive known element.  The “hungriest” atoms can completely strip an electron from nearby atoms and form ions, whereby the resulting atoms have imbalances between their electrons and protons, and thus possess net electric charges.  An atom that loses an electron in a chemical reaction is called “oxidized,” while the atom that gains one is called “reduced.”  When electrons are transferred or shared, those hungriest atoms will cause the greatest amounts of energy to be released in the reactions.  Fluorine is so reactive that if it were sprayed on water, the water would burn.

The element with two protons in its nucleus is helium (the number of protons determines what element the atom is), and its electrons are paired and its shell is filled.  Consequently, helium does not want to share its electrons with anything.  Helium is the most non-reactive element known.  It has never bonded with any other element, even fluorine.  In the periodic table of the elements, helium is in the family known as noble gases (formerly named “inert”), because they resist reacting with other atoms.  Their electron shells are completely filled. 

An electron’s distance from the nucleus can vary.  It is not a smooth variation of distance, but only certain distances are possible.  When an electron changes its distance, it jumps in a process known as quantum leaping.  That quantum leaping reflects how electrons gain or release energy.  When light hits an atom, if it is absorbed by an electron, the photon gives the electron the energy to move to an orbit farther away.  When an electron emits light, that lost photon removes energy and the electron falls to a lower orbit.  The potential energy in the electron as it orbits the nucleus and the potential energy in a rock that I hold above the ground are similar, as the diagram below demonstrates. 
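The energy bookkeeping of those electron jumps can be illustrated with the textbook Bohr-model formula for hydrogen.  This sketch is my own illustration; the 13.6 eV binding energy and the hc conversion factor are standard physical constants:

```python
# Bohr-model energy levels for hydrogen: E_n = -13.6 eV / n^2.
# The photon emitted when the electron falls from n=2 to n=1 carries
# the energy difference, and its wavelength follows from E = hc/lambda.
RYDBERG_EV = 13.6          # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84         # Planck's constant times light speed, in eV*nm

def level_energy(n):
    return -RYDBERG_EV / n ** 2

photon_ev = level_energy(2) - level_energy(1)     # energy released, eV
wavelength_nm = HC_EV_NM / photon_ev              # ultraviolet (Lyman-alpha)

print(round(photon_ev, 2), "eV")    # 10.2 eV
print(round(wavelength_nm), "nm")   # 122 nm
```

The same arithmetic run in reverse gives the photon energy needed to kick the electron up to a higher orbit, which is the absorption case described above.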

Below is a diagram of a hydrogen atom as its electron orbits farther from the nucleus when it absorbs energy. 

As the diagram depicts, the atom gets larger.  When an electron moves into an orbit farther from the nucleus, the atom will vibrate more, like the way a car’s engine will vibrate more when it runs faster.  The energy of lateral movement (also called translational motion) is what temperature measures.  While finding an accurate definition of temperature can be a frustrating experience, temperature is a measure of the kinetic energy (the energy of motion) in matter.  As with the behavior of photons, at the atomic level the concept of temperature can break down, and classical behaviors emerge as groups of atoms lose their quantum properties.[39]  When one atom collides with another, there is a transfer of energy, as there is in any collision.  The transferred energy can be stored by the electrons leaping into higher orbits.  They can in turn release that energy in the form of photons and return to lower orbits.
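Temperature's link to motion can be made concrete with the standard kinetic-theory formula E = (3/2)kT.  This is an illustrative sketch; the nitrogen-molecule mass is my assumption, chosen because N2 dominates ordinary air:

```python
import math

# Temperature as average translational kinetic energy: E = (3/2) k T.
# From that, the typical (root-mean-square) speed of a gas molecule
# follows as v = sqrt(3kT/m).
BOLTZMANN = 1.380649e-23        # J per kelvin
N2_MASS = 28 * 1.6605e-27       # kg, one nitrogen molecule (assumed gas)

def mean_kinetic_energy(temp_k):
    return 1.5 * BOLTZMANN * temp_k

def rms_speed(temp_k, mass_kg):
    return math.sqrt(3 * BOLTZMANN * temp_k / mass_kg)

print(mean_kinetic_energy(300))         # ~6.2e-21 joules per molecule
print(round(rms_speed(300, N2_MASS)))   # ~517 m/s at room temperature
```

Room-temperature air molecules thus move faster than sound, and that incessant jostling is what the next paragraph's expansion and ionization effects are built on.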

The increased movement of heated atoms is why substances expand in volume.  The more motion, the higher the temperature, and just as an engine will fly apart when the RPMs go too high, when an atom vibrates too fast, an electron can leave the atom entirely and the atom becomes an ion.  As substances become hotter, the electrons will be in higher orbits, and will fall farther when giving off photonic energy, so the photons have more energy (shorter wavelengths).  Get a substance hot enough and it will emit photons that we can see (visible light).  Those first visible photons will be on the lower end of the spectrum of light that we can see with our eyes, and will be red.  Get the substance hotter and the light can turn white, which means that we are seeing the full visible spectrum of light.  Nearly half of the Sun’s energy output is in the form of visible light.  Get matter hot enough and it becomes plasma, as electrons float in a soup with nuclei.  Those electrons are too energetic to be captured by nuclei and placed into shells. 
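The progression from a red glow to white heat follows Wien's displacement law, which locates the wavelength at which a hot body radiates most strongly.  This sketch uses the standard Wien constant and is my own illustration:

```python
# Wien's displacement law: lambda_peak = b / T, with b = 2.898e-3 m*K.
WIEN_B = 2.898e-3               # meter-kelvins

def peak_wavelength_nm(temp_k):
    return WIEN_B / temp_k * 1e9

# ~2898 nm: peak in the infrared; only the spectrum's short tail is
# visible, so a 1000 K object glows dull red.
print(round(peak_wavelength_nm(1000)))
# ~500 nm: the Sun's ~5800 K surface peaks mid-visible, so sunlight
# spans the whole visible band and looks white.
print(round(peak_wavelength_nm(5800)))
```

As temperature rises, the peak slides to shorter wavelengths, which is exactly the red-to-white progression described above.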

When two atoms come close to each other, if the potential energy of their combined state is less than their potential energy when they are separate, the atoms will tend to react.  But the reaction only happens when the electron shells come into an alignment so that the reaction can happen.  It is an issue of alignment and the atoms’ velocity.  If the shells do not meet in the proper alignment and velocity, the reaction will not happen and the atoms will bounce away from each other.  The faster and more often the atoms collide, the likelier they are to react and reach that lower energy state.  Chemical (electron shell) reactions need to reach their activation energy to occur, which is often expressed as a temperature.  The activation energy for hydrogen and oxygen to react and form water is about 560 degrees Celsius (560o C).  Nuclear reactions work in similar fashion, but for nuclear fusion in the Sun’s core, at 16 million degrees Celsius and a pressure 340 billion times greater than Earth’s atmosphere at sea level, at one trillion collisions per second it would take 10 billion years for a proton to have a 50% chance of fusing with another proton.[40]  Nuclear fusion is thus far rarer than electron bonding, and it releases far more energy than when atoms bond via electrons.  The fusion of a helium nucleus releases more than a million times the energy that it takes to ionize a hydrogen atom.  As will be discussed later, some reactions have a cumulative result of absorbing energy, while others release it.  The first can be seen as an investment of energy, while the second can be seen as consuming it.  Organisms and civilizations have always faced the investment/consumption decision. 
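The effect of collision speed on reaction likelihood can be sketched with the standard Arrhenius relation, in which the fraction of collisions energetic enough to react scales as exp(-Ea/RT).  The 53 kJ/mol activation energy below is an illustrative assumption of mine, not a value from this essay:

```python
import math

# Arrhenius relation: reaction rates scale with the fraction of
# collisions that clear the activation energy, exp(-Ea / (R*T)).
R_GAS = 8.314                   # J per mol per kelvin
EA = 53_000.0                   # activation energy, J/mol (illustrative)

def relative_rate(temp_k):
    return math.exp(-EA / (R_GAS * temp_k))

ratio = relative_rate(308) / relative_rate(298)
print(round(ratio, 2))          # ~2.0: a 10-degree rise roughly doubles the rate
```

The steep exponential is why modest heating dramatically speeds chemistry, while the Sun's core, despite its 16-million-degree temperature, still fuses protons only rarely: the nuclear barrier is vastly higher than any electron-shell activation energy.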

Below is a diagram of two hydrogen atoms before and after reaction, as they bond to form H2.

Elements with their electron shells mostly, but not completely, filled are, in order of electronegativity: fluorine, oxygen, chlorine, and nitrogen.  In that upper right corner of the periodic table, of largely filled electron shells, phosphorus and sulfur also reside.  Carbon and hydrogen have their valence shells half filled.  With the exception of fluorine, those elements listed above provide virtually all of the human body’s atoms.  The body also contains metals, particularly sodium, magnesium, calcium, and iron, which “donate” electrons and make key chemical reactions possible.  Fluorine forms the smallest negatively charged ions known to science and wrecks organic molecules for reasons discussed later in this essay.  Organisms do not use fluorine, except for some plants that use it as a poison.

When atoms combine through shared electrons (called “covalent” bonds), the electrons are not always shared equally.  The classic example of this is the water molecule.  Oxygen “hogs” the electrons that the hydrogen atoms share with it.  Because those electrons spend more time in the oxygen atom’s electron shell than they do in the hydrogen atoms’ electron shells, the oxygen atom in a water molecule will get a negative charge, and the hydrogen atoms will get positive charges.  The charges will not be as strong as if they were ionized atoms, but those charges “polarize” the molecule.  In a body of water, oxygen atoms will attract hydrogen atoms of neighboring molecules, and a relatively weak attraction known as a hydrogen bond forms.  Below is a picture of hydrogen bonds in water.  (Source: Wikimedia Commons)
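That unequal sharing can be roughly quantified with tabulated Pauling electronegativity values.  The 0.4 and 1.7 cutoffs below are a common textbook rule of thumb, not a claim from this essay:

```python
# Pauling electronegativities (standard tabulated values) and a common
# rule of thumb for classifying a bond from the difference between the
# two atoms' values.
ELECTRONEGATIVITY = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44,
                     "F": 3.98, "Cl": 3.16, "Na": 0.93}

def bond_type(a, b):
    diff = abs(ELECTRONEGATIVITY[a] - ELECTRONEGATIVITY[b])
    if diff < 0.4:
        return "nonpolar covalent"
    if diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("H", "H"))      # nonpolar covalent - H2's evenly shared pair
print(bond_type("O", "H"))      # polar covalent - why water is polarized
print(bond_type("Na", "Cl"))    # ionic - sodium fully donates its electron
```

Water's O-H bonds fall squarely in the polar range, which is the partial-charge separation that makes hydrogen bonding possible.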

Those hydrogen bonds make water the miraculous substance that it is.  The unusual surface tension of water is due to hydrogen bonding.  Water has a very high boiling point for its molecular weight (compare the boiling points of water and carbon dioxide, for instance) because of that hydrogen bonding.  Water’s unique properties made it the essential medium for biochemical reactions; the human body is mostly made of water. 

Those energy and chemistry concepts should make this essay easier to digest. 


Timelines of Energy, Geology, and Early Life

Timeline of Significant Energy Events in Earth's and Life's History

Abbreviated Geologic Time Scale

Early Earth Timeline before the Eon of Complex Life

Significant Energy Events in Earth's and Life's History as of 2014

| Energy Event | When | Significance |
| --- | --- | --- |
| Nuclear fusion begins in the Sun | c. 4.6 billion years ago ("bya") | Provides the power for all of Earth's geophysical, geochemical, and ecological systems, with the only exception being radioactivity within Earth. |
| Life on Earth begins | c. 3.8 – 3.5 bya | Organisms begin to capture chemical energy. |
| Enzymes appear | c. 3.8 – 3.5 bya | Enzymes accelerate chemical reactions by millions of times, making all but the simplest life (pre-LUCA) possible. |
| Photosynthetic organisms first appear | c. 3.5 – 3.4 bya | Organisms begin to directly capture photonic solar energy. |
| Oxygenic photosynthesis first appears | c. 3.5 – 2.8 bya | Oxygen is generated, which complex life will later use, which makes non-aquatic life possible and also preserves the global ocean. |
| Aerobic respiration first appears | c. 2.4 – 1.8 bya | Allows for more energetic respiration than anaerobic respiration. |
| Complex cells first appear (eukaryotic) | c. 2.1 – 1.6 bya | Allows for larger cells and far greater energy generation capacity – pound for pound, a complex cell uses energy 100,000 times as fast as the Sun creates it. |
| First chloroplast created | c. 1.6 – 0.6 bya | Allows for direct energy capture by complex life, and led to plants. |
| Dramatic climb in atmospheric oxygen, to eventually achieve modern levels, begins | c. 850 – 420 million years ago ("mya") | Creates conditions for complex life to appear and dominate Earth's ecosystems. |
| First animal appears | c. 760 – 665 mya | First large-scale energy users. |
| Deep oceans oxygenated | c. 580 – 560 mya | Creates conditions for complex life to appear, first in the global ocean. |
| Cambrian Explosion begins | c. 541 mya | First complex ecosystems appear. |
| Teeth appear | c. 540 – 530 mya | Concentrated application of muscle energy. |
| Reef ecosystems appear | c. 513 mya | The most complex aquatic ecosystem appears. |
| Land plants appear | c. 470 mya | Energetic basis for land-based ecosystems appears. |
| Land animals appear | c. 430 – 420 mya | Ability to create non-aquatic ecosystems. |
| Jaws appear | c. 420 mya | Greatest energy-manipulation enhancement among vertebrates until the rise of humans. |
| Vascular plants appear | c. 410 mya | Ability to create vertical ecosystems. |
| Trees appear | c. 385 mya | Largest organisms ever, and greatest energy storage and delivery of any biome, and they become the basis for coal. |
| Fish migrate to land | c. 375 mya | Precursor to dominant land animals. |
| Seed-reproducing plants appear | c. 375 mya | Ability to colonize dry lands. |
| Amniotes appear | c. 320 – 310 mya | Ability to survive in dry lands. |
| Lignin-digesting organism appears | c. 290 mya | Ability to make tree-stored energy available to ecosystems. |
| Dinosaurs appear | c. 243 mya | Among the first terrestrial animals with upright posture, enabling great aerobic capacity and domination of terrestrial environments. |
| Tools first used | c. 400 – 200 mya? | Confers an energy advantage on the tool user. |
| Flowering plants appear | c. 160 mya | Great energy innovation to reduce reproductive costs; animals are the beneficiaries, acting as reproductive enzymes in the greatest symbiosis of plant and animal life, which allows flowering plants to dominate terrestrial ecosystems. |
| The control of fire | c. 2.0 – 1.0 mya | Allows protohumans to leave the trees, become Earth's dominant predator, and alter ecosystems, and cooked food helped spur dramatic biological changes, including encephalization in the human line. |
| Projectile weapons invented | c. 400 thousand years ago ("kya") | Changes the terms of engagement with prey, reducing the risk of hunting large animals and increasing its effectiveness. |
| Boat invented | c. 60 kya | Allows for first low-energy transportation, and ability to travel to unpopulated continents. |
| Widespread domestication of plants and animals | c. 10 kya | Provides the local and stable energy supply that allowed for sedentary human populations and civilization. |
| First metal smelted | c. 7 kya | Allows for tools highly improved over stone, for greater energy effectiveness of human activities. |
| Plow invented | c. 7 kya | Allows for greatly increased energy yields from agriculture. |
| First sailboat invented | c. 6 kya | First technology to take advantage of non-biological energy. |
| Wheel invented | c. 5.5 kya | Reduces energy use for ground-based transportation. |
| Coal first burned | c. 5 – 4 kya | First use of non-biomass for chemical energy. |
| Iron first smelted | c. 4.5 kya | Allows for vastly improved tools. |
| Coal used to smelt metal | c. 3.0 kya | First use of non-biomass to smelt metal. |
| Watermill invented | c. 2.2 kya | First time the energy of the hydrological cycle is harnessed for use on land. |
| Windmill invented | c. 2.0 kya | First time wind is harnessed for use on land. |
| Steam engine invented | c. 2.0 kya | First time the motive power of fire is harnessed. |
| Europe learns to sail across the world's oceans | 1420 – 1522, common era | Turns the global ocean into a low-energy transportation lane and allows Europe to conquer the world. |
| First use of coal for smelting metal in England | | First act of the Industrial Revolution. |
| First commercial steam engine built | | First time the motive power of fire is harnessed to perform work. |
| First practical use of electricity | c. 1805 | New way to use energy would revolutionize civilization. |
| First commercial oil well drilled | | The most coveted fuel of the Industrial Revolution is first used. |
| Incandescent lighting first commercialized | c. 1880 | First commercial use of electricity. |
| Alternating current technology prevails over direct current | | The major technical hurdle to electrifying civilization is overcome. |
| First attempt to create "free energy" technology is abandoned due to lack of funding | | This event inaugurates the era of organized suppression of free energy technologies. |
| First manned powered flight, and establishment of the first company to mass-produce automobiles | | Major transportation developments begin to be powered by petroleum. |
| Albert Einstein publishes his special theory of relativity and his equation for converting mass to energy | | Forms the framework for 20th-century physics, including the energy that can be liberated from an atom's nucleus. |
| British Navy converts from coal to oil | | Provides incentive for the oil-poor United Kingdom to dominate the oil-rich Middle East. |
| Oil-rich Ottoman Empire dismembered by industrial powers, establishing imperial and neocolonial rule in the Middle East | | The West invades the Middle East and has yet to leave, lured by the oil. |
| USA harnesses the atom's power, its first use is vaporizing two cities, and the greatest period of economic prosperity in history begins | | The nuclear age is born, as well as the Golden Age of American capitalism. |
| The USA's national security state is born; Roswell incident | | By this time, free energy technology has probably been either developed or acquired. |
| Electrogravitic research goes black | | This is the final technology, along with free energy technology, to make humanity a universally prosperous and space-faring species. |
| The USA reaches Peak Oil | | The decline in the American standard of living begins. |
| Former astronaut nearly dies immediately after rejecting the American military's UFO research "offer" | | The incident is one of many that demonstrate that the UFO issue is very real, but happened to somebody close to me. |
| A close personal friend is shown free energy and antigravity technologies, among others, and another close friend had free energy technology demonstrated | | Those incidents are two of many that demonstrate that the free energy suppression issue is very real, but were witnessed by people close to me. |
| The world reaches Peak Oil | | The beginning of the end of industrial civilization. |
| The Deepwater Horizon oil spill is history's largest | | More evidence of how dangerous humanity's current energy production methods are. |
| The Fukushima nuclear event is probably history's greatest | | More evidence of how dangerous humanity's current energy production methods are. |

The table below presents an abbreviated geologic time scale, with times and events germane to this essay.  Please refer to a complete geologic time scale when this one seems inadequate.   

Abbreviated Geologic Time Scale





Global Map Reconstruction

Geophysical events

Life events



c. 4.56 to 4.0 bya

No land masses yet.

Earth, Moon, and oceans form.  Earth is bombarded with planetesimals.  Everything is hot.  Atmosphere is primarily comprised of carbon dioxide.

None yet.



4.0 to 2.5 bya

Too much uncertainty and too little evidence to confidently draw maps, but landmasses existed.

By the Archaean's end, the Sun is 80% as bright as today.  Earth cools to habitable temperature.  Continents begin forming and growing.  Atmosphere is mostly nitrogen, but oxygen begins to increase.  First known glaciation.

First life appears.  Photosynthesis begins.  All life is bacterial.  Oxygenic photosynthesis first appears.



c. 2.5 bya to 541 mya

Maps begin to be made with confidence at about 750 mya.

Earth’s two Snowball Earth events (1, 2) bookend the “boring billion years.”  Banded iron formations coincide with ice ages. 

Complex cell (eukaryote) first appears.  Aerobic respiration first appears.  First chloroplast appears.  Sexual reproduction first appears.  Grazing of photosynthetic organisms first appears.



c. 850 to 635 mya

Late Cryogenian Map

Supercontinent Rodinia breaks up.  Second Snowball Earth event.  Atmosphere oxygenated to near modern levels.  Final banded iron formations appear.

First animals appear.  First land plants may have appeared.



c. 635 to 541 mya

Mid-Ediacaran Map

Deep ocean is oxygenated.  Proto-Tethys Ocean appears.

Mass extinction of microscopic eukaryotes.  First large animals appear.




c. 541 to 485 mya

Late Cambrian Map

Continents primarily in Southern Hemisphere.  Oceans are hot.

First mass diversification of complex life.  Most modern phyla appear.  First eyes develop.  Arthropods dominate biomes.



c. 485 to 443 mya

Late Ordovician Map

Paleo-Tethys Ocean begins forming.  Ice age begins and causes mass extinction which ends period.

Complex life continues diversifying.  First large reefs appear. Mollusks proliferate and diversify.  Nautiloids are apex predators.  First fossils of land plants recovered from Ordovician sediments.  Period ends with first great mass extinction of complex life.



c. 443 to 419 mya

Mid-Silurian Map

Hot, shallow seas dominate biomes.  Climate and sea level changes cause minor extinctions.

Reefs recover and expand.  Fish begin to develop jaws.  First invasions of land by animals.  First vascular plants appear.



c. 419 to 359 mya

Late Devonian Map

Continents close to form Pangaea.  Ice age begins at the end of the Devonian and causes mass extinction, possibly initiated by the first forests sequestering carbon.

Fishes thrive.  First forests appear.  First vertebrates invade land.



c. 359 to 299 mya

Early Carboniferous Map

End-Carboniferous Map

Atmospheric oxygen levels highest ever, likely due to carbon sequestration by coal swamps.  Ice age increases in extent, causing collapse of rainforests.

Sharks thrive.  Gigantic land arthropods appear.  First permanent land colonization by vertebrates.  Amphibians thrive.  Reptiles appear.  Rainforests and swamps proliferate, forming most of Earth’s coal deposits.  Fungus appears that digests lignin.



c. 299 to 252 mya

Late Permian Map

Tethys Ocean forms.  Oxygen levels drop.  Great mountain-building and volcanism as Pangaea forms, and its formation initiates the greatest mass extinction in the eon of complex life.  Ice age ends.

Synapsid reptiles dominate land.  Conifer forests first appear.



c. 252 to 201 mya

Mid-Triassic Map

Pangaea begins to break up.  Greenhouse Earth begins and lasts the entire Mesozoic Era.

Dinosaurs and mammals appear, and by the Triassic’s end, diapsid reptiles dominate land, sea, and air.  Stony corals appear as reefs slowly recover.



c. 201 to 145 mya

Early Jurassic Map

Mid-Jurassic Map

Late Jurassic Map

Northern continents split from southern continents.  Atlantic Ocean begins to form.

Dinosaurs become gigantic.  First birds appear.



c. 145 to 66 mya

Mid-Cretaceous Map

End-Cretaceous Map

Sea levels dramatically rise.  Continents continue to separate.  Asteroid impact drives non-bird dinosaurs extinct and ends the Mesozoic Era.

Flowers first appear.  Chewing dinosaurs become prominent.  Forests grow near the poles.  Rudist bivalves displace coral reefs, but go extinct before the end-Cretaceous extinction.



c. 66 to 56 mya

Paleocene Climate Map

Greenhouse Earth conditions still prevail, and an anomalous warming event ends the epoch.

Mammals grow and diversify to fill empty niches left behind by reptiles. 


c. 56 to 34 mya

Mid-Eocene Map

Late Eocene Map

Warmest epoch in hundreds of millions of years, but began cooling midway into epoch, beginning Icehouse Earth conditions.  Europe collides with Asia, and Asian mammals displace European mammals.

A Golden Age of Life on Earth, when life thrived all the way to the poles.  Whales appear.  Cooling in the Late Eocene drives warm-climate species to extinction.


c. 34 to 23 mya

Oligocene Climate Map

Cool epoch, as Antarctic ice sheets form, with warming at epoch’s end.

Early whales die out, replaced by whales adapted to new ocean biomes. 



c. 23 to 5.3 mya

Mid-Miocene Map

First half of epoch is warm; climate then cools down.

First half of epoch is warm, and is called The Golden Age of Mammals.  Apes appear and spread throughout Africa and Eurasia.  As the climate cools, apes migrate back to Africa, while some remain in Southeast Asia.


c. 5.3 to 2.6 mya

Would appear nearly identical to today’s global map.

Earth continues to cool, and the land bridge between North and South America initiates mass extinction of South American mammals and helps initiate the current ice age.

Bipedal apes appear.  First stone tools made at end of epoch. 



c. 2.6 mya to 12 kya

Early Pleistocene Map

Late-Pleistocene Map

Current ice age begins.

Mammals are already cold-adapted, and there are relatively few extinctions until the rise of humans.


12 kya to present

Today’s Map

Interglacial period in current ice age, and recent and probably human-caused warming may extend the interglacial period.

Mass extinctions of large animals happen wherever humans begin to appear. By the 21st century, the Sixth Mass Extinction in the eon of complex life appears to be underway, entirely caused by humans. 


Key Events before the Eon of Complex Life



Sun forms

c. 4.6 bya

Earth forms

c. 4.56 bya

Moon forms

c. 4.53 bya

Continents begin to form

c. 4.0 bya

Life first appears (prokaryotic), common ancestor of all life on Earth lived*

c. 3.8 – 3.5 bya

Photosynthetic organisms first appear*

c. 3.4 bya

Oxygenic photosynthesis first appears*

c. 3.5 – 2.8 bya

Continents begin to markedly grow

c. 3.0 bya

First known glaciation

c. 3.0 – 2.9 bya

Great Oxygenation Event begins

c. 3.0 – 2.3 bya

Creation of banded iron formations removes iron from the oceans

c. 2.4 bya to 1.8 bya

First major ice age begins (snowball Earth event)

c. 2.4 to 2.1 bya

Aerobic respiration first appears

c. 2.4 – 1.8 bya

Complex cells first appear (eukaryotic)*

c. 2.1 – 1.6 bya

The “boring” billion years

c. 1.8 to 0.8 bya

First chloroplast created*

c. 1.6 – 0.6 bya

Sexual reproduction first appears*

c. 1.2 – 1.0 bya

Grazing of photosynthetic life forms begins

c. 1.0 bya

Dramatic climb in atmospheric oxygen, to eventually achieve modern levels, begins

c. 850-420 mya 

Banded iron formations reappear

c. 850 to 550 mya

First animal appears*

c. 760 to 665 mya

Second ice age (snowball Earth event)

c. 750 to 635 mya

Eon of complex life begins

c. 635 mya

First bilaterally symmetric animals appear*

c. 585-555 mya

Deep oceans oxygenated

c. 580-560 mya

Ediacaran fauna – Earth’s first large organisms – appear

c. 575 mya

Extinction of Ediacaran fauna

c. 542 mya

Cambrian Explosion begins

c. 541 mya

First eyes develop*

c. 540 mya

* This event is currently considered to have been unique, confined to one organism/instance.


The Formation and Early Development of the Sun and Earth

Chapter summary:

In the tables above, some dates have ranges because such old dates often have relatively thin evidence supporting them, which can be interpreted in different ways.  Those dates will be adjusted as the scientific evidence and theories develop.  As I was writing this essay, a study was published that may have pushed back the beginning of the Great Oxygenation Event by several hundred million years.[41]  Moving dates can change some theories of causation, but few scientists will dispute the idea that Earth’s atmosphere was primarily oxygenated by oxygenic photosynthesis.  It is the only plausible mechanism for that oxygenation event and Earth’s continuing high atmospheric oxygen content.[42] 

After the Big Bang, when matter began to coalesce, virtually all mass in the universe was contained in hydrogen atoms, with traces of the next two lightest elements: helium and lithium.  According to the Standard Model, atoms have no mass by themselves, but the field that gives rise to the Higgs boson provides the mass.  Gravity attracted hydrogen atoms to each other and, where “clumps” of hydrogen became large enough, the pressure in the clump’s center (a star’s core) became great enough that the mutual repulsion of the protons in hydrogen nuclei was overcome (like charges repel each other, while opposite charges attract), and the protons fused together.  That fusion released a great deal of primordial Big Bang energy, and fusion powers stars.

Depending on the star’s size and the resulting temperatures and pressures, various larger elemental nuclei can be produced.  Iron is the heaviest element created during a large star’s primary fusion process.  Nuclei larger than the simplest hydrogen nucleus contain neutrons as well as protons.  As the name implies, neutrons have no net electric charge, but have about the same mass as a proton (an electron has less than a thousandth the mass of a proton, so virtually all the mass in atoms is provided by its protons and neutrons).  Radioactive decay into daughter isotopes is mediated by the weak nuclear force.

In the smaller stars that eventually become white dwarfs, the primary fusion process creates oxygen as its heaviest element.  Even though the Sun is larger than about 95% of the Milky Way Galaxy’s stars, it is destined to become a white dwarf in about six or seven billion years.

Several different nucleosynthesis processes have been identified.  Stars from about half the size of the Sun to about nine times larger can undergo what is known as the s-process (slow neutron capture) late in their lives, and that process has created about half of the elements heavier than iron; bismuth is the heaviest element created by the process.  Those heavier elements are eventually blown from the star by its stellar wind as it becomes a white dwarf.  Stars with more than nine times the mass of the Sun undergo a different process at the end of their lives.  When the hydrogen and helium fuel is used up and the fusion processes in those stars’ cores are reduced low enough, gravity will cause those stars to collapse in on themselves.  That collapse creates the conditions needed to create those other elements heavier than iron, including the heaviest: in an instant, the r-process (rapid neutron capture) occurs.  Uranium is the heaviest naturally produced element.  Depending on a collapsing star’s composition, it can collapse into a black hole or neutron star or explode into a supernova.

When a star becomes a supernova, those heavy elements are sprayed into the galactic neighborhood by a stupendous release of fusion energy.  Over the subsequent eons, gravity will cause the remnants of stars, and hydrogen that had not yet become a star or did not fuse within a star, to coalesce into an accretion disk, and a new star with its attendant planets will form.  The Sun will take more than ten billion years to live its life cycle before becoming a white dwarf.  Large stars burn much more quickly and can become supernovas after as little as ten million years of main-sequence burning.  The rule is: the larger the star, the shorter its life.
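The rule that larger stars live shorter lives can be roughly quantified.  A common textbook scaling (my illustration, not a figure from this essay’s sources) takes a star’s luminosity as proportional to roughly the 3.5 power of its mass, so its main-sequence lifetime scales as about the −2.5 power of its mass:

```python
# Back-of-envelope main-sequence lifetimes, using the common textbook
# scaling t ~ 10 billion years * (M / M_sun) ** -2.5, which follows from
# luminosity scaling roughly as M ** 3.5.  The exponent is approximate;
# real stellar models vary.

def main_sequence_lifetime_years(mass_in_suns):
    """Rough main-sequence lifetime for a star of the given mass (solar masses)."""
    return 1e10 * mass_in_suns ** -2.5

for mass in (0.5, 1.0, 9.0, 20.0):
    years = main_sequence_lifetime_years(mass)
    print(f"{mass:4.1f} solar masses -> roughly {years:.1e} years")
```

For a 20-solar-mass star this yields a lifetime of several million years, consistent with the statement above that large stars can become supernovas after as little as ten million years of main-sequence burning.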

The accretion disk from which the Sun and its planets formed appeared in a relatively short time, and the disk was originally a molecular cloud that may have been disturbed by an exploding star.  A "local" exploding star likely provided the bulk of our solar system's matter, and the entire mess gravitationally collapsed into the disk.  Earth’s age is estimated to be about 4.6 billion years, and it formed fewer than 100 million years after the Sun did.  Within a mere 50 million years of its formation, the Sun became compressed enough to initiate the sustained fusion that still powers it and will for several billion more years.

Our solar system’s planets initially formed from clumps of heavier atoms, and the rocky planets formed in a region too hot for lighter elements and compounds to condense.  Oxygen and iron, those two largest products of main-sequence burning, comprise nearly two-thirds of Earth’s mass.

Just past our solar system’s “frost line,” the largest planet and first gas giant, Jupiter, formed.  In our solar system’s early days, smaller agglomerations of mass, called planetesimals, swarmed.  Those that began their lives inside the frost line were rocky, and those outside the frost line were generally comprised of lighter elements.  Those planetesimals bombarded the forming planets and increased their mass.  Other planetesimals were ejected from the solar system as the gravity of the Sun and planets whipped them around.  Today’s solar system provides mute evidence of that bombardment, as all rocky planets and moons are heavily cratered.  Earth’s geological processes have removed most evidence of that bombardment, but other rocky bodies have preserved the evidence.  It is thought that the bombardment of Earth by the planetesimals comprised of lighter elements provided the materials for Earth’s oceans and atmosphere.  Venus and Mars were also bombarded with the lighter elements and may have had plentiful water long ago, but only Earth retained its water.  The biggest collision between Earth and its neighbors may well have created the Moon, and although the currently prevailing hypothesis has plenty of problems, the other hypotheses have more.  Moon rocks obtained by NASA’s Apollo missions show that the oldest parts of the Moon’s surface are about the same age as Earth.

Today’s prevailing scientific theories consider stars to be the observable universe’s energy centers.  According to today’s theories, 95% of the universe is not observable, as about 70% is dark energy and 25% is dark matter.  At this time, dark energy and dark matter have never been observed.  Any theory that relies on unobserved phenomena is going to be highly provisional, and I consider it unlikely that the prevailing cosmological theories a century from now will much resemble those of today.  The scale of the universe, from its largest to smallest objects, is truly difficult to imagine, and this animation can help provide some perspective.

The chemistry of Earth’s land, oceans, and atmosphere provides the raw material for life, but if the Sun disappeared tomorrow, Earth’s surface would quickly become a block of ice with an insignificant atmosphere.  Partly because humanity has not explored beyond our home star system, our planet is the universe’s only place officially acknowledged to host life as we know it. 

What is called geologic time is the calendar of Earth’s life cycle so far.  The scale of geologic time strains human brains with its immensity.  Writing about a geological period that “only” lasted 24 million years is part of the sometimes surreal experience of writing in terms of geologic time.  European geologists developed most of the calendar’s names in the 19th century, generally naming the timeframes after the locations where the first fossils of that time were discovered in their particular sedimentary layers.  Earth’s calendar has been divided into eons, eras, periods, epochs, and ages, and those categories are defined by the layers’ geological particulars, usually the discovered fossils.

The journey of life on Earth has been greatly affected by geophysical and geochemical processes as well as influences from beyond Earth, such as:



Those processes and events can interact with each other, and a few examples can provide an idea of the dynamics’ complexity.  What follows are today’s orthodox views, to the best of my knowledge, and they can certainly change in the future, perhaps even radically, just as cosmological and subatomic theories may change radically.  It seems to me, however, that geophysical and geochemical processes are understood better and have more robust data than many other areas of science, so geophysics and geochemistry are areas where I expect fewer radical changes than others.  Maybe that is because that realm is neither too big nor too small, and is closer to our daily reality than distant stars or what is happening inside atoms.

Volcanism can not only temporarily alter the atmosphere’s chemistry, but the ash from volcanism can also block sunlight from reaching Earth’s surface and lead to atmospheric cooling.[43]  Carbon dioxide vented by volcanism in the Mesozoic era is what made it so warm.  Tectonic plate movements can alter the circulation of the atmosphere and ocean.  When continental plates come together into a supercontinent, oceanic currents can fail and the oceans can become anoxic, as atmospheric oxygen is no longer drawn into the global ocean’s depths, which may have triggered numerous mass extinction events.[44]  When continents are near the poles, ice ages can appear, but in our current ice age the tipping point is variations in Earth’s orientation to the Sun, which is affected by, among other influences, the Moon. 

Tectonic plates can collide, such as the collision of India into Asia, which formed the Himalayan Mountains and raised the Tibetan Plateau.  That continuing event not only changed Earth’s weather patterns and influenced the monsoons’ formation, it also exposed a great deal of raw rock to the atmosphere and consequently removed atmospheric carbon dioxide through weathering, which in turn made the atmosphere cooler.  That may have contributed to the ice age that we currently experience, although other studies indicate that the carbon removal may have been more due to the burial of organic matter.  The debate is continuing as the complex dynamics are subjected to scientific investigation.[45]  For all of the controversy over the dynamics, few scientists argue against the idea that atmospheric carbon dioxide has been falling, fairly consistently, since about 150-to-100 mya, from more than a thousand parts per million to the roughly 200-300 parts per million (“PPM”) of the past million years.  Nearly 35 million years ago (also written as “35 mya”), carbon dioxide levels fell below 600 PPM, when the Antarctic ice sheet began to form.[46]  During the current fossil fuel era, Earth’s atmosphere may reach 600 PPM again, or higher, in this century.  It is already nearly 400 PPM and rising fast.  Carbon dioxide levels are considered to be a primary variable affecting the temperature of Earth’s surface over the eons. 

Earth’s development has also been greatly impacted by life processes.  For instance, if hydrogen floats free in the atmosphere, Earth’s gravity is not strong enough to prevent it from escaping to space.  Ultraviolet light breaks water vapor into hydrogen and oxygen.[47]  If not for the high oxygen content of Earth’s atmosphere, Earth would have lost its oceans as all the hydrogen from split water molecules eventually drifted into space.  Scientists believe that that happened to Venus and Mars (although Venus may have never cooled enough to form liquid water): their water vapor split in the atmosphere, and the hydrogen then escaped to space.[48]  Without the ocean, there would not be life on Earth as we know it.  On Earth, that hydrogen liberated by ultraviolet light reacts with atmospheric oxygen and turns back into water before it can escape into space.[49]  The reason for free oxygen in the atmosphere is photosynthesis.  When comparing Earth’s tectonics to Venus’s, the formation of granite and continents, and the setting of the tectonic plates in motion, appears to be due to Earth’s ocean.[50]  Plate tectonics are responsible for recycling elements through Earth’s crust and mantle, and the carbon cycle in particular has great import.  Photosynthesis led to atmospheric oxygen, which led to the ozone layer that helped prevent the splitting of water, and atmospheric oxygen recaptured hydrogen that would have otherwise escaped to space, which prevented the oceans from disappearing, which probably led to plate tectonics, which led to the formation of granitic continents, which led to land-based life.  In short, life made Earth more conducive to life.  That is the most important impact of life on geophysical and geochemical processes, but far from the only one; others will be explored in this essay. 

Geology in the West is considered to have begun during the Classic Greek period, and Persian and Chinese scholars furthered the discipline during the medieval period.  While volcanoes and geysers have always provided humanity with abundant evidence that Earth’s interior is hot, when humans began mining hydrocarbons and metals in abundance during the early days of industrialization, the collection of data about Earth’s subterranean temperature began.  It was not until my lifetime that some of Earth’s geological processes were understood well enough to begin mapping its energy flows.  Today’s most widely accepted hypothesis is that the energy provided by radioactive decay of elements such as potassium, uranium, and thorium is the primary heat source for Earth’s geological processes, and propels mass flows within Earth.  There is a constant upwelling of mass from the mantle, riding those energy currents.  When those flows reach Earth’s crust, the lighter portions float to Earth’s surface.[51]  Those portions eventually cool, become denser, and sink back into the mantle.  That process is thought to have begun about three billion years ago (also written as “three bya”), about the time that the continents began forming in earnest.  Three bya, the continents may have only had about a quarter of the mass that they do today.[52]  There are even recent ideas that life processes led to forming the continents.[53] 

The lightest portions of Earth’s crust, a relative wisp of Earth’s mass, make up the continents today, which are primarily made of lighter rocks such as granite, and the remainder of the crust is composed of denser rock such as basalt.  The granites formed when basalt was exposed to water, and the process partly replaced heavier iron with lighter sodium and potassium.  Earth is our solar system’s only known home of granite.  Water also became incorporated into the rocks, generally where the heavier oceanic crust was subducted below the lighter continental crust.[54]  It is thought today that the original global ocean had about twice the volume of today’s ocean.[55]  The “missing” ocean was incorporated into the crust and mantle, and helps make the granitic continents lighter so that they float on the heavier basaltic crust.  Granite is almost solely comprised of oxides, and hydrated minerals abound in Earth’s continents.  Those continental masses have been floating across Earth’s surface for billions of years as they have collided with each other, rebounded, lifted, subducted below the crust, and recycled into the mantle.  Those tectonic plates have been likened to the surface of a pot of boiling oatmeal.  Plates can collide and form mountains, and they can pull apart and expose the hot interior, which spews out in volcanism (at the edges of tectonic plates, including ridges in the oceans).  Currently, it seems that there is a 500-million-year cycle whereby the continents crash together to form a supercontinent, then break apart and scatter across Earth’s surface before coming back together.[56]  Today, the continents are about 100 million years from their furthest projected spread across Earth's surface, when they will begin to come back together to form a supercontinent about 250 million years from now.

Earth’s volume is about one trillion cubic kilometers, its core is believed to be about 90% iron, and the rest is largely nickel.  The mantle is thought to be mostly oxygen and silicon, and the remainder is largely composed of the lighter alkali and alkaline earth metals, such as sodium, potassium, and calcium.  Those mantle metals are primarily bound in oxides.  The mantle makes up more than 80% of Earth’s volume.  The crust also is almost solely comprised of oxides.  Silicon dioxide (sand and glass are made from it) is the most prevalent compound and the crust is, by mass, nearly 75% oxygen and silicon (granite's primary constituent elements), and nearly all of the remainder is aluminum, iron, and those lighter alkali and alkaline earth metals.  All other elements combined amount to less than 2% of Earth’s crust.  An accompanying table presents the current estimates of the relative concentrations of Earth’s mass and atoms that are relevant to this essay.[57] 

The oceans and atmosphere amount to a tiny portion of Earth’s mass and are made of light elements and compounds with low boiling points as compared to crustal compounds.  The oceans are primarily comprised of water, and that water contains most of Earth’s hydrogen.  On Earth, about 1-in-5,000 atoms are hydrogen, but 63% of the human body’s atoms are hydrogen.  Carbon and nitrogen are also scarce Earth elements, but they total more than 10% of the human body’s atoms; life is made of elements that are rare on Earth.  What geochemists call the biosphere (comprised of all living organisms; biologists call it biomass) amounts to less than one billionth of Earth’s mass.  Land-based biomass is about 500 times greater than ocean-based biomass.  Life as we know it seems to be rare and delicate, found nowhere else in our solar system so far, and few places seem promising for it to exist.  Below is a graphical representation of the relationship of Earth’s mass to the masses of the ocean, atmosphere, and biosphere. 

Earth receives less than one-billionth of the energy that the Sun produces.  The above image of the biosphere’s proportion of Earth’s mass is close to the proportion of the Sun’s energy that Earth receives (the largest sphere indicates the Sun’s output, and that small green dot indicates the proportion of the Sun’s output that Earth receives).  About 0.02% of the Sun’s energy that reaches Earth is captured by photosynthesis (that tiny dot would be invisible in that diagram).  That infinitesimal proportion captured by photosynthesis is the basis for nearly all life on Earth. 
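The proportions above can be checked with simple geometry (an illustrative calculation of mine, using standard values for Earth’s radius and the Earth-Sun distance): Earth intercepts the fraction of the Sun’s output equal to its cross-sectional area divided by the surface area of a sphere one astronomical unit in radius.

```python
import math

# Illustrative check of the fractions in the text.  Earth presents a
# cross-section of pi * R_EARTH**2 to sunlight that has spread over a
# sphere of area 4 * pi * AU**2 by the time it reaches Earth's orbit.
R_EARTH = 6.371e6   # Earth's mean radius, in meters
AU = 1.496e11       # mean Earth-Sun distance, in meters

intercepted = (math.pi * R_EARTH ** 2) / (4 * math.pi * AU ** 2)
print(f"Fraction of the Sun's output reaching Earth: {intercepted:.1e}")

# Roughly 0.02% of that is captured by photosynthesis.
photosynthesis = intercepted * 0.0002
print(f"Fraction captured by photosynthesis:        {photosynthesis:.1e}")
```

The intercepted fraction works out to roughly 4.5 parts in ten billion, under the "one-billionth" stated in the text, and the photosynthetic fraction is smaller still by several thousandfold.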

Earth’s iron core gives rise to its pronounced magnetic field, which helps protect Earth’s surface from the solar wind.  Planets with weak magnetic fields, such as Mars, are believed to be vulnerable to the solar wind stripping away their atmospheres.  If Earth did not have a magnetic field, its ozone layer may have been stripped away, which may have led to the extinction of complex life on Earth, if it would have ever appeared at all.

The fact that complex life exists on Earth seems to be a miracle of circumstance.  From the life of the Sun, to the part of our galaxy where our solar system resides, to the dynamics that led to Earth retaining her global ocean and having an ozone layer, to the molten core and magnetic field that protects Earth’s surface, life on Earth may be far rarer in the universe than it seems from the perspective of a species that has yet to visit other stars.[58] 

For the first 500 million years of Earth’s life, called the Hadean Eon, it was hot and bombarded by planetesimals.  A naked human would not have survived for a minute on the Hadean Earth.  The atmosphere held no oxygen, the ocean’s temperature was higher than today’s boiling point of water, and there was little if any land to stand on.  Earth’s surface was regularly bombarded by comets and asteroids, and the larger collisions vaporized the ocean, which would then condense and settle back in the greatest rains in Earth’s history.  The Moon was probably created during the Hadean Eon when a planet-sized mass collided with Earth.  The oldest known “native” rocks on Earth date from the Hadean Eon’s end, four bya.  The Hadean atmosphere may have been like Venus’s today – almost all carbon dioxide and at an immensely higher pressure than today’s atmosphere, although this is controversial today and recent evidence favors far lower carbon dioxide levels, at least in the Archaean, which was the next eon.[59]  The continents probably began forming during the Archaean Eon (although as with many ancient events like that, there are competing hypotheses with various levels of acceptance, and one of them is that the continents were fully formed by 4 bya), and the Archaean is likely when life as we know it first appeared on Earth.  At the Archaean Eon’s beginning, the chemistry of the oceans and atmosphere would have been unfamiliar to us, and would not have supported today's animal life because there was no free oxygen in the atmosphere or oceans.  The global ocean may have been full of dissolved iron and other minerals not prevalent in today’s ocean.  The environment that life first appeared in would have been highly hostile to today’s multicellular life forms, and those early life forms were tough.


Early Life on Earth

Chapter summary:

Above all else, life is an energy acquisition process.  All life exploits the potential energy in various atomic and molecular arrangements, or captures energy directly, as in photosynthesis.  Early life exploited the potential energy of chemicals.  The chemosynthetic ideal is to capture chemicals newly introduced to an environment, before they have reacted with other chemicals.  The currently most-accepted hypothesis has life first appearing on Earth about 3.5-3.8 bya, probably in volcanic vents on the ocean floor.[60]  The earliest life forms took advantage of fresh chemicals introduced to the oceans.  Life had to be opportunistic and quick in order to capture that energy before other molecules did.

Today’s mainstream science has nothing to say about any intent behind the appearance of life on Earth.  Today’s science pursues the physical mechanism.  When life first appeared on Earth, the evolutionary process that led to humanity began.  The USA's population has more doubt about evolution than any other Western nation, and that is primarily because Biblical literalism is still strong here.  In all other Western nations, there is virtually no controversy over evolution being a fact of existence, and those nations view the controversy over evolution in the USA with befuddlement.  Enlightened scientists will state that science’s story of evolution is one of process and history, not intent, and really has nothing to say about a creator.[61]

There is no scientific consensus regarding how life first appeared, but it is currently thought that all life on Earth today descended from one organism, a creature known today as the Last Universal Common Ancestor (“LUCA”).[62]  The reasoning is partly that all life has a preference for using certain types of molecules.  Many molecules with the same atomic structure can form mirror images of themselves.  That mirror-image phenomenon is called chirality.  In nature, such mirror images occur randomly, but life prefers one mirror image over the other.  In all life on Earth, proteins are virtually without exception left-handed, while sugars are right-handed.  If there was more than one line of descent, life with different “handedness” would be expected, but it has never been found, which has led scientists to think that LUCA is the only survivor that spawned all life on Earth today.  Either all other lineages died out (the likely answer, as there were probably hundreds of millions of years of evolution on Earth before LUCA lived), or all life descended from the same original organism.  As we will see, this is far from the only instance when such seminal events are considered to have probably happened only once.  Also, the unique structure of DNA and many enzymes are common to all life, and they did not have to form the way that they did.  That they came through different ancestral lines is extremely unlikely.

The critical feature of the earliest life had to be a way to reproduce itself, and DNA is common to all cellular life today.  The DNA that exists today was almost certainly not a feature of the first life.  The most accepted hypothesis is that RNA is DNA’s ancestor.  The mechanism today is that DNA makes RNA, and RNA makes proteins.  DNA, RNA, proteins, sugars, and fats are the most important molecules in life forms, and very early on, protein “learned” the most important trick of all, which was an energy innovation: facilitating biological reactions.  At the molecular level, activation energy is the energy that crashes molecules into each other, and if they are crashed into each other fast enough and hard enough, the reaction becomes more likely.  But that is an incredibly inefficient way to do it.  It is like putting a key in a room with a locked door and shaking the room in the hope that the key will insert itself into the lock during one of its collisions with the room’s walls.  Proteins make the process far easier, and those proteins are called enzymes.

Enzymes speed up chemical reactions; in the above analogy, it is as if a person entered that room, picked up the key, and inserted it into the lock, which takes far less effort than shaking the room a million times.  Enzymes are like hands that grab two molecules and bring them into alignment so that the key inserts into the lock.  The lock-and-key analogy is the standard way to explain enzymes to non-scientists.  Enzymes make chemical reactions happen millions and even billions of times faster than they would occur in the enzymes’ absence.  Life would never have grown beyond some microscopic curiosities without the assistance that enzymes provide.  Almost all enzymes are proteins, which are generally huge molecules with intricate folds.  The animation of human glyoxalase below depicts a standard enzyme (the author is WillowW at Wikipedia, and the zinc ions that make it work are the purple balls).


Enzymes look like Rube Goldberg-ish contraptions when their function is considered: huge molecules are used to make small ones interact.  Proteins have a four-level structure, and the second level is held in place by hydrogen bonds.  The enzyme’s pair of “hands” is like that of a robot on an assembly line, putting two parts together and passing the assembly to the next stage.  An enzyme can catalyze millions of reactions per second.  All of today’s life on Earth would cease to exist in the absence of enzymes.  Other than the ability to reproduce itself and produce proteins, speeding up reactions by millions of times is life’s most important “trick” and its greatest energy innovation.  Adenosine triphosphate ("ATP") is a coenzyme used to fuel all known biological processes.  The human body produces its own weight in ATP each day.  Poisons and drugs generally disable enzymes by plugging or wrecking the “lock” so that the intended “key” will not fit.  Cyanide kills by disabling a key enzyme that produces ATP, which induces an energy shortage at the cellular level.

Another vital invention of life is creating the “room” in which those reactions can take place.  The “rooms” of the first life forms were created by membranes, which are comprised of proteins and fats.  As with the first RNA, DNA, and proteins, the first membranes probably did not resemble today’s very much.  Membranes define life, keeping it separate from other molecules in Earth’s brew.

There are two primary aspects of life, and what can be observed in human civilization is often only a more complex iteration of those aspects, which are:


1.      Life harnessed energy so that it could manipulate matter to create itself;

2.      Life created information so that it could reproduce itself.


One aspect manipulated matter and energy, and the other was the “program” for manipulating it.  Matter and energy could be manipulated to either build a living structure or operate it (or disassemble it), and the organism always made the “decision.”

Entropy is another important concept for this essay.  Entropy is, in its essence, the tendency of hot things to cool off.  The concept is now introduced to students as energy dispersal.  Even though science really does not know what energy is, it can measure its effects.  At the molecular level, entropy is the tendency of mass to become disordered over time, as the random motion of molecules spreads in collisions with other molecules, until the interacting molecules have the same temperature.  Life had to overcome entropy in order to exist, as it brought order out of disorder and maintained it while alive, and it takes energy to do that.  The prevailing theory is that net entropy can only increase, and life has to create more entropy in its surroundings so that it can reduce entropy internally and produce and maintain the order that sustains itself.  Life is called a negentropic phenomenon: it uses energy to create the order of an organism’s structures, and it continually uses energy to reverse the natural entropy that is called decay.[63]
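The idea of entropy as energy dispersal can be illustrated with a toy simulation, offered here only as a sketch of the concept: the molecule counts, energies, and collision rule below are arbitrary illustrative choices, not figures from this essay.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Two "bodies" in contact: a hot one and a cold one, each a list of
# per-molecule energies (arbitrary units).
hot = [10.0] * 500
cold = [2.0] * 500

def collide(a, b):
    """A collision randomly redistributes the two molecules' combined energy."""
    total = a + b
    share = random.random()
    return total * share, total * (1 - share)

# Let randomly chosen molecules from the two bodies collide repeatedly.
for _ in range(100_000):
    i, j = random.randrange(len(hot)), random.randrange(len(cold))
    hot[i], cold[j] = collide(hot[i], cold[j])

# The average energies (temperatures) converge toward the common mean of 6.0,
# while total energy is conserved: the initial hot/cold order is gone, and
# the energy has dispersed.
print(round(sum(hot) / len(hot), 1), round(sum(cold) / len(cold), 1))
print(round(sum(hot) + sum(cold)))  # total energy is conserved: 6000
```

No individual collision "aims" at equilibrium; random motion alone erases the hot/cold distinction, which is the sense in which entropy is energy dispersal.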

Of those key elements necessary for life as we know it, the most versatile is carbon, with that half-filled outer electron shell.  Carbon provides the “backbone” for life’s chemistry, and is the foundational element of DNA, RNA, sugars, proteins, fats, and virtually all other components of life.  Carbon can form one, two, three, and four bonds with itself, making its self-bonding the most diverse of all elements, and an entire branch of chemistry, called organic chemistry, is devoted to carbon.  Organic molecules are by far the largest known to science.  During my first day of organic chemistry class, the professor observed that because the primary use of hydrocarbons was burning them to fuel the industrial age, we were living in “the age of waste,” as hydrocarbons are a treasure trove of raw materials.  In the eyes of an organic chemist, burning fossil hydrocarbons to fuel our industrial world is like making Einstein dig ditches or making Pavarotti wash dishes for a living.

Nitrogen and phosphorus are the most vital elements for life after carbon, hydrogen, and oxygen.  In its pure state in nature, nitrogen, like hydrogen and oxygen, is a diatomic molecule.  Hydrogen in nature is single-bonded to itself, oxygen is double-bonded, and nitrogen is triple-bonded.  Because of that triple bond, nitrogen is quite unreactive and prefers to stay bonded to itself.  In nature, nitrogen will not significantly react with other substances unless the temperature (activation energy) is very high.  Most nitrogen compounds in nature are created when the nitrogen and oxygen that comprise more than 99% of Earth’s atmosphere react under lightning’s influence to create nitric oxide, which then reacts with oxygen to form nitrogen dioxide, and atmospheric water combines with that to make nitrous and nitric acids, which then fall to Earth’s surface in precipitation.  Certain kinds of bacteria “fix” the nitrogen from the acidic rain into biological systems.  Also, some bacteria can fix nitrogen directly from atmospheric nitrogen, but it is an energy-intensive operation that uses the energy in eight ATP molecules to fix each atom of nitrogen.  For the earliest life on Earth, nitrogen would have been essential, and some nitrogen is fixed at volcanic vents, where life may have first appeared.

The nitrogen cycle is one of life’s most important, in which some bacteria fix nitrogen for biological use and others release nitrogen back to the atmosphere.  Nitrogen’s relatively inert nature and preference for being bonded to itself are why it is the dominant atmospheric gas, at 78% of the atmosphere’s volume.  It has held that dominant status for billions of years.

Carbon dioxide, on the other hand, has been generally decreasing as an atmospheric gas for billions of years, and has consistently declined for the past 100-150 million years.  The geochemical process is like nitrogen's in that atmospheric water combines with carbon dioxide to form a weak acid, which then falls to Earth in precipitation.  But carbon is in the same elemental family as an abundant crustal element: silicon.  Carbon replaces the silicon in crustal compounds and turns silicates into carbonates in a process called silicate weathering.[64]  Most of Earth’s primordial carbon dioxide was probably removed by this process, although the exact mechanisms are in dispute.  In all paleoclimate studies, carbon dioxide is a prominent variable, if not the prominent variable, for determining Earth’s surface temperature.  But perhaps as early as three bya, life became a significant source of carbon removal from the atmosphere, as life forms died and sank to the ocean floor, were subsequently buried by sedimentation, and tectonic plate movements further buried them into Earth’s crust and mantle.

More carbon dioxide was removed from the atmosphere by those processes than was reintroduced to the atmosphere by volcanism and other processes.  That removal and reintroduction of carbon to Earth’s surface is called the carbon cycle.  As carbon dioxide continues to be removed from the atmosphere, life will have an increasingly harder time surviving; first plants and then animals will decline and go extinct, and it will be back to microbes ruling the Earth until the Sun’s expansion into a red giant destroys Earth.[65]  The earthly end of complex life’s reign may be a billion years away, but might come much sooner.

When life first appeared, it was single-celled and simple, and such organisms are called prokaryotes today.  Below is a diagram of a typical prokaryotic cell.  (Source: Wikimedia Commons)

The diagrams used in this chapter are only intended to provide a glimpse of the incredible complexity of structure and chemistry that takes place at the microscopic level in organisms, and people can be forgiven for doubting that it is all a miraculous accident.  I doubt it, too, as did Einstein.  Prokaryotes do not have organelles such as mitochondria, chloroplasts, and nuclei, but even the simplest cell is a marvel of complexity.  If we could shrink ourselves so that we could stand inside an average bacterium, we would be astounded at its complexity, as molecules move here and there, are brought inside the bacterium’s membrane and used to generate energy and build structures, and waste products are ejected from the organism.  Cellular division would be an amazing sight.

The most significant branch of evolution’s tree of life may have been the first, when bacteria split into two branches; one branch is called Bacteria and the other is Archaea.  Darwin’s notion of slowly accumulating differences through descending organisms gradually leading to new species is confounded at the single-celled level in particular, as microbes swap DNA with abandon.  The so-called tree of life at the microbe level better resembles a web.[66]  The classifications in the evolutionary tree of life are by no means settled, with constant disputes and changes, but scientists still generally think that it is a tree, with perhaps some webby roots.

In the earliest days of life on Earth, it had to solve the problems of how to reproduce, how to separate itself from its environment, how to acquire raw materials, and how to make the chemical reactions that it needed.  But it was confined to those areas where it could take advantage of briefly available potential energy as Earth’s interior was disgorged into the oceans.  The earliest process of skimming energy from energy gradients to power life is called respiration.  That earliest respiration is today called anaerobic respiration because there was virtually no free oxygen in the atmosphere or ocean in those early days.  Respiration was life’s first energy cycle.[67]  A biological energy cycle begins by harvesting an energy gradient (usually by a proton crossing a membrane or, in photosynthesis, directly capturing photon energy), and the acquired energy powers chemical reactions.  The cycle then proceeds in steps, and the reaction products of each step sequentially use a little more energy from the initial capture until the initial energy has been depleted and the cycle’s molecules are returned to their starting point, ready for a fresh influx of energy to repeat the cycle.

Back in life’s early days, some creatures discovered another source of energy and nutrients besides the chemical brew of volcanic vents: other life forms.  Predation was then born.[68]  Evolution has plenty to answer for, and opportunistically robbing creatures of their lives to eat them is perhaps evolution’s primary “negative” outcome.

The evidence is that “only” 100 million years or so after LUCA lived, life learned its next most important trick after learning how to exist and speed up reactions: it tapped a new energy source.  Photosynthesis may have begun 3.4 bya.  Bacteria are true photosynthesizers that fix carbon from captured sunlight.  Archaeans cannot fix carbon via sunlight capture, so they are not photosynthesizers, even those that capture photons.

As with other early life processes, the first photosynthetic process was different from today’s, but the important result – capturing sunlight to power biological processes – was the same.  The scientific consensus today is that a respiration cycle was modified, and a cytochrome in a respiration system was used for capturing sunlight.  Intermediate stages have been hypothesized, including the cytochrome using a pigment to create a shield to absorb ultraviolet light, or that the pigment was part of an infrared sensor (for locating volcanic vents).  But whatever the case was, the conversion of a respiration system into a photosynthetic system is considered to have only happened once, and all photosynthesizers descended from that original innovation.[69]

Metals used by biological processes can donate electrons, unlike those other elements that primarily seek them to complete their shells.  The metals used by life are isolated in molecular cages called porphyrins.

As with enzymes, the molecules used in biological processes are often huge and complex, but ATP energy drives all processes, and that energy comes from either potential chemical energy in Earth’s interior or sunlight; even chemosynthetic organisms rely on sunlight to provide their energy.[70]  The Sun thus powers all life on Earth.  The cycles that capture energy (photosynthesis or chemosynthesis) or produce it (fermentation or respiration) generally have many steps in them, and some cycles, such as the Krebs cycle, can run backwards.[71]  Below is a diagram of the citric acid (Krebs) cycle.  (Source: Wikimedia Commons)

The respiration and photosynthesis cycles in complex organisms have been the focus of a great deal of scientific effort, and cyclic diagrams (1, 2) can provide helpful portrayals of how cycles work.  Photosynthesis has several cycles in it, and Nobel Prizes were awarded to the scientists who helped describe the cycles.[72]  Chlorophyll molecules look like antennae, with magnesium in their porphyrin cages, and long tails.  Below is a diagram of a chlorophyll molecule.  (Source: Wikimedia Commons)

Those molecules initiate photosynthesis by trapping photons.  Chlorophyll is called a pigment and, as it sits in its “antennae complex,” it only absorbs wavelengths of light that boost its electrons into higher orbits.  The wavelengths that plant chlorophyll does not absorb well are in the green range, which is why plants are green.  Some photosynthetic bacteria absorb green light, so the bacteria appear purple, and there are many similar variations among bacteria.  Those initial higher electron orbits from photon capture are not stable and would soon collapse back to their lower levels and emit light again, defeating the process, but in less than a trillionth of a second the electron is stripped from the capturing molecule and put into another molecule with a more stable orbit.  That pathway of carrying the electron that got “excited” by the captured photon is called an electron transport chain.  Separating protons from electrons via chemical reactions, and then using their resultant electrical potential to drive mechanical processes, is how life works. 

Early photosynthetic organisms used the energy of captured photons to strip electrons from various chemicals.  Hydrogen sulfide was an early electron donor.  In the early days of photosynthetic life, there was no atmospheric oxygen.  Oxygen, as reactive as it is, was deadly to those early bacteria and archaea, damaging their molecules through oxidization.  Oxidative stress, or the stripping of electrons from life’s molecules, has been a problem since the early days of life on Earth.[73]  Oxidative stress is partly responsible for how organisms age, but it can also be beneficial, as organisms use oxidative stress in various ways.

The dates are controversial, but it appears that after hundreds of millions of years of using various molecules as electron donors for photosynthesis, cyanobacteria began to split water to get the donor electron, and oxygen was the waste byproduct.  Cyanobacterial colonies are dated to as early as 2.8 bya, and it is speculated that oxygenic photosynthesis may have appeared as early as 3.5 bya and then spread throughout the oceans.  Those cyanobacterial colonies formed the first fossils in the geologic record, called stromatolites.  At Shark Bay in Australia and some other places the water is too saline to support animals that can eat cyanobacteria, so stromatolites still exist and give us a glimpse into early life on Earth.

Oxygenic photosynthesis uses two systems for capturing photons.  The first one (called Photosystem II) uses captured photon energy to make ATP.  The second one (called Photosystem I because it was discovered before Photosystem II) uses captured photon energy to add an electron to captured carbon dioxide to help transform it into a sugar.  That “carbon fixation” is accomplished by the Calvin Cycle, and an enzyme called Rubisco, Earth’s most abundant protein, catalyzes that fixation.  Below is a diagram of the Calvin cycle.  (Source: Wikimedia Commons)

Some bacteria use Photosystem I and some use Photosystem II.  More than two bya, and maybe more than three bya, cyanobacteria used both, and a miraculous instance of innovation tied them together.  Some manganese atoms were then used to strip electrons from water.  Although the issue is still controversial regarding when and how it happened, that instance of cyanobacteria using manganese to strip electrons from water is responsible for oxygenic photosynthesis.  It seems that some enzymes that use manganese may have been "drafted" into forming the manganese cluster responsible for splitting water in oxygenic photosynthesis.[74]  Water is not an easy molecule to strip an electron from; a single cyanobacterium seems to have “stumbled” into it, and it probably happened only once.[75]  Once an electron was stripped away from water in Photosystem II, stripping away a proton (a hydrogen nucleus) essentially removed one hydrogen atom from the water molecule.  That proton was then used to drive a “turbine” that manufactures ATP, and wonderful animations on the Internet show how those protons drive that enzyme turbine (ATP synthase).  Oxygen is a waste product of that innovative ATP factory.

Below is a diagram of the photosynthetic process in grass.  (Source: Wikimedia Commons)


About the time that the continents began to grow and plate tectonics began, Earth produced its first known glaciers, between 3.0 and 2.9 bya, although the full extent is unknown.  It might have been an ice age or merely some mountain glaciation.[76]  The dynamics of ice ages are complex and controversial, and numerous competing hypotheses try to explain what produced them.  Because the evidence is relatively thin, there is also controversy about the extent of Earth's ice ages.  About 2.5 bya, the Sun was probably a little smaller and only about 80% as bright as it is today, and Earth would have been a block of ice if not for the atmosphere’s carbon dioxide and methane that absorbed electromagnetic radiation, particularly in the infrared portion of the spectrum.  But life may well have been involved, particularly oxygenic photosynthesis, and it was almost certainly involved in Earth's first great ice age, which may have been a Snowball Earth episode, and some pertinent dynamics follow. 
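The "faint young Sun" point above can be checked with a standard radiative-balance calculation.  This is only a sketch: the Stefan-Boltzmann balance is textbook physics, the solar-constant and albedo values are my illustrative assumptions, and the calculation deliberately ignores the greenhouse effect, which is exactly what it is meant to expose.

```python
# Radiative balance: absorbed sunlight equals emitted thermal radiation,
# sigma * T^4 = S * (1 - albedo) / 4, solved for the effective temperature T.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S_NOW = 1361.0     # present-day solar constant at Earth, W/m^2 (assumed value)
ALBEDO = 0.3       # fraction of sunlight reflected (modern value, assumed)

def effective_temp(solar_constant, albedo=ALBEDO):
    """Effective (no-greenhouse) temperature of a planet, in kelvins."""
    absorbed = solar_constant * (1 - albedo) / 4  # the 4: sphere vs. disk area
    return (absorbed / SIGMA) ** 0.25

t_now = effective_temp(S_NOW)         # roughly 255 K, already below freezing
t_then = effective_temp(0.8 * S_NOW)  # Sun at ~80% brightness: roughly 241 K
print(round(t_now), round(t_then))
```

Even today's Sun leaves a greenhouse-free Earth frozen, and a Sun at 80% brightness leaves it colder still, which is why atmospheric carbon dioxide and methane were needed to keep early Earth from being a block of ice.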

As oxygenic photosynthesis spread through the oceans, everything that could be oxidized by oxygen was, during what is called the Great Oxygenation Event (“GOE”), although there may have been multiple dramatic events.  The event began as long ago as three bya and is responsible for most of Earth’s minerals.  The ancient carbon cycle included volcanoes spewing a number of gases into the atmosphere, including hydrogen sulfide, sulfur dioxide, and hydrogen, but carbon dioxide was particularly important.  When the continents began forming, water captured carbon dioxide and fell onto the land masses as carbonic acid, the carbon became bound into calcium carbonate, plate tectonics subducted the calcium carbonate in the ocean sediments into the crust, and the carbon was again released as carbon dioxide from volcanoes.[77]

When cyanobacteria began using water in photosynthesis, carbon was captured and oxygen released, which began the oxygenation of Earth's atmosphere.  But the process may have not always been a story of continually increasing atmospheric oxygen.  There may have been wild swings.  Although the process is indirect, oxygen levels are influenced by the balance of carbon and other elements being buried in ocean sediments.  If carbon is buried in sediments faster than it is introduced to the atmosphere, oxygen levels will increase.  Pyrite is comprised of iron and sulfur, but in the presence of oxygen, pyrite's iron combines with oxygen (and becomes iron oxide, also known as rust) and the sulfur forms sulfuric acid.  Pyrite burial may have acted as the dominant oxygen source before carbon burial did.[78]  There is sulfur isotope evidence that Earth had almost no atmospheric oxygen before 2.5 bya.[79]

About 2.7 bya, dissolved iron in anoxic oceans seems to have begun reacting with oxygen at the surface, generated by cyanobacteria.  The dissolved iron was oxidized from a soluble form to an insoluble one, which then precipitated out of the oceans in those vivid red (the color of rust) layers that we see today, called banded iron formations ("BIFs"), which became an oxygen sink and kept atmospheric oxygen low.[80]  The GOE is widely accepted to have created almost all of the BIFs, although it is not the only BIF-formation hypothesis and there is a great deal of controversy; life processes are generally considered to be primarily responsible for forming the BIFs.[81]  Most iron in the crust is bound in silicates and carbonates, and it takes a great deal of energy to extract the iron from those minerals; the oxides that comprise BIFs are much less energy-intensive to refine, as the iron is so concentrated.  Far less ore needs to be melted to get an equivalent amount of iron.  BIFs are the source of virtually all iron ore that humans have mined.  Life processes almost certainly performed the initial work of refining iron, and humans easily finished the job billions of years later.  Copper was not refined by life processes, and copper ore takes twice as much energy to refine as iron ore does.

When BIF deposition ended about 2.4 bya (maybe because all of the available iron had been removed), oxygen levels skyrocketed; they may have even approached modern levels, or may have reached only a few percent of Earth's atmosphere, but they were substantially higher than they had ever been.[82]  Not coincidentally, Earth experienced its first definite ice age, beginning 2.4 bya.

Earth's Venus-level carbon dioxide likely began declining during the Hadean Eon.  The GOE also removed atmospheric methane, which may have been created by methanogens (methane-producing archaea); a methane molecule is more than 20 times as effective as a carbon dioxide molecule at absorbing radiation in Earth’s atmosphere.  Earth’s first ice age lasted for 300 million years.[83]  There is no scientific consensus regarding the exact dynamics that caused that first ice age (although I consider the above dynamics persuasive and likely relevant), but there is general agreement that it was ultimately due to reduced greenhouse gases.  That first ice age might have been a “Snowball Earth” event, in which Earth’s surface was almost completely covered in ice.

The high oxygen levels may have turned pyrite on the continents into acid, which increased erosion, flooded essential nutrients, particularly phosphorus, into the oceans, and would have facilitated a huge oceanic bloom.[84]  But this also happened in the midst of Earth's first ice age, so increased glacial erosion may have been primarily responsible, as we will see with a Snowball Earth that happened more than a billion years later.  The two largest carbon-isotope excursions (carbon-13/12) in Earth's history are related to ice ages.  The first was a positive excursion (more carbon-13 than expected), and the second was negative.  Scientists are still trying to determine what caused them.  The first, the Lomagundi excursion, began a little less than 2.3 bya, lasted for more than 200 million years, and reflected great carbon burial.[85]  When the Lomagundi excursion finished, oxygen levels seem to have crashed back down to almost nothing and may have stayed that way for 200 million years, before rebounding to a few percent, at most, of Earth's atmosphere, and oxygen stayed around that low level for more than a billion years.[86]

Atmospheric oxygen prevented Earth from losing its water as Venus and Mars did, which saved all life on Earth.  An atmosphere of as little as two percent oxygen may have been adequate to form the ozone layer, and that level was likely first attained during the first GOE.[87]  The ozone layer absorbs most of the Sun’s ultraviolet light that reaches Earth.  Ultraviolet light carries more energy than visible light; it breaks covalent and other bonds and wreaks biological havoc, particularly to DNA and RNA.  Before the ozone layer formed, life would have had a challenging time surviving near the ocean’s surface.  Ultraviolet light damage presented a formidable evolutionary hurdle, and proteins and enzymes that assist cellular division appear to be related to those that arose to repair damaged DNA.  Life has adapted to many hostile conditions in Earth’s past, but if conditions change too rapidly, life cannot adapt in time to survive.  Many mass extinctions that dot Earth’s past were probably the result of conditions changing too rapidly for most organisms to adapt, if they could have adapted at all.  During the Permian-Triassic extinction event, the greatest extinction event yet known, there is evidence that the ozone layer was depleted and ultraviolet light damaged the photosynthesizing organisms that formed the base of the food chains.  From the formation of stromatolites to mass extinction events, ultraviolet light has played a role.[88]

Around the end of that first ice age, another unique event transpired with enormous portent for life’s journey on Earth: one microbe enveloped another, and both lived.  The prevailing hypothesis is that an archaean enveloped a bacterium, either by predation or colonization, and they entered into a symbiotic relationship.  The leading explanation of that symbiosis, called the hydrogen hypothesis, is that the archaean consumed hydrogen and the bacterium produced hydrogen.[89]  That unique event transpired around two bya and led to complex life on Earth.[90]  That enveloped bacterium was the parent of all mitochondria on Earth today, which are the primary energy-generation centers in all animals.  About 10% of the human body’s weight is mitochondria.[91]  If not for the red of hemoglobin and the melanin in skin, humans would look purple, which is the mitochondria’s color.  That purple color is probably because the original enveloped bacterium that led to the first mitochondrion was purple.[92]

The mitochondrion’s creation had impact far beyond “only” creating “power plants” in cells; it allowed cells to grow to immense size.  That first mitochondrion became, according to the most restricted definition, the first organelle.  Cells with organelles are called eukaryotes, and today they are generally thought to have descended from that instance when a hydrogen-eating archaean enveloped a hydrogen-producing bacterium.  That animation of ATP Synthase in action depicts a typical event in life forms - the generation of energy as protons cross a membrane - which in that instance rotates the turbine that manufactures ATP.  For prokaryotes, the cellular membrane is their only one and the site of the process that fuels their lives.  Cells are three-dimensional entities, and if spherical, a cell’s volume increases with the cube of its diameter, while its membrane area grows only with the square.  If the diameter of a spherical bacterium is doubled, its surface area increases four times, but its volume increases eight times, and the disparity between surface area and volume increases as the diameter does.[93]  For a prokaryote, it means that the membrane-to-cytoplasm ratio quickly shrinks as the cell grows, so that less ATP serves more cytoplasm.  That means that with increasing size comes slower metabolism, and the cell becomes sluggish.  Imagine a grown man trying to live on the calories that he ingested when he was an infant.  He would quickly starve to death or have to hibernate each day.
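The square-versus-cube relationship above is easy to verify with standard sphere geometry; the diameters below are arbitrary units, chosen only to show the doubling case from the text.

```python
import math

def sphere_area_volume(diameter):
    """Surface area and volume of a sphere of the given diameter."""
    r = diameter / 2
    area = 4 * math.pi * r**2          # grows with the square of the diameter
    volume = (4 / 3) * math.pi * r**3  # grows with the cube of the diameter
    return area, volume

# Double a spherical cell's diameter (arbitrary units):
a1, v1 = sphere_area_volume(1)
a2, v2 = sphere_area_volume(2)
print(round(a2 / a1))                # surface area quadruples: 4
print(round(v2 / v1))                # volume increases eightfold: 8
print(round((a1 / v1) / (a2 / v2)))  # membrane area per unit volume halves: 2
```

Each doubling of diameter halves the membrane area available per unit of cytoplasm, which is the geometric squeeze that limited prokaryote size.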

Prokaryotic cells are limited in size because their energy production only takes place at their cellular membranes.  In ecosystems, the race usually goes to the quick, and it is very true with bacteria, as the smallest bacteria are faster and “win” the race of survival.[94]  Mitochondria increase the membrane surface area for ATP reactions to take place, which allowed cells to grow in size.  The average eukaryotic cell has more than 10 thousand times the mass of the average prokaryotic cell, and the largest eukaryotic cells have hundreds of thousands of times the mass (or around a trillion times for ostrich eggs, for instance, which exist as single cells when formed).  Where an organism has the greatest energy needs, such as in muscle and nerve cells, the greatest numbers of mitochondria are found.  In a typical animal cell, dotted with hundreds of mitochondria, a single mitochondrion is the size of the prokaryote that became the mitochondrion, and is representative of prokaryote size in general.  That increased surface area for generating ATP allowed eukaryotic cells to grow large and complex.  There are quintillions (a million trillion) of those ATP Synthase motors in a human body, spinning at up to hundreds of revolutions per second, generating ATP molecules.[95]

It can help to think of mitochondria as “distributed” energy generation centers in eukaryotes, versus the “perimeter” energy generation in prokaryotes.  The new mode of energy production presented various challenges, but it allowed life to become large and complex.  Size is important, at the cellular level as well as the organism level.  Below is a diagram of a typical plant cell.  (Source: Wikimedia Commons)

Mitochondria provided more than increased surface area for reactions; unlike other organelles that began as bacteria (such as hydrogenosomes), mitochondria retained some of their DNA.[96]  That DNA was probably retained so that mitochondria could make key proteins vital to their functioning on the spot, instead of waiting for the nucleus to send DNA “instructions.”  Essentially, mitochondria provided flexible power generation, like a field commander empowered to make decisions far from headquarters and quickly responding to conditions on the ground.  Mitochondria move around inside the cells and provide energy where it is needed.  That flexibility of decentralized power generation may be the mitochondrion’s chief contribution to making complex life possible, and that in turn led to many changes that are characteristic of complex life, some of which follow.

Perhaps a few hundred million years after the first mitochondrion appeared, as the oceanic oxygen content, at least at the surface, increased as a result of oxygenic photosynthesis, those complex cells learned to use oxygen instead of hydrogen.  It is difficult to overstate the importance of learning to use oxygen in respiration, which is called aerobic respiration.  Before its appearance, life generated energy via anaerobic respiration and fermentation.  Because oxygen is second only to fluorine in producing energetic reactions, aerobic respiration generates, on average, about 15 times as many ATP molecules per cycle as fermentation and anaerobic respiration do (although some types of anaerobic respiration can achieve four times the typical ATP yield).[97]  The suite of complex life on Earth today would not have been possible without the energy provided by aerobic respiration.  At minimum, nothing could have flown, and any animal life that might have evolved would never have left the oceans, because the atmosphere would not have been breathable.  With the advent of aerobic respiration, food chains became possible, as it is several times as efficient as anaerobic respiration and fermentation (about 40% efficient, as compared to less than 10%).  Today’s food chains of several levels would be constrained to about two levels in the absence of oxygen.[98]  Some scientists have questioned oxygen’s role in the rise of complex life, as well as the roles of oxygen and respiration in eukaryote evolution.  Whether the first animals needed oxygen at all is controversial.[99]
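The food-chain arithmetic above can be sketched in a few lines.  The sketch below is a hypothetical back-of-the-envelope model, not empirical ecology: the 5% cutoff for a viable trophic level is an arbitrary illustrative threshold, and only the efficiency figures (about 40% for aerobic respiration, less than 10% for fermentation and anaerobic respiration) come from the text.

```python
# Back-of-the-envelope sketch (hypothetical, for illustration only) of
# why respiration efficiency limits food-chain length: each trophic
# level passes on only a fraction of the energy that it consumes.

def supportable_levels(efficiency, cutoff=0.05):
    """Count trophic levels until the energy flowing up the chain
    drops below `cutoff` of the primary production at the base.
    The 5% cutoff is an arbitrary illustrative threshold."""
    energy, levels = 1.0, 0
    while energy >= cutoff:
        levels += 1
        energy *= efficiency
    return levels

aerobic = supportable_levels(0.40)    # ~40% efficient (aerobic respiration)
anaerobic = supportable_levels(0.10)  # <10% efficient (fermentation/anaerobic)
print(aerobic, anaerobic)
```

With these assumed numbers, the aerobic chain supports four levels and the anaerobic chain two, echoing the text’s “several levels” versus “about two”; the exact counts depend entirely on the chosen cutoff, but the gap between the two regimes does not.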

Complex life, by definition, has many parts, and those parts move.  Complex life needs energy to run its many moving parts.  Complexity’s dependence on greater levels of energy use not only applies to all organisms and ecosystems, but it has also applied to all human civilizations, as will be explored later in this essay.  When cells became “complex” with organelles, a tiny observer inside such a cell would have witnessed a bewildering display of activity: mitochondria sailing through the cell via cytoskeleton “scaffolding” on their energy-generating missions, the ingestion of molecules for fuel and for creating structures, the miracle of cellular division, the constant building, repair, and dismantling of cellular structures, and the ejection of waste through the cellular membrane.[100]  The movement of molecules and organelles in eukaryotic cells is accomplished by using the same protein that became muscle: actin.[101]  Prokaryotes used an ancestor of actin to move, and their flagella provide their main mode of travel, usually moving toward food and safety or away from danger, including predators.

For various reasons that are far from settled among scientists, eukaryotes did not immediately rise to dominance on Earth but were on a fairly even footing with prokaryotes for more than a billion years.  That situation was at least partially related to continental configurations and oceanic currents.

The Moon seems to have stabilized Earth’s axial tilt in relation to the Sun and kept Earth’s seasons varying within a relatively narrow range.  Without the Moon, Earth could have had up to 90° changes in its axis of rotation, instead of the 22°-to-24.5° variation of the past several million years.[102]  If that had happened, although life may have survived, Earth’s climate would have been extremely chaotic, with part of the planet going into perpetual day while another went into perpetual night, among other wild variations.  Those portions of Earth would have suffered mass-extinction effects, and the rest of the biosphere would have been extremely challenged to survive.  Complex life on Earth would little resemble today’s (if it had appeared and survived at all) if Earth’s axis had tilted chaotically and severely.  The primary effect of Earth’s stable tilt is that the planet’s entire surface receives relatively uniform and predictable energy levels.

The primary heat dynamic on Earth’s surface is that the oceans near the equator are heated by sunlight, and entropy spreads the heat toward the poles via oceanic currents.  Today’s continental configuration, with three major oceans besides the polar ones, has seen the development of a global current in which water takes about 1,600 years to complete a circuit.  Where the Atlantic Ocean meets the polar oceans, the warm surface currents cool and sink to the ocean’s bottom, which is how the oceans are oxygenated.  Without that oxygenation, there would be little life on the ocean floor or much below the surface; almost the entire global ocean would be lifeless.  Before the GOE, that was certainly the case, but relatively recent hypotheses make the case that the oceans were anoxic for more than a billion years after the GOE began, largely because of continental configurations and geophysical and geochemical processes. 

Many people are familiar with the term Pangaea, the supercontinent into which all of today’s continents were once merged.  Pangaea formed about 300 mya, but it was not the only supercontinent; it was just the only one existing during the eon of complex life.  One called Rodinia may have existed one bya and did not break up until 750 mya (it reformed into another supercontinent, Pannotia, 600 mya, which did not break up until 550 mya), and there is a hypothesized earlier one called Columbia that existed two bya.  There is also a hypothesis that all continental mass was contained in one supercontinent that lasted from 2.7 bya to 600 mya.  The continental land masses of two bya may have been only about 60% the size of today’s.[103]  Supercontinents are generally associated with ice ages.

When the total continental land mass was small or combined into a supercontinent, there was no land to divert that diffusion of warm water toward the poles (such diversion is what produces currents).  During those times, the global ocean became one big, calm lake, with no currents of significance.  Such oceans are called Canfield Oceans today, and they would have been anoxic; the oxygenated surface waters would not have been drawn by currents to the ocean floor, and the oceans were certainly anoxic before the GOE.  The interplay of those many interacting dynamics can be incredibly complex, which helps explain the multitude of hypotheses posited to explain those ancient events, but a leading hypothesis today is that a combination of factors, including supercontinents, variations in volcanic output, Canfield Oceans, and ice ages, prevented eukaryotic life from gaining ecosystem dominance until the waning of the second Snowball Earth event, which was the greatest series of glaciations that Earth has yet experienced.  It is known today as the Cryogenian Period, which ended about 635 mya.  The study of the Cryogenian Period, which is the subject of this essay’s next chapter, resulted in the term “Snowball Earth.”

All animals, except for some tiny ones in anoxic environments, use aerobic respiration today, and early animals (multicellular heterotrophs, which are called metazoans today) may also have used it.  Before the rise of eukaryotes, the dominant life forms, bacteria and archaea, had many chemical pathways to generate energy, as they farmed potential electron energy from a myriad of substances, such as hydrogen sulfide, sulfur, iron, hydrogen, ammonia, and manganese, and photosynthesizers got their donor electrons from hydrogen sulfide, hydrogen, arsenate, nitrite, and other chemicals.  If there is potential energy in electron bonds, bacteria and archaea will often find ways to harvest it.  Many archaean and bacterial species thrive in harsh environments that would quickly kill any complex life, and those hardy organisms are called extremophiles.  In harsh environments, those organisms can go dormant for millennia and perhaps longer, waiting for appropriate conditions (usually related to available energy).  In some environments, it can take a hundred years for a cell to divide.

But once the GOE reached the level where eukaryotes could reliably power their respiration aerobically, virtually all complex life went “all in” with aerobic respiration, and all plants engage in oxygenic photosynthesis.  The conventional view has long been that the GOE was a microbe holocaust, as most anaerobic microbes died from oxygen damage.  However, there is little evidence for a holocaust.  Today, it looks more like the anaerobes were driven to the margins where oxygen is scarce (underground, and in some anoxic waters such as today’s Black Sea) while aerobes quickly came to dominate the planet.[104]  Once the regime of oxygenic photosynthesis and aerobic respiration was achieved around two bya, the cycle of photosynthesizers creating oxygen and aerobes eating it began.  Atmospheric carbon dioxide and oxygen levels have seesawed since the beginning of the eon of complex life, and probably earlier.  For instance, the coal beds that humanity is mining and burning with such abandon today were created because trees produced lignin that allowed them to grow tall; it took about 100 million years for a fungus to learn how to break lignin down, and like the other big events, that trick was probably learned only once.  Consequently, carbon was buried with those trees in immense amounts and eventually formed most of Earth’s coal beds.  That time is known as the Carboniferous Period, and all of that carbon sequestered in Earth led to skyrocketing oxygen levels, the highest that Earth has yet seen.  Over the billions of years since aerobic respiration began, aerobes have consumed 99.99% of all the oxygen created by oxygenic photosynthesis.  The organic carbon corresponding to the remaining 0.01% was buried in Earth’s crust, and that burial is responsible for the generally declining atmospheric carbon dioxide levels.  It has been estimated that there is 26,000 times more organic carbon buried in Earth’s crust than exists in today’s biosphere.[105]

The time between 1.8 bya and 800 mya is called “the boring billion years” in scientific circles, because no dramatic evolutionary events left a fossil record, likely because the oceans were largely anoxic and rich in hydrogen sulfide, which prevented eukaryotes from attaining dominance.[106]  It is also speculated that a shortage of molybdenum, which bacteria use to fix nitrogen, may have contributed.

During that “boring” time before complex life appeared, key biological events happened that were critical for the later appearance of complex life, and some of them follow.  About 1.5 bya, eukaryotic organisms clearly appear in the fossil strata, but only as simple spheroids and tubes.[107] 

About 1 bya, stromatolites began to decline and microbial photosynthesizers began to evolve spines, probably due to predation pressure from protists, which are eukaryotes.  Eating stromatolites may reflect the first instance of grazing, although grazing is really just a form of predation.  The difference between grazing and predation is the prey.  If the prey is an autotroph (it fixes its own carbon, by using energy from either sunlight capture or harvesting the energy potential of inorganic chemicals), eating it is called grazing, and if the prey gets its carbon from eating autotrophs (such creatures are called heterotrophs), then eating it is called predation.  There are other categories of life-form consumption, such as parasitism and detritivory (eating dead organisms), and there are many instances of symbiosis.  For complex life, the symbiosis between the mitochondrion and its cellular host was the most important one ever.

Just as mitochondria were “invented,” so were chloroplasts: somewhere between 1.6 bya and 600 mya, a eukaryote ate a cyanobacterium and both survived, and that cyanobacterium became the ancestor of all chloroplasts, the photosynthetic organelle in all plants.[108]  As with similar previous events, it appears to have happened only once, and all plants are descended from that unique event.[109]  The invention of the chloroplast quickly led to the first multicellular eukaryotes, algae, which were the first plants.  The first algae fossils are from about 1.2 bya.[110]  Most algae species, however, are not called plants, as they are not descended from that instance when a eukaryote ate a cyanobacterium.  The non-plant algae, such as kelp, also have chloroplasts, acquired in various “envelopment” events in which algae chloroplasts were eaten and both the grazers and the chloroplasts survived.  Below is the general outline of the tree of life today, in which bacteria and archaea combined to make eukaryotic cells, a cyanobacterium was enveloped by a protist to make plants, and all complex life developed from protists.  (Source: Wikimedia Commons)

Since mitochondria are the energy-generation centers of eukaryotic cells (some eukaryotes lost their mitochondria, usually because the mitochondria evolved into other organelles, such as mitosomes and hydrogenosomes), they present issues similar to those of industrialized humanity’s energy generation today.  Power plants have pollution issues and can explode and create environmental catastrophes, such as what happened at Chernobyl and Fukushima.

A free radical is an atom, molecule, or ion with an unpaired valence electron or an unfilled shell, which thus seeks to capture an electron.  The electron transport chain used to create ATP in a mitochondrion leaks electrons, which creates free radicals, and those radicals will take an electron from wherever they can get it.  Aerobic respiration creates some of the most dangerous free radicals, particularly the hydroxyl radical.  The more hydroxyl radicals created, the more damage inflicted on neighboring molecules.  Another free radical created by that electron leakage is superoxide, which can be neutralized by antioxidants, but there is no avoiding the damage produced by the hydroxyl radical.[111]  Those kinds of free radicals are called reactive oxygen species (“ROS”).  ROS are not universally deleterious to life processes, but if their production spins out of control, the oxidative stress that they inflict can cripple biological structures.  ROS damage can trigger programmed cell death, called apoptosis, which is a maintenance process for complex life.  Antioxidants are one way that organisms defend against oxidative stress, and vitamin C is a standard antioxidant.  Antioxidants usually serve multiple purposes in cellular chemistry, and antioxidant supplements generally do not work as advertised: not only do they fail to target the reactions that might be beneficial to prevent, but they can also interfere with reactions that are necessary for life processes.  Antioxidant supplements are blunt instruments that can cause more harm than good.[112] 

There is plenty of uncertainty and controversy regarding just how connected the issues may be, but it appears that keeping some DNA at the mitochondria, in order to have more efficient and flexible energy generation, helped lead to the genetic phenomenon known as sexual reproduction.  Bacteria have swapped DNA in reproduction since life’s early days, but the process of meiosis, in which two parent life forms split and recombine their DNA to produce an offspring, is unique to eukaryotes, and that form of reproduction appeared between 1.2 and 1.0 bya.  As with other seminal events, it seems that sexual reproduction using meiosis happened once, and all eukaryotes that reproduce sexually are descended from that one instance.  Protists were the first organisms to reproduce sexually.

Again, the dates for these events are rather rough, but if the creation of the chloroplast happened once and the creation of sexual reproduction happened once, then sexual reproduction must have come before the chloroplast, as many plants reproduce sexually.  If it turns out that the chloroplast really is 1.6 billion years old, then the current date for sexual reproduction would need to be pushed back, or the “sex was invented once” idea would have to be discarded; biologists would probably decide that the date of sex’s appearance needed to be pushed back, even without fossil evidence of it.

Many principles of evolutionary theory have not changed much since Darwin, and one of them is that when one species gains the “upper hand” in the struggle of life on Earth, as there is only so much sunlight and nutrients to go around, the losers become marginalized or go extinct.  Ultimately, the species with the highest carrying capacity, or ability to extract energy from its environment, wins.[113]  There are many ways, however, to attain that winning carrying capacity.  Another Darwinian concept is that species adapt to their environments (which include other species) to benefit that species, not any other (and Darwin used the concept at the organism level, not the species level).  Darwin’s idea that all life on Earth descended from a common ancestor is a central feature of evolutionary theory.  But Darwin’s idea of gradual changes leading to speciation is confounded by the appearance of mitochondria, which led to complex life.  There was nothing gradual about an archaean swallowing a bacterium and both surviving, and the bacterium eventually became the power plant for all animals.  It was a radical change and a chasm between simple and complex life.[114]

Another evolutionary concept is that all changes had mechanical reasons for happening (again, today’s science has nothing to say about any intent), and each mechanical change must have served some purpose in improving an organism’s chances of surviving to reproduce, or at least must not have unduly impaired them.  As evolution progressed, each species was like a traveler on a road: the farther down its developmental road a species went, the more the “lifestyle” opportunities created by its biological operation precluded other lifestyles.  For instance, trees will never become Ents.  Trees went down the path of roots, lignin, growing taller than their neighbors, and the like.  A plant cannot choose locomotion as a way of life; it does not generate enough energy for it, for one thing.  Animals went down a very different evolutionary path than plants did, and muscles, brains, livers, and the like have no analogy in plants; by themselves, plants will not grow muscles or brains anytime soon, although humans have been making radical changes in animals over brief periods of time, such as the many breeds of dog.[115]

The nutrient cycling that life contributes to, and the generation of the oxygen that maintains the ozone layer, were all initially performed by prokaryotes, and prokaryotes will continue to perform them long after complex life goes extinct.  Complex life is largely unnecessary for making Earth inhabitable.  Microbes do not need complex life.[116]  Earth’s biomass today is about half prokaryote and half eukaryote.

During that “boring billion years,” sexual reproduction was invented, plants became possible, and the rise of grazing and predation had eonic significance.  While many critical events in life’s history were unique, one that was not is multicellularity, which independently evolved dozens of times; some prokaryotes even form multicellular structures, including colonies of specialized organisms.[117]  There are various hypotheses to explain why life went multicellular, but the primary advantage was size, which would become important in the coming eon of complex life.  The rise of complex life might have happened faster than the billion years or so that followed the setting of the basic foundation (the complex cell and oxygenic photosynthesis), but geophysical and geochemical processes had their impacts.  Perhaps most importantly, the oceans probably did not get oxygenated until just before complex life appeared, as they were sulfidic Canfield Oceans from 1.8 bya to 700 mya.  Atmospheric oxygen is currently thought to have remained at only a few percent at most until about 850 mya, although there are recent arguments that it remained low until only about 420 mya, when large animals began to appear and animals began to colonize land.[118]  Just as the atmospheric oxygen content began to rise, there came the biggest ice age in Earth’s history, which probably played a major role in the rise of complex life.


The Cryogenian Ice Age and the Rise of Complex Life

Earth’s Major Ice Ages

Major Ice Age: c. 2.4 to 2.1 bya
Impact on Ecosphere: Perhaps little – only prokaryotes existed.
Suspected Primary Cause(s): Early stage of Great Oxygenation Event.

Major Ice Age: c. 850 to 635 mya
Impact on Ecosphere: Perhaps great – life may have been nearly extinguished, and the rise of complex life followed the Cryogenian.
Suspected Primary Cause(s): Supercontinent breakup and resultant runaway effects.

Major Ice Age: c. 460 to 420 mya
Impact on Ecosphere: Caused the first great mass extinction.
Suspected Primary Cause(s): Gondwana drifted over the South Pole.

Major Ice Age: c. 360 to 260 mya
Impact on Ecosphere: Destroyed Earth’s first rainforests and resulted in a mass extinction that led to the rise of reptiles.
Suspected Primary Cause(s): Carbon sequestering by rainforests and Gondwana at the South Pole.

Major Ice Age: c. 2.5 mya to present
Impact on Ecosphere: Growing and retreating ice sheets led to cooling and drying, warming and moistening phases.
Suspected Primary Cause(s): The ultimate cause is declining carbon dioxide levels.  The first proximate cause was probably Antarctica covering the South Pole and becoming isolated.  The second proximate cause was probably the formation of a land bridge between the Americas.  The third proximate cause is variation in Earth’s orientation to the Sun.

Reconstruction of supercontinent Rodinia at 1.1 bya (Source: Wikimedia Commons)

Chapter summary:

This chapter will provide a somewhat detailed review of the Cryogenian Ice Age and its aftermath, including some of the hypotheses regarding it, evidence for it, and its outcomes, as the eon of complex life arose after it.  The Cryogenian Period ran from about 850 mya to 635 mya.  This review will sketch the complex interactions of life and geophysical processes, and the increasingly multidisciplinary methods being used to investigate such events, which are yielding new and important insights.

The idea of an ice age is only a few hundred years old, and it was first publicly proposed as a scientific hypothesis in 1837 by Louis Agassiz, who got his first ideas from Karl Schimper and others.[119]  There had also been proposals for ice ages in the preceding decades.  By the 1860s, most geologists accepted the idea that there had been a cold period in Earth’s recent past, attended by advancing and retreating ice sheets, but nobody really knew why.[120]  Hypotheses began to proliferate, and in the 1870s, James Croll proposed the idea that variations in Earth’s orientation to the Sun caused the continental ice sheets.  Because of problems in matching his hypothesis with the dates adduced for ice age events, it fell out of favor and was considered dead by 1900.[121]  Croll’s work regained its relevance with the publication of a paper by Milutin Milanković (usually spelled Milankovitch in the West) in 1913, and by 1924, Milankovitch was widely known for explaining the timing of advancing and retreating ice sheets during the current ice age.[122] 

The book that made Milankovitch famous (Croll’s work is still obscure, even though Milankovitch gave full credit to Croll in his work) was co-authored by Alfred Wegener, who a decade earlier had first published his hypothesis that the continents had moved over the eons.  As is often the case with radical new hypotheses, aspects of it had previously existed in various stages of development, but Wegener was the first to propose a comprehensive hypothesis to explain an array of detailed evidence.  Wegener was a meteorologist working outside of his specialty when he proposed his “continental drift” hypothesis.  His hypothesis was harshly received and dismissed by the day’s orthodoxy, and Wegener died in 1930 while setting up a research station on Greenland’s ice sheet.  His continental drift hypothesis quickly sank into obscurity.  It was not until my lifetime, when paleomagnetic studies confirmed his views, that Wegener’s work returned from exile and plate tectonics became a cornerstone of geological theory.  Ice age data and theory do not pose an immediate threat to the global rackets or "national security," so the history of developing the data and theories has been publicly available. 

Wegener concluded, based on his gathered evidence, that there was a global ice age in the Carboniferous and Permian periods.  He was right.[123]  Nearly 50 years later, in 1964, the same year that the first symposium of the plate tectonic era was held, Brian Harland proposed, based on paleomagnetic evidence, that a global ice age immediately preceded the Cambrian Period, when even the tropics were buried under ice.  That was the first time that a truly global glaciation was proposed, and Harland’s idea developed into what is today called the Snowball Earth hypothesis.

Ice ages are an important realm of scientific investigation.  Humanity’s colossal burning of Earth’s hydrocarbon deposits may well be delaying the ice sheets’ return; they have been advancing and retreating in rhythmic fashion for the previous million years.[124]  The accepted tipping point of the current pattern is Earth’s orientation toward the Sun, particularly the eccentricity of Earth’s orbit, which has a roughly 100,000-year cycle.  Although Earth’s orientation is universally considered to be the tipping-point variable, it is not the only influence.  The ultimate cause has been steadily declining atmospheric carbon dioxide levels.  Antarctica began developing its ice sheets about 35 mya, due to its position near the South Pole and declining carbon dioxide levels.  The current ice age began 2.5 mya and was likely initiated by the formation of Panama’s isthmus about three mya, which separated the Atlantic and Pacific oceans and radically altered oceanic currents.  Also, the Arctic Ocean is virtually landlocked.  Those factors all contributed to the current ice age.

When investigating how ice ages begin and end, positive and negative feedbacks are considered.  A positive feedback will accentuate a dynamic and a negative feedback will mute it.  In the 1970s, James Lovelock and the author of today’s endosymbiotic theory, Lynn Margulis, developed the Gaia hypothesis, which posits that Earth has provided feedbacks that maintain environmental homeostasis.  Under that hypothesis, environmental variables such as atmospheric oxygen and carbon dioxide levels, ocean salinity levels, and Earth’s surface temperature have been kept relatively constant by a combination of geophysical, geochemical, and life processes, which have maintained Earth’s inhabitability.  The homeostatic dynamics were mainly negative feedbacks.  If positive feedbacks dominate, then “runaway” conditions happen.  In astrophysics, runaway conditions are responsible for a wide range of phenomena.  A runaway greenhouse effect may be responsible for the high temperature of Venus’s surface.  Climate scientists today are concerned that burning the hydrocarbons that fuel the industrial age may result in runaway climatic effects.  Mass extinctions are the result of Earth's becoming largely uninhabitable by the organisms existing during the extinction event.  The ecosystems then collapse as portions of the food chains go extinct.  Mass extinction specialist Peter Ward recently proposed his Medea hypothesis as a direct challenge to the Gaia hypothesis. 

Gaian and Medean dynamics have both played roles in the development of Earth and its biosphere, and positive and negative feedbacks have had impacts.  Life saved Earth’s oceans with its negative feedback on hydrogen's loss to space, without which life as we know it on Earth probably would not exist.  But there is also evidence that life contributed to mass extinction events.

Investigating the Cryogenian Ice Age led to finding evidence of runaway effects causing dramatic environmental changes, and the Cryogenian Ice Age’s dynamics will be investigated and debated for many years.  The position of Antarctica at the South Pole and the landlocked Arctic Ocean have been key variables in initiating the current ice age, and another continental configuration that could contribute to initiating an ice age is when a supercontinent is near the equator, which was the case during the Cryogenian Ice Age and the one in the Carboniferous and Permian periods.  A hypothesis is that Canfield Oceans can accompany supercontinents, so warm water is not pushed to the poles as vigorously.[125]  A supercontinent near the equator would not normally have ice sheets, which means that silicate weathering would be enhanced and remove more carbon dioxide than usual.  Those conditions could initiate an ice age, beginning at the poles.  It would start out as sea ice, floating atop the oceans. 

Around the time when Harland first proposed a global ice age, a climate model developed by Russian climatologist Mikhail Budyko concluded that if a Snowball Earth really happened, the runaway positive feedbacks would ensure that the planet never thawed, remaining a permanent block of ice.[126]  For the next generation, that climate model made a Snowball Earth scenario seem impossible.  In 1992, Caltech professor Joseph Kirschvink published a short paper that coined the term Snowball Earth.  Kirschvink sketched a scenario in which a supercontinent near the equator reflected sunlight, as compared to tropical oceans that absorb it.  Once the global temperature decline due to reflected sunlight began to grow polar ice, the ice would reflect even more sunlight and Earth’s surface would become cooler still.  That could produce a runaway effect in which the ice sheets grew into the tropics and buried the supercontinent in ice.  Kirschvink also proposed that the situation could become unstable.  As the sea ice crept toward the equator, it would kill off all photosynthetic life, and a buried supercontinent would no longer engage in silicate weathering.  Those were two key ways that carbon was removed from the atmosphere in that day’s carbon cycle, especially before the rise of land plants.  Volcanism would have been the main way that carbon dioxide was introduced to the atmosphere (animal respiration also releases carbon dioxide, but this was before the eon of animals), and with two key dynamics for removing it suppressed by the ice, carbon dioxide would have built up in the atmosphere.  The resultant greenhouse effect would have eventually melted the ice, and runaway effects would have quickly turned Earth from an icehouse into a greenhouse.  Kirschvink proposed the idea that Earth could vacillate between icehouse and greenhouse states. 
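The ice-albedo runaway at the heart of Budyko’s model and Kirschvink’s scenario can be sketched with a toy zero-dimensional energy balance.  The sketch below is illustrative only: the albedo values, the 265 K ice-transition temperature, and the effective emissivities standing in for greenhouse strength are assumed round numbers, not Budyko’s or Kirschvink’s actual parameters.  It shows the two behaviors described above: under identical sunlight, the planet settles into either a warm state or a frozen one depending on where it starts, and a large enough greenhouse push dislodges the frozen state.

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S0 = 1361.0       # solar constant, W/m^2 (global mean insolation is S0/4)

def albedo(T):
    """Toy ice-albedo feedback: reflective (0.60) when frozen, darker
    (0.30) when warm, with a smooth transition near freezing.  The
    265 K midpoint and 5 K width are illustrative assumptions."""
    return 0.45 - 0.15 * math.tanh((T - 265.0) / 5.0)

def equilibrate(T, emissivity, steps=20000, gain=0.01):
    """Relax the zero-dimensional energy balance to equilibrium.
    `emissivity` stands in for greenhouse strength: a lower value
    means more carbon dioxide trapping outgoing heat."""
    for _ in range(steps):
        absorbed = (S0 / 4.0) * (1.0 - albedo(T))
        emitted = emissivity * SIGMA * T ** 4
        T += gain * (absorbed - emitted)  # nudge toward balance
    return T

# Two stable states under the same sun and the same greenhouse strength:
warm = equilibrate(293.0, 0.61)   # start warm -> stays warm (near 288 K)
snow = equilibrate(230.0, 0.61)   # start cold -> locks into ice (near 251 K)

# Volcanic CO2 builds up over an ice-capped world (weathering and
# photosynthesis suppressed); lowering the effective emissivity melts
# the snowball and overshoots into a hothouse:
thaw = equilibrate(snow, 0.45)
print(round(warm), round(snow), round(thaw))
```

The bistability is the point: the warm and frozen runs differ only in their starting temperature, which is why Budyko’s model suggested that a frozen Earth could never escape on its own, and why Kirschvink’s volcanic carbon dioxide buildup (the lowered emissivity here) supplies the escape hatch, overshooting into the greenhouse state that the cap carbonates are thought to record.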

Kirschvink noted that BIFs reappeared in the geological record during the possible Snowball Earth times, after vanishing about a billion years earlier.  He noted that iron cannot build up to levels that would create BIFs if the global ocean is oxygenated.  Kirschvink proposed that the sea ice not only killed the photosynthesizers, but also separated the ocean from the atmosphere, so that the global ocean became anoxic.  Iron from volcanoes on the ocean floor would build up in solution during the icehouse phase, and during the greenhouse phase the oceans would become oxygenated and the iron would fall out in BIFs.  Other geological evidence for the vacillating icehouse and greenhouse conditions was the formation of cap carbonates over the glacial till.  It was a global phenomenon; wherever the Snowball Earth till was, cap carbonates sat atop it.  In geological circles, carbonate layers deposited during the past 100 million years are considered to be of tropical origin, so scientists think that the cap carbonates reflected a tropical environment.  The fact of cap carbonates atop glacial till is one of the strongest pieces of evidence for the Snowball Earth hypothesis.  Kirschvink finished his paper by noting that the eon of complex life came on the heels of the Snowball Earth; scouring the oceans of life would have presented virgin oceans for the rapid spread of life in the greenhouse periods, and that could have initiated the evolutionary novelty that led to complex life.

Kirschvink is a polymath who was soon pursuing other interests, and he left his Snowball Earth musings behind.[127]  Canadian geologist Paul Hoffman had been an ardent Arctic researcher, but a dispute with a bureaucrat saw him exiled from the Arctic.[128]  He landed at Harvard and soon picked Precambrian rocks in Namibia to study, as it was largely unexplored geological territory.  The Namibian strata were 600-700 million years old, instead of the two-billion-year-old rocks that Hoffman was familiar with.  In the Namibian desert, he soon found evidence of glacial till among strata that were considered tropical when they formed.

Glacial till is composed of “foreign” stones that were transported there by ice.  When the idea of ice ages was first proposed, a key piece of evidence was “erratics”: large stones found far from their place of origin.  Erratics found in ocean sediments are called dropstones.  Eventually, after plenty of controversy, scientists concluded that erratics had usually been deposited by glaciers.[129]  Oceanic dropstones were deposited by melting icebergs, and land-based erratics by retreating glaciers.

Hoffman’s team tested the carbon-13/12 ratios of the cap carbonates and found them to be lifeless.  That was key evidence presented in their 1998 paper that supported Kirschvink’s Snowball Earth hypothesis.[130]  As Kirschvink did, Hoffman and his colleagues argued that BIFs were evidence of Snowball Earth conditions, and they concluded their paper as Kirschvink did, by stating that the alternating icehouse and greenhouse periods would have produced extreme environmental stress on the ecosystems and may well have led to the explosion of complex life in their aftermath.  A few months after publication of the Hoffman team’s paper came another seminal paper, by Donald Canfield.[131]  Those papers resulted in a flurry of scientific investigations and controversy.  Hoffman engaged in feuds as Snowball Earth’s front man.  The Snowball Earth hypothesis has won out, so far.  There is a “Slushball Earth” hypothesis that states that the Cryogenian Ice Age was not as severe as Hoffman and his colleagues suggest, and there are other disputes over the Snowball Earth hypothesis, but the idea of a global glaciation is probably here to stay, with a great deal of ongoing investigation.  The record during the Cryogenian Ice Age shows immense swings in organic carbon burial, coinciding with the formation of late-Proterozoic BIFs.[132]  The Proterozoic Eon is the last one before complex life appeared on Earth. 
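The carbon measurements behind such findings are conventionally reported in “delta” notation, which compares a sample’s carbon-13/12 ratio to that of a reference standard.  A sketch of the convention (VPDB is the standard commonly used for carbonates):

```latex
\delta^{13}\mathrm{C} = \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{standard}}} - 1 \right) \times 1000
```

with the result expressed in per mil (‰).  Because photosynthesis preferentially fixes the lighter carbon-12, carbon that once passed through living organisms is depleted in carbon-13 (organic matter typically runs near −25‰, while mantle-derived carbon sits near −5‰); a “lifeless” carbonate signature is one that shows no such biological fractionation.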

Canfield’s original hypothesis, which seems largely valid today, is that the deep oceans were not oxygenated until the Ediacaran Period, which followed the Cryogenian; the process did not begin until about 580 mya and first completed about 560 mya.[133]  The wildest carbon-13/12 ratio swing in Earth’s entire geological record, called the Shuram excursion, began about 575 mya and ended about 550 mya.[134]  Explaining the Shuram excursion is one of the most controversial areas of geology today, with numerous proposed hypotheses.  If the controversies are ever resolved, I suspect that the Shuram and Lomagundi excursions, even though they run in opposite directions, will both prove to be related to the dynamics of ice ages and rising oxygen levels.  Ediacaran fauna, the first large, complex organisms to ever appear on Earth, also first appeared about 575 mya, when the Shuram excursion began.[135]  I strongly doubt that the appearance of Earth’s first large complex life at, geologically speaking, the exact moment of the wildest carbon-isotope swing in Earth’s history will prove to be a coincidence.  The numerous competing hypotheses regarding the Shuram excursion include:



Deep-ocean currents, which take atmospheric gases deep into the oceans as they do today, do not seem to have existed during supercontinental times, and atmospheric oxygen was likely only a few percent at most when the Cryogenian Period began.  Canfield’s ocean-oxygenation evidence partly came from testing sulfur isotopes.  As with carbon, nitrogen, and other elements, life prefers the lighter isotope of sulfur; sulfur-32 and sulfur-34 are two stable isotopes that can be easily tested in sediments.  Canfield proposed that in pre-Cryogenian oceanic depths, sulfate-reducing bacteria, which are among Earth’s earliest life forms and produce hydrogen sulfide as their waste product, abounded.  Hydrogen sulfide gives rotten eggs their distinctive aroma and is highly toxic to plants and animals, as it disables the enzymes used in mitochondrial respiration.  Hydrogen sulfide would react with dissolved iron to form iron pyrite and settle onto the ocean floor, just as the iron oxide that formed the BIFs did.  Sulfate-reducing bacteria enrich the sulfur-32/34 ratio by about 3%, and did so before the Cryogenian, but Ediacaran iron pyrite sediments show a 5% enrichment.  A persuasive explanation is the recycling of sulfur in the oceanic ecosystem, which can only happen in the presence of oxygen.[142]

Part of the hypothesis for skyrocketing oxygen levels during the late Proterozoic is that high carbon dioxide levels, continents that had been ground down by glaciers, and the resumption of the hydrological cycle, which would have vanished during the Snowball Earth events, combined to dramatically increase erosion.  That erosion buried carbon (the cap carbonates are part of that evidence) and thus helped oxygenate the atmosphere.  Evidence for that increased erosion also came in the form of strontium isotope analysis.  Two of strontium’s stable isotopes are strontium-86 and strontium-87.  Earth’s mantle is enriched in strontium-86 while the crust is enriched in strontium-87, so basalts exposed to the ocean at the oceanic volcanic ridges are enriched in strontium-86 while continental rocks are enriched in strontium-87.  If erosion is higher than normal, then ocean sediments will be enriched in strontium-87, which analysis of Ediacaran sediments confirmed.  That evidence, combined with carbon isotope ratios, provides a strong indication of high erosion and high carbon burial, which would have increased atmospheric oxygen levels.[143]  There is other evidence of increasing atmospheric oxygen content during the late Proterozoic, such as an increase in rare earth elements in Ediacaran sediments.  Although there is still plenty of controversy, today’s consensus is that the Cryogenian is when atmospheric oxygen levels began rising to modern levels, where they have largely stayed, although, as this essay will later discuss, oxygen levels have varied widely since the late Proterozoic (from perhaps only a few percent to 35%). 
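The strontium argument can be pictured as a two-endmember mixing calculation: seawater’s strontium-87/86 ratio falls between the mantle (ridge basalt) and continental (river runoff) values, weighted by the relative size of each flux.  A simplified sketch (the flux fraction f is an illustrative simplification, not a measured quantity):

```latex
\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\mathrm{seawater}} \approx\; f\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\mathrm{rivers}} + (1-f)\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\mathrm{mantle}}
```

where f is the fraction of dissolved strontium delivered by continental erosion (today the endmembers are roughly 0.711 for rivers and 0.703 for mantle basalts).  Unusually high erosion raises f, which pushes the ratio recorded in marine sediments toward the higher continental value.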

An increase in atmospheric oxygen usually meant a decline in carbon dioxide, which would have cooled the planet.  Recent data and models suggest that during the Cryogenian Period, global surface temperatures declined from around 40° C to around 20° C, and the global surface temperature has been below 30° C ever since, generally fluctuating between 25° C and 10° C.  Today’s global surface temperature of around 15° C is several degrees warmer than during the glacial periods of the current ice age but is still among the lowest that Earth has ever experienced, and is generally attributed to atmospheric carbon dioxide’s consistent decline during the past 100-150 million years.

Paleontologists were lonely fossil hunters for more than a century, but in my lifetime they found allies in geologists, and with the rise of DNA sequencing and genomics, molecular biologists have provided invaluable assistance.  In 1996, a paper was published that created a huge splash in paleontological circles.[144]  Molecular biologists used the concept of the “molecular clock” of genetic divergence among various species.  Their work concluded that the stage was set for animal emergence hundreds of millions of years before they appeared in the fossil record, particularly during the Cambrian Explosion.  That paper initiated its own explosion of genetic research, and the current range of estimates has the genetic origins of animals somewhere between 1.2 bya and 700 mya, but this field is in its infancy and more results are surely coming.[145]  From an early optimism that molecular clocks could finely calibrate the timing of events, scientists have come to admit that “molecular clocks” do not reliably keep time.  Today, molecular evidence is used more to tell what happened than when.  The geological and archeological record is considered more accurate for dating, and that evidence is used for calibrating molecular evidence.  Even though “molecular clocks” keep far from perfect time, they are being used to do some timekeeping, when they can be bounded by other timing evidence, with a kind of interpolation of the data points.
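The molecular-clock idea itself is a simple proportionality: if genetic substitutions accumulate at a roughly constant rate, then the genetic distance between two species measures the time since their lineages diverged.  A minimal sketch of the arithmetic (the constant rate is exactly the assumption that has proven unreliable):

```latex
T \approx \frac{D}{2\mu}
```

where T is the divergence time, D is the genetic distance (substitutions per site between the two species), μ is the substitution rate per site per year, and the factor of two reflects that both lineages accumulate changes after the split.  Calibrating μ against fossils of known age, as described above, is what anchors the clock to real dates.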

In particular, the synergies of molecular biology and paleontology have identified the importance of Hox genes in early animals.  In bilaterally symmetric animals, Hox genes dictate body development and are effectively identical in a fly and a chicken, which diverged from their common ancestor nearly 700 mya.  Hox genes became an anchor in animal development and the basics are still unchanged after more than 600 million years.

In summary, today’s orthodox late-Proterozoic hypothesis is that the complex dynamics of a supercontinent breakup somehow triggered the runaway effects that led to a global glaciation.  The global glaciation was reversed by runaway effects primarily related to an immense increase in atmospheric carbon dioxide.  During the Greenhouse Earth events, vast amounts of continental nutrients, scoured from the rocks by glaciers, would have been delivered to oceanic life, and those nutrients and the hot conditions would have combined to create a global explosion of photosynthetic life.  A billion years of relative equilibrium between prokaryotes and eukaryotes was ultimately shattered, and oxygen levels began rising during the Cryogenian and Ediacaran periods toward modern levels.  Largely sterilized oceans, which began to be oxygenated at depth for the first time, are now thought to have prepared the way for what came next: the rise of complex life. 

Fossils are created when undisturbed organism remains become saturated with various chemicals, which gradually replace the organic material with rock by several different processes of mineralization.[146]  Few life forms ever become fossils; most are instead consumed by other life.  Rare dynamics lead to fossil formation, usually anoxic conditions leading to undisturbed sediments that protect the evidence and fossilize it.  Scientists estimate that only about 1%-2% of all species that ever existed have left behind fossils that have been recovered.  Geological processes are continually creating new land, both on the continents and under the ocean.  Seafloor strata do not provide much insight into life’s ancient past, particularly fossils, because those processes recycle the oceanic crust in “mere” hundreds of millions of years.  The basic process is that, in the Atlantic and Pacific sea floors in particular, oceanic volcanic ridges spew out basalt and the plates flow toward the surrounding continents.  When oceanic plates reach continental plates, the heavier mafic (basaltic) oceanic plates are subducted below the lighter felsic (granitic) continental plates.  Parts of oceanic plates were entirely subducted into the mantle more than 100 mya and left behind plate fragments.  The continents, however, float on the heavier rocks, and tectonic and erosional processes have not obliterated all of their ancient rocks and fossils.  The oldest “indigenous” rocks yet found on Earth are more than four billion years old.  Stromatolites have been dated to 3.5 bya, and fossils of individual cyanobacteria have been dated to 1.5 bya.[147]  There are recent claims of finding fossils of individual organisms dated to 3.4 bya.  The oldest eukaryote fossils found so far are of algae dated to 1.2 bya.  
The first amoeba-like vase-shaped fossils date from about 750 mya, and there are recent claims of finding the first animal fossils in Namibia, of sponge-like creatures up to 760 million years old.[148]  Fossils from 665 mya in Australia might also be the first animal fossils, and some scientists think that animals may have first appeared about one bya.  The first animals, or metazoans, probably descended from choanoflagellates.  The flagellum is a tail-like appendage that protists primarily use to move, and it can also be used to create a current to capture food.  Flagella were used to draw food into the first animals, which would have been sponge-like.  When the first colonies developed in which unicellular organisms began to specialize and act in concert, animals were born, and it is currently thought that the evolution of animals probably happened only once.[149]  In interpreting the fossil record, there are four general levels of confidence: inevitable conclusions (ichthyosaurs were marine reptiles), likely interpretations (ichthyosaurs appear to have given live birth instead of laying eggs), speculations (were ichthyosaurs warm-blooded?), and guesses (what color was an ichthyosaur?).[150]

During the eon of complex life, the geologic time scale is divided by the distinctive fossils found in the sedimentary layers attributed to each time.  Before the eon of complex life (that ancient time, which represents about 90% of Earth’s existence so far, is today called the Precambrian supereon), fossils were microscopic and rare.  Over time, geophysical forces eradicate sedimentary layers, and the fossils of the earliest animals are found in only a few places on Earth.  The first animal fossils of significance formed about 600 mya and are strange creatures to modern eyes.  They were first noticed in 1868 in Newfoundland, but the fledgling paleontological profession dismissed them, not recognizing them as fossils.[151]  In Namibia in 1933, those Precambrian fossils were again noted but given a Cambrian chronology, because the day’s prevailing hypothesis placed the beginning of animal life at the Cambrian Explosion.  In 1946, in the Ediacara Hills of Australia, more such strange fossils were found in what were thought to be Precambrian rocks, but it was not until 1957, when those fossils were found in England in rocks positively identified as Precambrian, that the first period of animal life, the Ediacaran, was on its way to recognition (it was not officially named the Ediacaran until 2004, making it the first new period recognized since the 19th century).  In China, the Doushantuo Formation has provided fossils from about 635 mya to 550 mya, which covers the Ediacaran Period (c. 635 to 541 mya), and Ediacaran fossils have been found in a few other places.  Microscopic algae spores and animal embryos abound in Doushantuo cherts, and the spores look like little suns and other fanciful shapes.  
Almost all of those organisms went extinct within a few million years of appearing in the fossil record, in an “invisible” mass extinction.[152]  That mass extinction directly preceded the appearance of the first large organisms that Earth ever saw: Ediacaran fauna (also called “Ediacaran biota” in certain scientific circles, as there is debate whether those Ediacaran fossils were animal remains[153]). 

Early Ediacaran fossil finds were often dismissed as pseudofossils because they did not fit the prevailing idea of an animal or plant.  Dickinsonia left the most famous Ediacaran fossils, and today the most likely interpretation seems to be that Dickinsonias flopped themselves down on bacterial mats and fed on them.  When one finished eating a mat, it flopped its way to another.  It was a bilateral-like creature and is today classified into an extinct phylum with other Ediacaran fauna.  It has reasonably been speculated that Dickinsonia got its oxygen through diffusion across its surface, and that oxygen levels had to be at least 10% of today’s to achieve that.[154]  Charnia looked like a plant but almost certainly was not, and is classified into another extinct phylum.  Phyla are body plans, and Ediacaran fauna are indeed strange looking.  There is debate whether the Ediacaran fauna were plants, animals, or neither, and that debate will not end soon.  Spriggina resembled a trilobite and may have been its ancestor.  Paths in the sediments, called feeding traces, have been found, but there was no deep burrowing in the Ediacaran Period.  In the last few million years of the Ediacaran, the first skeletons appeared, particularly of Cloudinids.[155]  The characteristic Ediacaran fauna suddenly appeared in the fossil record about 575 mya and abruptly disappeared about 542 mya.  Below are images of those Ediacaran forms, which can appear so bizarre to people today.  (Source for all images: Wikimedia Commons)

There has been controversy regarding why Ediacaran fauna quickly disappeared, and even whether their disappearance qualifies as a mass extinction.[156]  One idea is that their disappearance was due to predation by what became Cambrian fauna, and another is that they ate their food sources to extinction, but it appears more likely that it was an extinction brought on by anoxic oceans.  Cambrian fauna filled the vacant niches, and then some, when the ocean became oxygenated again.  Although Ediacaran fauna did not move much, their existence probably depended on some oxygenation of the oceans, and although their metabolisms would have been slow compared to those of the animals that followed them, they may not have been able to survive in anoxic oceans.  The Ediacaran anoxic events are also when the first Middle East oil deposits formed.  The Proto-Tethys Ocean appeared in the Ediacaran, followed by the Paleo-Tethys and the Tethys, and those oceanic basins eventually all disappeared as their seafloors were subducted by colliding continents.  Those subducted basins became the primary source of Middle East oil, which is extracted from Earth’s most gigantic hydrocarbon deposits.

As with all “big idea” hypotheses such as those that gird the foregoing narrative of a global glaciation and rise of complex life, there are challenges aplenty coming from various corners, and some are:



Some hypotheses are stronger, others weaker, and some have already come and gone (and might be resurrected one day, as Birkeland’s hypothesis was?).  The coming generation of research may resolve most of these issues, but new ones will undoubtedly arise and there is obviously a long way to go before significant consensus will be reached on those ancient events.

Again, the purpose of this chapter’s presentation is to cover, in some depth, the scientific process and the kinds of controversies and competing hypotheses that can appear, and to show how intersecting lines of evidence, brought from diverse disciplines and using increasingly sophisticated tools, are providing new and important insights, not only into the distant past, but also into matters of modern-day relevance.

Readers, for the collective task that I have in mind, need to become familiar with the scientific process, partly so that they can develop a critical eye for the kinds of arguments and evidence that attend the pursuit of FE and other fringe science/technology efforts.  For the remainder of this essay, I will attempt to refrain from referring to too many scientific papers and getting into too many details of the controversies.  Following my references will help readers who want to go deeply into the issues, and many of them are as deep and controversial as the Snowball Earth hypothesis and its aftermath have proven to be, or as the attempts to explain the Shuram excursion.  These are relatively new areas of scientific investigation, partly due to an improved scientific toolset and ingenious ways of using it.  It is very possible that the controversies in those areas will diminish within the next generation as new hypotheses account for increasingly sophisticated data, and paradigmatic changes in the near future are nearly certain.  But science is always subject to becoming dogmatic, and hypotheses can prevail for reasons of wealth, power, rhetorical skill, and the like, not because they are valid.  The history of science is plagued with that phenomenon, and probably will be for as long as humanity lives in the era of scarcity.

As will become a familiar theme in this essay, the rise and fall of species and ecosystems is always primarily an energy issue.  The Ediacaran extinction is a good example: Ediacaran fauna either became an energy source for early Cambrian predators, ran out of food energy, ran out of the oxygen necessary to power their metabolisms, or lacked some other energy-delivered nutrient.  After the extinction events, biomes were often cleared for new species to dominate, which were often descended from species that were marginal ecosystem members before the extinction event.  They then enjoyed a golden age of relative energy abundance as their competitors were removed via the extinction event. 

For this essay’s purposes, the most important ecological understanding is that the Sun provides all of earthly life’s energy, either directly or indirectly (all except nuclear-powered electric lights driving photosynthesis in greenhouses, as that energy came from dead stars).  Today’s hydrocarbon energy that powers our industrial world comes from captured sunlight.  Exciting electrons with photon energy, then stripping off electrons and protons and using their electric potential to power biochemical reactions, is what makes Earth’s ecosystems possible.  Too little energy, and reactions will not happen (such as ice ages, enzyme poisoning, the darkness of night, food shortages, and lack of key nutrients that support biological reactions), and too much (such as ultraviolet light, ionizing radiation, temperatures too high for enzyme activity), and life is damaged or destroyed.  The journey of life on Earth has primarily been about adapting to varying energy conditions and finding levels where life can survive.  For the many hypotheses about those ancient events and what really happened, the answers are always primarily in energy terms, such as how it was obtained, how it was preserved, and how it was used.  For life scientists, that is always the framework, and they devote themselves to discovering how the energy game was played.


Speciation, Extinction, and Mass Extinctions

Earth’s Largest Mass Extinction Events

(Both major and minor extinction events are listed.)

| Extinction event | Percent of species or genera that went extinct | Suspected primary cause(s) | Aftermath dynamics |
|---|---|---|---|
| Microscopic organisms; may have happened numerous times before the eon of complex life | | Changing sea temperatures and chemistry. | The last microscopic mass extinction directly preceded the rise of the first animals that could be seen with the naked eye. |
| c. 542 mya | Unknown, but almost all Ediacaran forms disappeared. | | Cambrian Explosion. |
| c. 517 mya | Unknown, but small shelly fauna largely disappear. | Anoxia and changing sea levels. | Trilobite radiation. |
| c. 502 mya | 40% of marine genera. | | End of the Golden Age of Trilobites, and brachiopods diminished. |
| c. 485 mya | Unknown, but half of trilobite species went extinct.  Might be regional, but could be a major mass extinction. | Rising sea levels and anoxia. | Ordovician radiation. |
| c. 443 mya | c. 85% of all species. | Temperature and sea level changes, and anoxia. | Ecosystem functioning not fundamentally altered. |
| c. 433 mya | 50% of trilobite and 80% of conodont species, in a seafloor event. | Climate and sea level changes; a late ice age event.  Chemistry and/or current changes, or anoxia. | Disaster taxa appear afterward, followed by recovery. |
| c. 427 mya | Seafloor communities devastated. | Climate change, sea level changes, and anoxia. | |
| c. 424 mya | Seafloor communities devastated. | Climate change, sea level changes, and anoxia. | |
| Late Devonian, c. 375 to 360 mya | c. 70% of all species. | Series of extinctions.  Sea level changes and anoxia.  Mountain-building and volcanism could have triggered the ice age that caused it. | Arthropod and vertebrate colonization of land halted for 14 million years. |
| c. 325 mya | Marine extinction. | Sea level changes related to an ice age, and continental uplift related to continents colliding to form Pangaea. | End of the Mississippian and beginning of the Pennsylvanian epochs of the Carboniferous Period. |
| c. 307 mya | Rainforest collapse. | Ice age. | The rise of reptiles. |
| c. 270 to 250 mya | c. 90-96% of all species. | Series of extinctions.  Volcanism, warming, sea level changes, and anoxia.  Formation of Pangaea probably the ultimate cause. | Beginning of a new era. |
| c. 230 mya | Ammonoid and conodont mass extinction.  Near-extinction of therapsids, and extinction of a synapsid that would have been a dinosaurian competitor. | Volcanism, mountain-building. | Dinosaurs begin to dominate, and mammals first appear several million years later.  Stony corals first appear.  Some argue that this extinction is more significant than the end-Triassic extinction. |
| c. 200 mya | c. 70-75% of all species. | Volcanism, warming, sea level changes, and anoxia. | The dominance of dinosaurs. |
| c. 183 mya | Reefs and ammonites devastated. | Volcanism, anoxia. | Carbonate hardgrounds become common in calcite seas. |
| c. 145 mya | Reef collapse; bivalves had about a 20% extinction. | Falling sea levels. | Cretaceous Period rise of ornithischians. |
| c. 116 mya | Marine event.  Rudist bivalve domination temporarily halted. | | Rudists subsequently dominated, displacing coral reefs. |
| c. 93 mya | Marine event, which may have marked the final extinction of ichthyosaurs.  About 25% of marine invertebrate species went extinct.  Rudist reefs declined. | Undersea volcanism, anoxia. | Biomes recovered largely unchanged, although the world continued cooling for nearly the next 40 million years. |
| c. 66 mya | c. 75% of all species. | Bolide impact, and perhaps also volcanism and sea level changes. | The end of dinosaurs and the rise of mammals. |
| c. 56 mya | Seafloor communities devastated; up to 50% of seafloor foraminifera species went extinct. | Volcanism, release of methane hydrates from the ocean floor, possibly related to a change in ocean currents. | Warmest epoch in hundreds of millions of years, and great radiations of mammals.  A Golden Age of Life on Earth. |
| c. 50-49 to 38-37 mya | Warm-climate species migrated or went extinct.  Greatest mass extinction of the Cenozoic Era so far. | Cooling related to the transition from Greenhouse Earth to Icehouse Earth. | Cold-adapted species dominate biomes. |
| c. 34 mya | Half of European mammal genera, and all early whales. | Migration of Asian mammals to Europe; Icehouse Earth conditions in the oceans. | Relatively cold Oligocene Epoch begins. |
| c. 14.8-14.5 mya | Warm-climate species migrated or went extinct. | Mountain-building and carbon sequestration due to silicate weathering. | Earth has not been as warm since then.  Miocene apes migrate back to Africa, which might include humanity’s ancestor. |
| Late Pliocene (Atlantic), c. 3.5 to 2.5 mya | 65% of North American bivalve species, and Florida’s reefs. | Closure of the gap between the Atlantic and Pacific Oceans between the Americas, and resultant Gulf Stream dynamics, which may have initiated the current ice age. | Current ice age in the Northern Hemisphere. |
| Late Pliocene, c. 3.0 to 2.7 mya | The majority of mammalian species. | Land bridge to North America forms. | Mammals that migrated from North America dominate South American biomes. |
| c. 50 kya to present | May reach 50% or higher by 2100, and maybe far sooner, and far higher later.  An eventual Permian-level extinction is possible. | Humanity, warming. | Future extinctions still preventable by humans, and humans can create a radically different aftermath. |

Chapter summary:

In his Origin of Species, Charles Darwin sketched the processes by which species appear and disappear, today called speciation and extinction.  Origin of Species is a landmark in scientific history and is still immensely influential.  But it was also afflicted by false notions that are still with us.  Europe’s emergence from dogma and superstition has been a long, fitful, and only partially successful process.  In the 1500s, Spanish mercenaries read to the unfortunate Indians whom they conquered and annihilated a legal document stating that Creation was about five thousand years old, as scholars of the time had simply added up the Book of Genesis’s “begats.”  The Old Testament is filled with tales of genocide, miracles, and disasters, including a global flood that the faithful Noah survived.  As geology gradually became a science and processes such as erosion and sedimentation were studied, the Judeo-Christian belief in an Earth five thousand years old was discarded, and the concept of geologic time arose in Europe.

In the early 19th century, a dispute was personified by Charles Lyell, a British lawyer and geologist, and Georges Cuvier, a French paleontologist.  Their respective positions came to be known as uniformitarianism and catastrophism.  Just as the British prevailed in their global imperial competition with the French, so did uniformitarianism prevail in scientific circles.  Under the comforting uniformitarian worldview, there was no such thing as a global catastrophe.  Changes had only been gradual, and only the present geophysical, geochemical, and biological processes had ever existed.  The British Charles Darwin explicitly made Lyell’s uniformitarianism part of his evolutionary theory, and he proposed that extinction was only a gradual process.  Cuvier was the first scientist to suggest that organisms had gone extinct, which contradicted the still-dominant Biblical teachings, even in the Age of Enlightenment.[167]  Although Cuvier did not subscribe to the evolutionary hypotheses that predated Darwin, his catastrophic extinction hypothesis was informed by his fossil studies.  But Lyell and Darwin prevailed.  Suggesting that there might have been catastrophic mass extinctions in Earth’s past was an invitation to be branded a pseudoscientific crackpot.  That state of affairs largely prevailed in orthodoxy until the 1980s, after the asteroid impact hypothesis was posited for the dinosaurs’ demise.[168]  An effort led by a scientist publishing outside of his field of expertise (a Nobel laureate, in this instance) removed gradualism from its primacy.  Only since the 1980s have English-speaking scientists studied mass extinctions without facing ridicule from their peers, which has never been an auspicious career situation.  Since then, many minor and major mass extinction events have been studied, but the investigations are still in their early stages, partly due to a dogma that prevailed for more than a century and a half, and Lyell’s uniformitarianism is still influential.  Even how major mass extinctions should be ranked is in dispute, and a mid-Carboniferous extinction was recently argued to be greater than the Ordovician-Silurian extinction.[169]

Speciation has probably been more controversial than extinction.  To be fair to Darwin, genetics was not yet a science when Origin of Species was published in 1859.  It was not until the 1866 publication of an obscure paper by Silesian friar Gregor Mendel that the science of genetics began, but Mendel’s work was dismissed and ignored by mainstream science until the 20th century.  Darwin went to his grave unaware of Mendel’s work.  Today, speciation is primarily considered to be a genetic event.[170]  But similar to how proteins have several dimensions of structure that dictate their function, and emergent properties that appear at higher levels of complexity, the DNA code by itself does not explain life, although the popular Selfish Gene Hypothesis frames life and evolution as a competition between genes.[171]   

Before humans began to alter “natural” evolution with selective breeding, genetic engineering, and the like, speciation was largely thought to be the result of populations becoming genetically isolated, primarily through geographic isolation; isolated populations continue to evolve and adapt to their environments, and eventually the separated populations become separate species.  Even defining what a species is is still controversial, but the general concept is that if two sexually reproducing organisms can breed and produce fertile offspring, they are of the same species.  In light of evolutionary theory, human races are simply genetically isolated populations that evolved as they adapted to their environments, but all races can interbreed, so humanity is a single species.  Recent DNA studies suggest that white skin is an evolutionary adaptation to northern climates and may be only six thousand years old, and blue eyes and light hair are similarly new and developed in the same vicinity.[172]  As Europe’s conquest of the world and the subsequent Industrial Revolution have ended a great deal of genetic isolation, the adaptive differences seen in the races have been gradually disappearing as multiracial offspring have increased.  If humanity attains the FE epoch, “race” will disappear along with geographic isolation.

Liebig’s Law states that life can only grow as fast as its scarcest nutrient allows, and nutrient availability is clearly the limiting factor in many ecological situations.  In the oceans today, most marine life lives near land (99% of the global fish catch is caught near land), as nutrient runoffs from land feed oceanic ecosystems.  The runoff is seasonal, and so are the fish catch, the deposition of marine sediments, and the like.  Nitrogen and phosphorus are two particularly critical nutrients; blooms and die-offs are based on those elements’ availability.  In the industrial age, with phosphorus and nitrogen artificially added in agriculture, the runoff has created great algal blooms (which create hypoxic “dead zones”) and other events, and even artificially introduced carbon is a suspected variable.
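Liebig’s Law is easy to express computationally: growth tracks the scarcest nutrient, not the total supply.  The sketch below is a minimal illustration; the nutrient names and the supply/demand figures are hypothetical, chosen only to show the "minimum" logic.

```python
# A minimal sketch of Liebig's Law of the Minimum: growth is capped by
# the nutrient in shortest supply relative to demand.  All figures are
# hypothetical, for illustration only.

def limiting_growth(supply, demand):
    """Return the growth fraction allowed by the scarcest nutrient,
    and that nutrient's name."""
    ratios = {n: supply[n] / demand[n] for n in demand}
    limiter = min(ratios, key=ratios.get)
    return min(ratios[limiter], 1.0), limiter

supply = {"nitrogen": 120.0, "phosphorus": 4.0, "iron": 0.9}   # units arbitrary
demand = {"nitrogen": 100.0, "phosphorus": 10.0, "iron": 1.0}  # needed for full growth

fraction, nutrient = limiting_growth(supply, demand)
print(f"Growth limited to {fraction:.0%} by {nutrient}")
# phosphorus: 4/10 = 0.4 is the smallest ratio, so growth is capped at 40%
```

Abundant nitrogen does nothing for the organism here; only relieving the phosphorus shortage would raise the growth ceiling, which is exactly the dynamic behind agricultural fertilizer runoff.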

Since the most dramatic instances of speciation seem to have happened in the aftermath of mass extinctions, this essay will survey extinction first.  A corollary to Liebig’s Law is that if any critical nutrient falls low enough, the deficiency will not only limit growth, but will stress the organism.  If the nutrient level falls far enough, the organism will die.  A human can generally survive between one and two months without food, ten days without water, and about three minutes without oxygen.  For nearly all animals, all the food and water in the world are meaningless without oxygen.  Some microbes can switch between aerobic respiration and fermentation, depending on the environment (which might be a very old talent[173]), but complex life generally does not have that ability; nearly all complex life is oxygen-dependent.  The main exceptions are marine organisms that have adapted to varying oxygen levels.  Birds can go where mammals cannot, flying over the Himalayas, for instance, or being sucked into a jet engine several kilometers above sea level, due to their superior respiration system.  If oxygen levels rise or fall very fast, many organisms cannot adapt, and they die.

Biologists consider extinctions to be due to failure to adapt to environmental changes, and the “environment” includes other organisms.[174]  Exactly how species go extinct is still poorly understood, but the idea that organisms that capture the most energy win the battle for survival is a common understanding among biologists, and they see ecosystems organized along the position in the food chain that each organism occupies.  A popular model used for analyzing predator-prey relationships makes the relationship explicit.  There are many interacting variables, including environmental nutrients, both inorganic and those provided by life forms.  The ability of an organism or species to adapt is partly dependent on how specialized it is and how unique its habitat is.  Absolute numbers, geographic distribution, position in the food chain (higher is riskier), mobility, and reproductive rates all affect extinction risk.  During the Cambrian Period, about 80% of all animals were immobile; today, 80% of all animals are mobile.[175]  The immobile animals were at higher extinction risk, for obvious reasons.
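The kind of predator-prey model alluded to above is often the Lotka-Volterra system, which couples predator numbers to the prey "energy" they capture.  This is a minimal discrete-time sketch under that assumption; all parameter values are illustrative, not taken from the essay.

```python
# A discrete-time (Euler) sketch of the Lotka-Volterra predator-prey
# model, one common way to make the energy link between predator and
# prey explicit.  All parameters are illustrative assumptions.

def lotka_volterra(prey, pred, steps, dt=0.01,
                   a=1.0,    # prey intrinsic growth rate
                   b=0.1,    # predation rate per encounter
                   c=0.075,  # predator growth per prey consumed
                   d=1.5):   # predator death rate without prey
    history = [(prey, pred)]
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (c * prey * pred - d * pred) * dt
        prey += dprey
        pred += dpred
        history.append((prey, pred))
    return history

hist = lotka_volterra(prey=10.0, pred=5.0, steps=2000)
print("final (prey, predators):", hist[-1])
```

With these parameters the two populations chase each other in cycles: prey booms feed predator booms, which then crash the prey, which then starves the predators, and so on.  The extinction relevance is that any external shock that pushes one population near zero during a trough can break the cycle entirely.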

The evolutionary game for a species is for enough of its members to survive long enough to produce viable offspring.  Organisms have adopted myriad survival and reproduction strategies, with astonishing diversity.  There are many ways to win or lose that game, but every species eventually loses.  More than 99.9% of all species that have ever lived on Earth became extinct.  A mammalian species has a life expectancy of around a million years, while a marine invertebrate species has one of about five-to-ten million years.  Today’s global extinction rate is more than 100 times the “normal” rate (the “background rate”), and perhaps far greater, such as 10,000 times, due to human domination of the ecosphere.  The current rates could rival those of the greatest mass extinction of all: the Permian extinction.[176]
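The rate comparison above can be put in back-of-envelope numbers: a species life expectancy of about a million years implies a background rate of roughly one extinction per million species-years.  The sketch below works that out for a hypothetical monitored pool of species (the pool size is an illustrative round number, not a figure from the essay).

```python
# Back-of-envelope arithmetic behind the extinction-rate comparison:
# a ~1-million-year species lifespan implies a background rate of
# ~1 extinction per million species-years (E/MSY).  The pool size is
# a hypothetical round number for illustration.

species_lifespan_yrs = 1_000_000             # typical mammal species longevity
background_rate = 1 / species_lifespan_yrs   # extinctions per species-year

n_species = 10_000                           # hypothetical monitored pool
for multiplier in (1, 100, 10_000):          # background, low and high estimates
    per_century = background_rate * multiplier * n_species * 100
    print(f"{multiplier:>6}x background: {per_century:g} "
          f"extinctions per century in a pool of {n_species}")
```

At the background rate, a pool of 10,000 species would lose about one species per century; at 10,000 times background, the entire pool would be at risk within a century, which is the scale of a mass extinction.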

There are “normal” extinction scenarios, and the “happy ending” extinction is when a species lives, evolves, and there comes a time when it would no longer be able to produce viable offspring by breeding with its ancestors.  There obviously would not be a “bright-line” demarcation for such an event, or any way to currently test such an event, but it has happened countless times.  Another normal extinction begins when a species splits into isolated populations, such as by tectonic plates moving away from each other.  Old World and New World monkeys became separated when monkeys from Africa migrated to South America, before the Atlantic Ocean grew to its present size.  Isolated populations of a species would continue to evolve and eventually could no longer interbreed, which made them different species by definition, and perhaps neither population could breed with its ancestors at the time the populations became separated.  Both populations might continue to thrive, but one might find itself in unfavorable conditions and go extinct while the other continued living.  If those isolated populations were still the same species, the population that went extinct would be called locally extinct, but if they were separate species, then the disappearing population would be a species extinction. 

Often, a species will not become extinct, but its population will be reduced to small numbers, called a “bottleneck,” usually in refugia where it can ride out the storm and expand again when conditions improve.  That isolation can cause speciation, but it can also cause extinction.  When a bottleneck happens, the genetic diversity of the population largely vanishes, which can make it more vulnerable.  A bottlenecked population can go extinct, can speciate, can undergo an adaptive radiation when conditions improve, or can remain in its refugia and become a living fossil.  The coelacanth is a living fossil that found and remained in such refugia, and it outlived all of its cousins by hundreds of millions of years.  Coelacanths have a strategy similar to the nautilus’s, which spends most of its time in its deep-water refugia and rises at night to feed on the reefs; all of its cousins long ago went extinct.  Humans seem to have gone through a bottleneck, as have many other animals alive today, most of which are now in threatened status.  In the Devonian extinctions, armored fish species were reduced by half during the first extinction event, and the remaining population became bottlenecked.  From that bottlenecked situation, the second Devonian extinction event annihilated the remaining armored fish.

Scientists often measure extinction rates at the family and genus levels of the taxonomy; families and genera are far harder to kill off than species.  Some genera and families beat the odds and survived for hundreds of millions of years.  They are called living fossils, and usually all of their close relatives went extinct long ago.  The ubiquitous and lowly horsetail is a living fossil that first appeared nearly 400 mya.  There have been recent calls to retire the "living fossil" designation, as the survivors of their lines have evolved somewhat over the years.  However, they have not changed all that much; they are readily recognizable descendants of nearly identical-looking ancestors, and if those "living fossils" were graphically represented on the tree of life, they might instead be called the last leaves on their branch.  Perhaps "sole survivor" conveys the meaning better.  However scientists choose to term it, the fact is that those "living fossils" have an ancient lineage, have not appreciably changed in millions of years, and the large "family" that they descended from all went extinct; their branch is bare except for them.  The survivors have evolved since their close relatives died out, but there is nothing close to them on their branch of the tree of life.

Some kinds of organisms found great success with their strategies and they marginalized other kinds and even drove them to extinction, to only die off themselves in a mass extinction event, and the previously marginalized life forms flourished in the post-catastrophic biome.  The rise of mammals might have never happened without the dinosaurs’ demise.  Mass extinction events account for less than five percent of all species extinctions during the eon of complex life, but they had a profound impact on complex life’s history; the rise of mammals is only one of many radical changes.  Not only would a class of animals such as mammals thrive when their dinosaur overlords were gone, but the direction of mammal evolution was also influenced.  It took millions of years, even tens of millions of years, for ecosystems to approach their former level of abundance and diversity after a mass extinction event, and the new biomes could appear radically different from the pre-extinction biome.  The geologic periods in the eon of complex life usually have mass extinctions marking their boundaries.

Many assemblages of organisms had their “golden ages” in fresh biomes, then cycled through a plateau and decline, and finally suffered marginalization or extinction.  Sometimes the decline was relatively slow, with its ups and downs, and at other times it was over in a flash, such as the dinosaurs’ exit.

The extinction of Ediacaran fauna was the first mass extinction of organisms that could be seen with the naked human eye.  There was an extinction of microscopic eukaryotes soon before the eon of complex life began, and there may have been mass extinctions of microbes before then, but the evidence for anything that early is so thin that scientists may never know just how many mass extinctions there were.  However, bacteria and archaea, those biochemical wizards, can exist in environments far too harsh for complex life, and their communities do not have the apparent instability of complex life’s food chains, so there may have been few mass extinctions in Precambrian times.  Cyanobacteria have not fundamentally changed in billions of years, which means that their mode of living has always worked well enough to ensure their survival.  No animals have anything close to such a lengthy pedigree.

Mass extinctions always have critical geophysical aspects to them, and often geochemical.  Continental shelves under shallow seas, which are home to most marine life, are vulnerable to sea level and oceanic current changes.  Stagnant waters, or waters that have too many nutrients dumped into them, can lose their oxygen, which triggers anoxic events that kill complex life.  A continental shelf exposed to the atmosphere by a falling sea level would obviously lose its marine life, and that marine life might have had nowhere else to go.  Sea levels can rise or fall for different reasons.  The most obvious reason has been advancing and retreating ice sheets, as water is removed from or added to the oceans, but the aggregate continental landmass has always grown (possibly sporadically), continents can rise and can fall during the journeys of their tectonic plates, and the ocean’s collective basin has fluctuated in size, usually falling as water was hydrated into rocks, and also falling when tectonic plates collide to form supercontinents and rising again as they fragmented.  Generally, when sea levels fell, the continental shelves lost their marine life, and when they rose, anoxic conditions often accompanied them.  There is evidence that the ozone layer has been periodically damaged, which stressed all plants and animals that the Sun directly shined on.[177]  The positions of the continents, both in relation to each other and their proximity to the equator or poles, can have dramatic effects, including impacts on global climate.  Global climate changes and moving continents can turn rainforests into deserts and vice versa.

There is also evidence that life itself can contribute to mass extinctions.  When the GOE eventually oxygenated the oceans, organisms that could not survive or thrive around oxygen (called obligate anaerobes) retreated to the anoxic margins of the global ocean and land.  When anoxic conditions appeared, particularly when a Canfield Ocean existed, the anaerobes could abound once again, and when sulfate-reducing bacteria thrived, usually arising from ocean sediments, they produced hydrogen sulfide as a waste product.  Since the ocean floor had already become anoxic, the seafloor was already a dead zone, so little harm was done there.  The hydrogen sulfide became lethal when it rose in the water column and killed off surface life, and then wafted into the air and asphyxiated life near shore.  But the greatest harm to life may have been inflicted when hydrogen sulfide eventually rose to the ozone layer and damaged it, which could have been the final blow to an already stressed ecosphere.  That may seem a fanciful scenario, but there is evidence for it.  There is fossil evidence of ultraviolet-light-damaged photosynthesizers during the Permian extinction, as well as photosynthesizing anaerobic bacteria (green and purple), which could only have thrived in sulfide-rich anoxic surface waters.  Peter Ward made this a key line of evidence for his Medea hypothesis, and he has implicated hydrogen sulfide events in most major mass extinctions.[178]  An important aspect of Ward’s Medea hypothesis work is that about 1,000 PPM of carbon dioxide in the atmosphere, which might be reached in this century if we keep burning fossil fuels, may artificially induce Canfield Oceans and result in hydrogen sulfide events.[179]  Those are not wild-eyed doomsday speculations, but logical outcomes of current trends and a growing understanding of previous catastrophes, proposed by leading scientists.  Hundreds of hypoxic dead zones already exist on Earth, and they are primarily manmade.
Even if those events are “only” 10% likely to happen in the next century, that we are flirting with them at all should make us shudder, for a few reasons: one is the awesome damage that they would inflict on the biosphere, including humanity; another is that they are entirely preventable with technologies that already exist on the planet.

Mass extinction events can seem quite capricious as to which species live or die.  Ammonoids generally outcompeted their ancestral nautiloids for hundreds of millions of years.  Ammonoids were lightweight versions of nautiloids, and they often thrived in shallow waters while nautiloids were banished to deep waters.  Both dwindled over time, as they were outcompeted by new kinds of marine denizens.  In the Permian and Triassic mass extinctions, deep-water animals generally suffered more than surface dwellers did, but the nautiloids’ superior respiration system still saw them survive.  Also, nautiloids laid relatively few eggs that took about a year to hatch, while ammonoids laid more eggs that hatched faster.  However, the asteroid-induced Cretaceous mass extinction annihilated nearly all surface life while the deep-water animals fared better, and nautiloid embryos that rode out the storm in their eggs were survivors.  The Cretaceous extinction wiped out the remaining ammonoids, while nautiloids are still with us and comprise another group of living fossils, although that status was disputed as of 2014.[180]  Lystrosaurus was about the only land animal of significance that survived the Permian extinction, and it dominated the early Triassic landmass as no animal ever has.  It comprised about 95% of all land animals.  Why Lystrosaurus, which was like a reptilian sheep?  Nobody knows for sure, but it may have been the luck of the draw.[181]  Perhaps relatively few bedraggled individuals existed in some survival enclave until the catastrophe was finished, and then they quickly bred unimpeded until the supercontinent was full, for the most spectacular species radiation of all time, at least until humans arrived on the evolutionary scene.

Many causes for mass extinctions have been suggested.  Cuvier speculated that extinctions might have regular periodicity, and other scientists have since pursued that hypothesis.  Around 30 million years is the average time between mass extinctions, which set scientists speculating about whether galactic dynamics could be responsible.  Gamma ray bursts from supernovas have been proposed as one possible agent, as have bolide events, but the periodicity hypothesis has fallen out of favor.[182]  The apparent periodicity of mass extinctions could be because it takes millions of years for complex ecosystems to recover from the previous extinction events and build themselves into unstable states again, when new events cause the ecosystems to collapse.[183]

Before the era of mass extinction investigation that began in the 1980s, a hundred hypotheses were presented in the scientific literature for the dinosaur extinction, but it was a kind of scientific parlor game.  Scientists from all manner of specialties concocted their hypotheses.[184]  But even during the current era of scientific study of mass extinctions, much is unknown or controversial and even the data is in dispute, let alone its interpretation.  Dynamics may have combined to produce catastrophic effects, such as increasing atmospheric carbon dioxide concentration warming the land and oceans to the extent that otherwise stable methane in hydrates on the ocean floor and in permafrost would be liberated and escape into the atmosphere.  That situation is currently suspected to have contributed to the Permian, Triassic, and Paleocene-Eocene extinctions, as well as helping end the Cryogenian Ice Age.  Today, there is genuine fear among climate scientists that those dynamics might return in the near future, as global warming continues and hydrocarbons are burned with abandon, which could contribute to catastrophic runaway conditions.  Wise scientists admit that humanity is currently conducting a huge chemistry experiment with Earth, and while the outcomes are far from certain, the risk of catastrophic outcomes is very real and growing.

Recent environmental studies show that disturbed ecosystems can suffer cascading failures, in which the removal of one part of a food chain collapses the entire chain, and entire ecosystems can go extinct.  Cascades in today's world usually begin when the apex predator is removed (by humans, in what is called a trophic cascade[185]), but not always.  Those cascading events can happen in aquatic and terrestrial environments.  Food chains are essentially energy chains made possible by aerobic respiration, and the more complex they are, the more energy is required to sustain them.  The leading hypothesis for why complex civilizations collapse is also an energy-scarcity dynamic.  Also, the most compelling findings that I have encountered regarding degenerative disease in humans show that if individual cells no longer have their nutritional needs met by the organism, they stop acting out their role as specialized cells and “go rogue.”  It may be difficult or impossible for scientists to reconstruct and test cascading-failure hypotheses for ancient mass extinction events, but such failures may have played a major role in them, if not the dominant role.

Mass extinction events may be the result of multiple ecosystem stresses that reach the level where the ecosystem unravels.  Other than the meteor impact that destroyed the dinosaurs, the rest of the mass extinctions seem to have multiple contributing causes, and each one ultimately had an energy impact on life processes.  The processes can be complex and scientists are only beginning to understand them.  This essay will survey mass extinction events and their aftermaths in some detail, as they were critical junctures in the journey of life on Earth.

In 1972, Niles Eldredge and Stephen Jay Gould published their theory of punctuated equilibrium, which has generated plenty of controversy.  The basic idea is that species usually evolve slowly and even remain in a kind of stasis, except in exceptional times, when they can evolve relatively quickly.  Those exceptional times are often when new ecological niches become available, such as when a new biological feature allows exploitation of previously unavailable niches, or after an ecosystem is wiped clean by a mass extinction.  If a creature finds a way of life that works, and it can keep exploiting and defending its unique niche, and the niche does not disappear, it can keep doing it for hundreds of millions of years without any significant changes, as the horsetail, nautilus, and coelacanth have done.[186]

Gene duplication is an important kind of genetic innovation that leads to speciation, which begins when a gene is duplicated, seemingly in error, and gets a “free ride,” like a spare part that never gets used.  The spare can then “experiment,” which can lead to a new and useful gene that perhaps codes for a new biological feature that enhances an organism’s ability to survive or reproduce.  About 15% of humanity’s genes arose through gene duplication events, and in eukaryotes, the rate of gene duplication is around 1% per gene per million years.[187]  In the wake of mass extinctions, new species appear at high rates in what is called an adaptive radiation.  A leading hypothesis is that those post-extinction times allow for a golden age when life is easy, without the resource competition typical in more crowded biomes.  In such environments, organisms with duplicate genes and other genetic “defects” survive, and after long enough, those mutations become useful and lead to new species.  The most famous such adaptive radiation was the Cambrian Explosion, although its character was different from other radiations, when new body plans were invented as never before.[188]
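The quoted duplication rate is easy to put in perspective with rough arithmetic.  The genome size and time span below are illustrative round numbers, not figures from the essay.

```python
# Rough illustration of the gene-duplication rate quoted in the text:
# ~1% per gene per million years in eukaryotes.  The gene count and
# time span are illustrative assumptions.

rate_per_gene_per_myr = 0.01   # ~1% chance per gene per million years
genes = 20_000                 # roughly a mammalian genome's gene count
myr = 1.0                      # one million years

expected_duplications = rate_per_gene_per_myr * genes * myr
print(f"Expected duplications in {myr:g} Myr: {expected_duplications:g}")
# ~200 duplication events per million years in a 20,000-gene genome:
# a steady trickle of spare parts available for evolutionary "experiment."
```

Most of those spares decay into nonfunctional pseudogenes, but over deep time the minority that acquire useful new functions add up, which is consistent with roughly 15% of human genes having arisen by duplication.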

Oxygen levels have fluctuated far more than temperature, ocean salinity, and pH have during the eon of complex life.  Peter Ward proposed that fluctuating atmospheric oxygen levels have not only contributed to mass extinction scenarios, but adapting to low oxygen levels has been a key stimulus for biological innovation.[189]  In summary, speciation is a reaction of organisms to challenge and opportunity which is eventually reflected in their DNA. 


The Cambrian Explosion

Timeline of Key Biological Innovations in the Eon of Complex Life



First bilaterally symmetric animals appear – c. 585-555 mya
The Cambrian Explosion – c. 541 mya
Eyes develop – c. 540 mya
First animals with specialized organs are probably worms – c. 540 mya
Arthropods appear – c. 540 mya
Mollusks appear – c. 540 mya
Teeth appear – c. 540-530 mya
Vertebrates appear – c. 530-525 mya
Fish appear – c. 530-525 mya
Reef ecosystems appear – c. 513 mya
Land plants appear – c. 470 mya
Land animals appear – c. 430-420 mya
Jaws appear – c. 420 mya
Bony fish appear – c. 420 mya
Vascular plants appear – c. 410 mya
Trees appear – c. 385 mya
Fish migrate to land – c. 375 mya
Seed-reproducing plants appear – c. 375 mya
Amphibians appear – c. 365 mya
Amniotes appear – c. 320-310 mya
Lignin-digesting organism appears – c. 290 mya
Dinosaurs appear – c. 243 mya
Mammals appear – c. 225 mya
Birds appear – c. 160 mya
Flowering plants appear – c. 160 mya
Placental mammals appear – c. 160 mya
Primates appear – c. 85-65 mya
Marsupials appear – c. 65 mya
Rodents appear – c. 65 mya
Elephants appear – c. 60 mya
Rhinoceroses appear – c. 55 mya
Horses appear – c. 52 mya
Camels appear – c. 50 mya
Monkeys appear – c. 45-40 mya
Whales appear – c. 41 mya
Bears appear – c. 38 mya
Apes appear – c. 35-29 mya
Deer appear – c. 35-30 mya
Canines appear – c. 34 mya
C4 plants appear – c. 32-25 mya
Felines appear – c. 25 mya
Kelp appears – c. 20 mya
Great apes appear – c. 14 mya
Stone tool created – c. 3.4-3.3 mya

Global temperatures during the eon of complex life (Source: Wikimedia Commons)

Global carbon dioxide concentrations during the eon of complex life (Source: Wikimedia Commons)

World map when Cambrian Period began (c. 540 mya) (Source: Wikimedia Commons) (map with names is here)


Chapter summary:

Until Ediacaran fossils were recognized for what they were, the Cambrian Period (c. 541 to 485 mya) was considered to have produced the earliest known fossils, and that situation vexed scientists from Darwin onward.[190]  If animals just came into existence from nothing, the Creationist arguments of Darwin’s time may have had some validity.  Darwin attributed the lack of Precambrian fossils to the geological record’s imperfection.  As this essay’s previous sections have shown, scientists have filled many gaps and Darwin’s theory has held up well. 

The Cambrian Period, however, is of eonic significance and still a source of great controversy.  The Cambrian Explosion was unique: it produced the first complex, modern-looking ecosystems.[191]  Although the Cambrian Explosion is the most spectacular event in the fossil record, it is questioned whether it was really an explosion at all, and many contenders for the “cause” of the explosion have been offered.  Various hypotheses fell by the wayside over the years, but the hunt for one “cause” may be futile.  One factor may have triggered its more dramatic manifestations, but several dynamics played their roles.  There are going to be proximate and ultimate causes for events such as that.  First and foremost, the Cambrian Explosion was about size, which was aided by oxygenating the seafloor, and which interacted with developmental changes (from egg to adult) and new ecological relationships.[192]  The currently predominant hypotheses feature geophysical and geochemical processes interacting with biological ones.[193]  The increase in organism size that marked the rise of complex life is today thought to be a response to predation, which led to life’s “arms race.”[194]  The competition between organisms, locked in predator/prey, parasite/host, and grazer/grazed dynamics, is thought to be behind a great deal of evolutionary innovation called coevolution, as organisms adapted to each other.  The Red Queen hypothesis posits that the constant battle between those competing life forms led to sexual reproduction and other innovations.[195]

During the Cambrian Explosion, an ecosystem developed in which life on the sea floor, surface, and water column all interacted for the first time.  All but one of the environmental factors currently and prominently considered were energy dynamics, as the environment provided either too much or too little energy, and the nutrient hypothesis (calcium in this case) will be revisited numerous times in this essay.  A lack of nutrients, mineral and otherwise, always meant that the energy-driven dynamics that delivered the nutrients were curtailed.  If enough energy is properly applied, all nutrients can be abundant.

Before the rise of humanity and industrial agriculture, the interplay of life, climate, and land masses created the seasonal runoffs that fed oceanic ecosystems.  However, during the Cambrian Explosion the land was largely barren, as life had yet to significantly invade land.  Also, continental shelves have always been key hosts for oceanic ecosystems, as sunlight could reach the seafloor and nutrients were closer to the surface.  When supercontinents broke apart or formed as the tectonic plates danced across Earth’s crust, shallow seas were often created, which were usually quite life-friendly.  Those ancient shallow seas and swampy continental margins have great importance to today’s humanity, as our fossil fuels were usually created there.  Earth’s coal beds were created in swampy floodplain conditions, usually near coasts, and the oil deposits were created by black shale and marlstone that formed in shallow anoxic waters.  The Tethys Ocean and its predecessors (1, 2) had a half-billion-year history that began in the Ediacaran, and the Tethys finally disappeared less than 20 mya.  The shallow margins of those tropical oceans, and the anoxic events that dotted the eon of complex life, formed most of today’s oil deposits, and particularly Middle East oil.  Numerous shallow tropical seas characterized the Cambrian Period.

The first skeletons appeared in the Ediacaran, and Cambrian Period skeletons became a key aspect of the coming arms race between predator and prey.  Food chains appeared in which about ten percent of an organism’s energy was transferred to the animal that ate it.  Unlike the internal skeletons that characterized fish, amphibians, reptiles, birds, and mammals, the first skeletons were external.  Hard shells protected from predation, and the bigger the animal, the more likely it would survive (but a bigger animal also meant a bigger energy windfall if it could be eaten).  But size presented immense challenges.  Similar to how complex cells needed to solve the energy generation and distribution problem before they could grow, increasing size presented numerous problems to early complex life.  How could a large organism supply energy and other nutrients to its cells?  Remove waste?  Move?  Life solved the problems by making structures and organs from specialized cells.  By the Cambrian Period’s end, animals had developed skeletons, gills, muscles, brains, circulatory systems, digestive and eliminative systems, nervous systems, respiratory systems, and organs including eyes, livers, and kidneys.
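The ~10% energy transfer per food-chain link can be sketched numerically.  The starting energy figure below is an arbitrary illustrative number; only the tenfold attrition per trophic level comes from the text.

```python
# The ~10% trophic transfer rule sketched numerically: energy available
# at each link of a food chain, starting from an arbitrary 1,000,000
# units of primary production (figures illustrative).

TRANSFER = 0.10  # ~10% of energy passes to the next consumer

energy = 1_000_000.0
for level in ["primary producers", "herbivores",
              "carnivores", "apex predators"]:
    print(f"{level:>17}: {energy:>12,.0f} units")
    energy *= TRANSFER
# Each step up the chain cuts available energy tenfold, which is why
# long food chains are expensive and apex predators are rare.
```

The same attrition explains why trophic cascades begin at the top: the apex level runs on the thinnest energy margin and is the first to vanish when the chain is disturbed.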

Just as the aftermath of the appearance of complex life was uninteresting from a biochemical perspective, as the amazingly diverse energy-generation strategies of archaea and bacteria were almost totally abandoned in favor of aerobic respiration, biological solutions to the problems that complex life presented peaked during the Cambrian Explosion, and everything transpiring since then has been relatively insignificant.  Animals would never see that level of innovation again.  While investigating those eonic changes, many scientists have realized that the dynamics of those times might have been quite different from today’s, as once again Lyell’s uniformitarianism may be of limited use for explaining what happened.[196]  Also, scientists generally use a rule-of-thumb called Occam’s Razor, or parsimony, which states that with all else being equal, simpler theories are preferred.  Karl Popper, a seminal theorist regarding the scientific method, preferred simpler theories because they were easier to falsify.  However, this issue presents many problems, and in recent times, theories of mass extinction or speciation have invoked numerous interacting dynamics.  Einstein noted that the more elegant and impressive the math used to support a theory, the less likely the theory depicted reality.  Occam’s Razor has also become an unfortunate dogma in various circles, particularly organized skepticism, in which the assumptions of materialism and establishment science are defended, often quite irrationally.  Simplicity and complexity have seesawed over the course of scientific history as fundamental principles.  The recent trend toward multidisciplinary syntheses has generally made hypotheses more complex and difficult to test, although scientists’ improving toolset and ever-increasing, more precise data make the task more feasible than ever, at least in situations where vested interests are not interfering.

Phyla consist of body plans, which scientists have used to classify all life forms, and all significant animal phyla had appeared by the Cambrian Period’s end.[197]  The Cambrian Explosion has been difficult to explain, great controversy and many unanswered questions remain, and it has also been difficult to explain why significant change stopped after the explosion.  Once the basic body plans appeared and biomes were filled, new plans never appeared again.  Why did all fundamental change stop?  The emerging view is the same as for why complex life went all in with aerobic respiration and never changed since: not only could innovation confer great benefits, but once that path was embarked on, further travel along the developmental path made it continually less feasible to backtrack, start over, and take another path, or choose a fundamentally different path.  The history of life’s choices was reflected in organisms in several ways, and the source of that inertia began to be understood when biology and chemistry at the cellular and subcellular levels were investigated, particularly after DNA was sequenced and studied.  The fact that Hox genes have not significantly changed in several hundred million years points to the issue.  Hox genes have not changed because they control key developmental steps in embryonic development.  Not only do Hox genes work, but there are no practical ways to significantly change them, as they lay the animal’s structural foundation.  Hox genes are called regulatory genes, and the nature of gene regulatory networks seems to be why animals have not fundamentally changed since the Cambrian Explosion.[198]

Imagine a family having a custom home built and, after it was built, deciding that they wanted a basement, four extra stories, central gas heating rather than baseboard electric heating, and a swimming pool on the third floor.  It would not be feasible to renovate the home to give it those new features, especially if the family was already living in it.  They would need to build a new house from scratch, with a new foundation, and they would have to find a temporary home during the construction period.  But an animal has to live in its body all the time.  There is no way to redesign and rebuild an animal’s foundation while it lives in its body, and the biological superstructure built on the foundation was designed for that foundation.  A new superstructure would also have to be designed and built on the new foundation.  A six-chambered heart, a second brain, or a third arm, for instance, could not just be invented, put into a human body, and work.  The kinds of changes that are feasible have to adhere to the basic structural and biochemical foundations that the phyla represent.

Once animals arrived on the evolutionary scene and filled most possible niches, new biological foundations, with superstructures built atop them, could not hope to compete for resources that were already being consumed in the food chains.  Developing the original animal body plans took millions of years.  There were many other possible body plans that could have been developed in the early days of animals, and some might have worked wonderfully, but the chosen ones worked well enough for survival and reproduction, and once chosen, there was no going back.  There really could not be, unless all animal life was wiped out and protists, the ancestors of animals, could start over (and eliminating all animal life would, for starters, lead to great plant extinctions, such as among flowering plants).  The biological commitments to those basic modes of existence had their own inertia, and it starts at the root, with the DNA.

The primary unit of taxonomic organization is a clade, which consists of a single ancestor and all of its descendants.  The study of body features has been augmented by recent findings in molecular biology.  Many organisms have had their cladistic classification changed, and many more will in the future.[199]  Many common features among diverse organisms are due to convergent evolution and not ancestry, as organisms independently developed similar solutions to life’s challenges. 

Ediacaran traces show that some animals were mobile before the Cambrian Explosion.  Sponges were probably the first animals, but they were immobile except for their flagella drawing water through them, which carried food and oxygen in and waste out.  The first creatures that we would recognize as animals were probably worms crawling atop ocean sediments.  As lowly as the worm might seem, it would have needed muscles, bilateral symmetry, circulatory and digestive/excretory systems, and a nervous system run by a brain; that distant ancestor probably possessed Earth’s first brain.[200]  Some early worms may have even had rudimentary eyes.  And of possibly eonic importance, worms probably made the first poop.  The evolution of feces-producing animals may have been a seminal event in the organic carbon burial process.  Sponges may have also been largely responsible for initially removing oceanic carbon, which helped increase atmospheric oxygen and helped ventilate the oceans.[201]  Until then, organic carbon from dead life forms would not have settled to the ocean floor, but would have floated in the water column and been recycled by other life forms.  Although the hypothesis is considered only marginally valid today, feces sinking to the ocean floor may have been how life’s burial of carbon began, as well as how sulfate-reducing bacteria in the water column were robbed of their nutrients, which enabled oceanic waters to remain oxygenated.[202]  Ediacaran fauna did not burrow into ocean sediments, but deep burrowing was characteristic of Cambrian sediments.  There is debate today over whether Cambrian burrowing was a consequence or a cause of oxygenating the ocean floor.

Like those small worms that crawled along and burrowed into the newly oxygenated seafloor (or helped oxygenate it), many small animals with shells and mineralized parts appeared in the late Ediacaran, and the misnomer “small shelly fauna” was coined to account for them.  Those small animals also thrived in the Cambrian, and many of them were ancestors of larger Cambrian animals, which provided more intermediate steps in the “explosion.”[203]

The Cambrian Explosion’s iconic animal was the trilobite.  As a child, I read every paleontology text in my elementary school’s library, and I have fond memories of imagining trilobite lives.  Was there love among the trilobites?  Among the protists?  The bacteria?  To a scientist, those questions might be unanswerable and even meaningless, but a mystic might pursue them.  I will not wax too mystically in this essay (I do it elsewhere), but that may well be the big question of life on Earth and an enduring mystery to humanity.  The nature of consciousness and love in the Cambrian, or the lack thereof, as much as it may always be a mystery, does not invalidate life’s arc through the evolutionary process; it only challenges materialism.

Creationist critiques of the evolutionary corpus, which all too often attempt to portray the Book of Genesis as literally true, often use the eye as evidence for their Creationist notions.  The eye is too complex and function-specific to be an evolutionary development, so goes Creationist reasoning.  Even Darwin confessed to the problems that eyes posed for his theory of natural selection, writing that the notion of the eye’s being a product of natural selection seemed “absurd.”[204]  However, the evolutionary path to the fully developed eye appears pretty clear to today’s scientists.[205]  Below is the current conception of the evolutionary path of eyes.  (Source: Wikimedia Commons)

Eyes began with pigments such as chlorophyll that captured photons and initiated electrical impulses through chemical cycles in a new kind of specialized cell: the nerve cell.  Neurons are energy hogs and the “high-tension electric lines” of animals.  Human brain tissue uses ten times the energy that non-organ tissues elsewhere in the body do.  The first eyes probably only detected light, and perhaps even infrared light, so that organisms could remain the proper distance from life-giving/destroying volcanic vents, for instance.  Hydrothermal vent shrimp today have such infrared sensors, which can be likened to naked retinas.[206]  The development of an eye with a lens was not a great evolutionary leap from rudimentary eyes, and a recent calculation shows how eyes with lenses could have developed from scratch in about a half-million years of evolution.[207]  Protozoa may have had the first precursors to eyes.  Once the eye evolved, its benefit was overwhelmingly obvious, and virtually all animals that live where vision would help them have eyes.  Animals that adopted subterranean existences have lost their vision and even their eyes.  It is thought today that the development of eyes was a key innovation in the arms race that would soon characterize the eon of animals, and might have even triggered it.  The Pax6 gene is common to all animals with eyes.  As with those other early life events, that gene supports the widely accepted idea that vision evolved only once.[208]  The purpose of all senses is to detect environmental information, which is in turn processed by the brain.  Even brainless plants can detect light and modify their behavior, such as by turning and growing toward sunlight.

The first brains are considered to have appeared with early mobile animals, which were probably worms, but precursors to nervous systems exist in unicellular eukaryotes.  Experiments performed long ago showed that flatworms can learn.  Animal behavior began with protists, and protozoans have numerous behaviors, from predation and parasitism to defensive activities.  Even materialist philosophers have argued that atoms possess consciousness.[209]  If a worm can learn, it would seem to have consciousness of a sort; perhaps not as complex as mine or yours, but consciousness nonetheless.

The Cambrian Explosion marked the rise of arthropods, perhaps the most successful animal body plan ever, which accounts for more than 80% of all animal species today.  Arthropods such as the trilobite left spectacular fossils and were once thought to dominate the Cambrian Period, but in 1909 the Burgess Shale, one of the world’s most famous fossil beds, was discovered.  The Burgess Shale preserved the soft parts of Cambrian organisms, which is very rare, and interest in it was renewed in the 1960s, as its unique fossils began to be appreciated.  Mining the Burgess Shale for fossils will continue for the foreseeable future, and new and important findings are expected.  Recent finds in China and elsewhere have greatly improved scientific understanding of the Cambrian Explosion.

Grazing and predation far predated the Cambrian Explosion, but predation took on new forms as animals became large.  Trilobites, for instance, rolled up like pill bugs to protect themselves from predators, and trilobites could be predators themselves.  The Burgess Shale produced the first complete fossil of Anomalocaris, a cousin of the bizarre-looking Opabinia.  Anomalocaris probably was the Cambrian Period’s apex predator, and Chinese specimens reached up to two meters in length; it was the leviathan of its time.  It is controversial whether Anomalocaris could have preyed upon armored arthropods or shellfish, as its mouth may have been unsuited for it.  But it might have grabbed trilobites and torn them apart, which may have led to their pill-bug defensive strategy.

An important evolutionary principle is organisms’ developing a feature for one purpose and then using it for other purposes as the opportunity arose.  As complex life evolved on the newly oxygenated seafloors, several immediate survival needs had to be addressed.  To revisit the hierarchy of nutrients that a human needs, for an oxygen-dependent animal, lack of access to oxygen means immediate death.  Obtaining oxygen would have been the salient requirement for early complex life that adopted aerobic respiration as its primary respiration process, which is how nearly all animals today respire.  While animals in low-oxygen environments have adapted to other ways of respiring (or perhaps never relinquished them in the first place), they are all sluggish creatures and would have quickly lost the coming arms race.  Collagen, a critical connective tissue in animals, requires oxygen for its synthesis, and was one of numerous oxygen dependencies that animals quickly adopted during the Cambrian Explosion.[210]

Diffusion works for animals that are no more than a couple of millimeters thick, but for larger animals a respiration system was necessary.  The rise of the arthropods has been an enduring problem for paleobiologists.  Why was the arthropod so successful, particularly in the beginning?  Segmented animals dominated Cambrian seas, and segmentation provides for repeated features.  Segments obviously became important for locomotion but, for arthropods, segmentation appears to have conferred the more important advantage of distributed oxygen absorption.  Each trilobite leg had an attached gill, and leg motion constantly drew fresh oxygenated water over each gill.  Arthropods never developed the kinds of lungs that vertebrates have, or the pump gills of fish and other aquatic animals.  Early arthropods breathed by moving their legs.  Peter Ward’s recent hypothesis is that segments were first used for respiration, to provide a large gill surface area, and using the segments for locomotion came later.  For trilobites, the same functionality that pushed water over gills was also coopted for food intake.[211]  Also, the leg-mounted gill was necessary because of an arthropod’s body armor; oxygen could not be absorbed through tough exoskeletons. 
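
The millimeter-scale limit on diffusion-only respiration can be illustrated with a back-of-the-envelope calculation.  This is only an order-of-magnitude sketch: the t ≈ L²/D scaling is the standard characteristic diffusion time, and a diffusion coefficient of roughly 2×10⁻⁹ m²/s for oxygen in water is a common textbook figure.

```python
# Characteristic time for oxygen to diffuse a distance L: t ~ L^2 / D.
# Order-of-magnitude scaling only; D for O2 in water is roughly 2e-9 m^2/s.

D = 2e-9  # m^2/s, approximate diffusion coefficient of O2 in water

def diffusion_time_hours(L_meters):
    """Rough time for O2 to diffuse a distance L, in hours."""
    return L_meters ** 2 / D / 3600

for L_mm in (0.1, 1, 10):
    t = diffusion_time_hours(L_mm / 1000)
    print(f"{L_mm:5.1f} mm -> ~{t:,.3f} hours")
```

Diffusion across a tenth of a millimeter takes seconds, across a millimeter several minutes, and across a centimeter well over half a day, which is why bodies much thicker than a couple of millimeters needed gills and circulatory systems.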

Every aerobic aquatic animal had to solve the problem of extracting oxygen from the water, and there was diversity in how that was accomplished.  Key Cambrian animals such as sponges and corals had very high surface-area-to-body-volume ratios, which allowed diffusion to provide their oxygen, but such immobile animals had to position themselves where oxygenated water flowed past or through them.  Sponges work like chimneys, designed to passively draw water through them.  The position and structure of reefs facilitated those oxygen-providing dynamics, so corals helped create the conditions that sustained them; the calcified exoskeletons of corals dissuaded predation and built the reefs.

The Cambrian’s global ocean contained far less oxygen than today’s.  Being newly and probably inconsistently oxygenated by oceanic currents was only part of the problem.  The Cambrian oceans were warmer than today’s oceans, perhaps far warmer, such as 40° C and higher for the tropical ones.  Water’s ability to absorb oxygen declines as it gets warmer.  Water heated from 10° C to 40° C will lose about 40% of its ability to absorb oxygen.  The phenomenon of warmer water absorbing less oxygen contributed to many instances of anoxic waters during the eon of complex life, particularly in the warmer, earlier periods.
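
That roughly 40% figure can be checked against standard solubility tables.  The values below are rounded freshwater oxygen-saturation concentrations at 1 atmosphere (seawater holds somewhat less, but the temperature trend is similar).

```python
# Approximate O2 saturation of fresh water at 1 atm, in mg/L, from
# standard solubility tables (rounded values).
o2_saturation = {0: 14.6, 10: 11.3, 20: 9.1, 30: 7.6, 40: 6.4}

loss = 1 - o2_saturation[40] / o2_saturation[10]
print(f"Warming water from 10 C to 40 C cuts dissolved O2 capacity by ~{loss:.0%}")
```

With these rounded table values the loss works out to a little over 40%, consistent with the figure in the text.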

Members of another phylum, Brachiopoda, which superficially resemble clams, were successful in the Cambrian, but if their shells are opened, they look very different inside.  Inside the shell is mostly empty space, with ciliated tentacles that perform a dual function of filtering food and absorbing oxygen.[212]  The cilia pump water through the shell and over the tentacles, which allows such animals to be immobile.

Another winner in the Cambrian Period was the mollusk phylum, which today comprises nearly a quarter of all marine animals.  As with arthropods and corals, mollusks developed predation-deterring armor, and their version was the shell.  Mollusks include the cephalopod, bivalve, and gastropod classes.  Like brachiopods, mollusks developed “power gills,” whereby they actively pumped water across their gills with cilia, and bivalves usually also use their gills to catch food.  One early class of mollusks, which may have been the first mollusks, had the repeated gill structure of the trilobites, but their gills lined the insides of their shells, which supports the idea that shells may have been developed for improving respiration first and predation protection second.[213]  There is even evidence that a gastropod-like animal might have lived on the seashore about 510 mya and might have been the first animal to visit land.

But the most impressive dual-use innovation among mollusks was the cephalopods’.  Their gill pump is quite muscular and jets water over their gills, and that jet also propels the animal.  Jet propulsion is not an energy-efficient means of transportation, but the cephalopod’s ability to pass oxygen-bearing water over its gills is unmatched.  Cephalopods can live in waters too hypoxic for fish to survive.[214]  In the coming Ordovician Period, cephalopods would be the apex predators of marine biomes and would hold that distinction for a long time.  Cephalopods are today’s most intelligent invertebrates; the octopus performs surprising feats of intelligence, and it has the largest brain-to-body-size ratio of all invertebrates.  It is thought that the skills needed for predation stimulated cephalopodan intelligence.  Today, the nautilus is the only survivor of that lineage of Ordovician apex predators.

But the branch of the tree of life that readers might find most interesting led to humans.  Humans are in the chordate phylum, and the last common ancestor that founded Chordata is still a mystery and, understandably, a source of controversy.  Was our ancestor a fish?  A sea squirt?[215]  Peter Ward made the case, as others have for a long time, that it was the sea squirt, also called a tunicate, which in its larval stage resembles a fish.  The nerve cord in most bilaterally symmetric animals runs below the belly, not above it, and a sea squirt that never grew up may have been our direct ancestor.  Adult tunicates are also highly adapted to extracting oxygen from water, although only about 10% of the available oxygen is extracted in tunicate respiration, which may mean that tunicates adapted to low-oxygen conditions early on.  Ward’s respiration hypothesis, which makes the case that adapting to low oxygen conditions was an evolutionary spur for animals, will repeatedly reappear in this essay, as will challenges to it.  Ward’s hypothesis may be proven wrong, or low oxygen may not have had the key influence that he attributes to it, but the idea also has plenty going for it.
The idea that fluctuating oxygen levels impacted animal evolution has been gaining support in recent years, particularly in light of recent reconstructions of oxygen levels during the eon of complex life, called GEOCARBSULF and COPSE, which have yielded broadly similar results, but their variances mean that much more work needs to be performed before oxygen levels can be confidently placed on the geologic timescale, if they ever can be.[216]  Ward’s basic hypothesis is that when oxygen levels were high, ecosystems were diverse and life was an easy proposition; when oxygen levels were low, animals adapted to high oxygen levels went extinct, the survivors adapted to low oxygen with body plan changes, and their adaptations helped them dominate after the extinctions.[217]  The chart used to support his hypothesis has a pretty wide range of potential error, particularly in the early years, and it also tracked atmospheric carbon dioxide levels.  The challenges to the validity of a model based on data with such a wide range of error are understandable.  But some broad trends are unmistakable in this and other models: generally declining carbon dioxide levels, some huge oxygen spikes, and a generally seesawing relationship between oxygen and carbon dioxide levels, which a geochemist would expect.  The high carbon dioxide level during the Cambrian, of at least 4,000 PPM (the "RCO2" in the below graphic is a ratio of the calculated CO2 levels to today's levels), is what scientists think made the times so hot.[218]  (Permission: Peter Ward, June 2014)
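
As a check on scale, RCO2 is simply the modeled ancient CO2 level divided by a present-day baseline.  The sketch below assumes a preindustrial baseline in the 280–300 ppm range (the exact baseline varies by model version, so the result is approximate).

```python
# RCO2 = ancient CO2 / present (preindustrial) CO2 -- a dimensionless ratio.
# The 280-300 ppm baseline is an assumption; model versions differ.
cambrian_ppm = 4000     # "at least 4,000 PPM" per the text

for baseline_ppm in (280, 300):
    rco2 = cambrian_ppm / baseline_ppm
    print(f"baseline {baseline_ppm} ppm -> RCO2 ~ {rco2:.0f}")
```

Either way, a Cambrian level of at least 4,000 ppm corresponds to an RCO2 of roughly 13 to 14, which is the scale of the early values on Ward’s chart.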

As will be explored in this essay, all of the first four major mass extinctions of complex marine life have anoxia as a suspected contributing cause, so oxygen is a major area of interest among extinction specialists.  Whether oxygen levels were also significant contributing causes of evolutionary innovation is another area of interest today.  Again, the energy-generating superiority of aerobic respiration led to food chains.  Even if the first animals did not respire aerobically, they adapted to aerobic respiration early on and then became dependent on it.  There would be no going back for animals; all except those few adapted to hypoxic and anoxic environments went “all in” with aerobic respiration.

An irony of fossilization is that conditions hostile to life usually left the best-preserved fossils, because nothing disturbed the sediments, which were anoxic and often sulfidic.  In the sea sediments that mark the geologic periods, white limestone and black shale are typical layers.  Limestone means oxygenated oceans, and black shales and mudstones mean anoxic conditions.  The black color means reduced carbon; the ecosystems could not recycle that carbon, and it was instead preserved in the sediments, which have been the primary source of the oil and gas burned in today’s industrialized world.

Supercontinents tend to result in Canfield Oceans, and land near the poles could help initiate an ice age.  For the coming geologic periods, the configurations of the continents were critical variables for determining the ecosystems that existed, and whether there were anoxic oceans, greenhouse conditions, ice ages, extinction events, or adaptive radiations.  Helpful animations exist to make the configurations easier to visualize.

The Cambrian Explosion had several phases, with explosions of life and mass extinctions, and a general rise in atmospheric oxygen accompanied it.  Anoxic conditions coincided with extinctions.  Prokaryotes would not have been that affected by what complex life was doing (although anaerobes were generally driven underground and into the seafloor), but the rise of complex life led to new ecosystems.  Before the rise of animals, the seafloor was smooth and “stiff,” but burrowing animals had a profound impact on seafloor ecosystems and may have played a prominent role in creating those ecosystems themselves.  Corals created new ecosystems, as life terraformed Earth.

A recent study shows a more dramatic oxygen rise and fall during the Cambrian than the GEOCARBSULF model does, with levels seesawing and doubling to around 30% in the Late Cambrian.[219]  Those varying levels coincide with evolutionary radiations and extinctions, which raises the question of whether they were triggering causes; many of today’s specialists suspect that they played key causative roles.

Around 530 mya, the first brachiopods, reef-building animals, and fish appear in the fossil record, and trilobites first appear in the fossil record about 521 mya, only a few million years before a mass extinction about 517 mya, which wiped out those early reef-building organisms and nearly all of the small shelly fauna.  As happened with Ediacaran fauna, those early extinctions extinguished major portions of the ecosystems.  With the rise of DNA studies, scientists are trying to recover the tree of life’s lost portions, looking for “ghost ancestors,” which did not leave fossils that have been discovered.[220]  This is a new area of study, with current findings quite speculative, but we can be confident that many clades were born and went extinct, all the way up to the phylum level and maybe even higher, particularly in the Ediacaran and Cambrian periods, without leaving a trace in today’s known fossil record.  Specialists in these areas are always calling for more fossil-hunting, analysis with new tools, and the like.  At about 502 mya, another extinction event wiped out about 40% of marine genera, probably triggered by anoxia.

The Middle Cambrian years were the Golden Age of Trilobites, when they reached peak dominance.  It is thought that they filled vacant niches in the wake of those early mass extinctions.[221]  The early corals went extinct, and the rise of demosponges followed (those early corals are currently classified as sponges, although the issue is controversial[222]).  Sponge reefs dominated in later times, and sponges, perhaps the most successful early animals, still thrive today.  Below is an artist's conception of the Cambrian seafloor.  (Source: Wikimedia Commons)

There is evidence that rising and falling sea levels, probably the result of a periodically growing and shrinking ice cap at the South Pole (as the continent of Gondwana was there), contributed to the radiations and extinctions that marked the Cambrian.  Trilobites went through several boom-and-bust phases in the Cambrian.  Many extinctions were more local than global, but at the end of the Cambrian, most trilobites went extinct, and they would never dominate again.  They survived until the greatest mass extinction of all, the Permian extinction, and then disappeared from Earth, at least until the rise of paleontology and reconstructions that fascinate children and adults.  The leading hypothesis is that rising seas caused anoxia and led to the end-Cambrian extinctions at about 485 mya.[223]  That this may have coincided with a rise in atmospheric oxygen is not necessarily contradictory; all the oxygen in the world will be useless to deep-ocean and seafloor life if there are no mechanisms, primarily currents, to introduce atmospheric gases into the oceans.  Surface life can thrive in high-oxygen conditions while the seafloor dies from lack of oxygen, especially when the surface rises farther above the seafloor.  Oxygenation and anoxia during the Cambrian may well have been sporadic and regional, and research to unravel the dynamics is ongoing.[224]  If the evidence were better, the Cambrian extinction could rank among the Big Five, but we may never know.[225]  The older the fossils, the less likely they are to survive subsequent geological processes.  Cambrian fossil beds discovered so far are uncharacteristically rich, and those of the next period, the Ordovician, are relatively impoverished.  It is suspected that unique geological and fossil-preservation processes led to the Cambrian’s gold mine of fossils.

In summary, the deadly waltz of predator and prey characterized the Cambrian, and complex ecosystems were born.  Again, from a biochemical and morphological perspective, all events since the Cambrian have been relatively insignificant, but are still fascinating and led to the bipedal ape writing these words.

It can be helpful at this juncture to grasp the cumulative impact of life’s forming by harnessing energy gradients, inventing enzymes, inventing photosynthesis, inventing the distributed energy-generation centers that made complex cells possible, and inventing aerobic respiration.  Pound for pound, the complex organisms that began to dominate Earth’s ecosphere during the Cambrian Period consumed energy about 100,000 times as fast as the Sun produced it.[226]  Life on Earth is an incredibly energy-intensive phenomenon, powered by sunlight.  In the end, only so much sunlight reaches Earth, and it has always been life’s primary limiting variable.  Photosynthesis became more efficient, aerobic respiration was an order-of-magnitude leap in energy efficiency, the oxygenation of the atmosphere and oceans allowed animals to colonize land and ocean sediments and even fly, and life’s colonization of land allowed for a great leap in biomass.  Life could exploit new niches and even help create them, but the key innovations and pioneering were achieved long ago.  If humanity attains the FE epoch, new niches will arise, even of the artificial off-planet variety, but all other creatures living on Earth have constraints, primarily energy constraints, which produce very real limits.  Life on Earth has largely been a zero-sum game for several hundred million years, but the Cambrian Explosion was one of those halcyon times when animal life had its greatest expansion, not built on the bones of a mass extinction so much as blazing new trails.
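
The pound-for-pound comparison is easy to verify in rough terms.  The Sun’s luminosity (about 3.8×10²⁶ W) and mass (about 2.0×10³⁰ kg) are well established; the metabolic figures below are round illustrative numbers, not measurements of Cambrian animals.

```python
# The Sun's mass-specific power output versus animal metabolism.
SUN_LUMINOSITY = 3.8e26   # watts
SUN_MASS = 2.0e30         # kilograms

sun_w_per_kg = SUN_LUMINOSITY / SUN_MASS   # about 2e-4 W/kg
print(f"Sun: ~{sun_w_per_kg:.1e} W/kg")

# A resting human (~100 W over ~70 kg) manages ~1.4 W/kg, already
# thousands of times the Sun's specific power.
human_w_per_kg = 100 / 70
print(f"Human vs. Sun: ~{human_w_per_kg / sun_w_per_kg:,.0f}x")

# The essay's ~100,000x figure implies roughly 20 W/kg, a rate seen in
# small, highly active animals and hard-working tissue.
print(f"20 W/kg vs. Sun: ~{20 / sun_w_per_kg:,.0f}x")
```

The Sun wins on total output, but kilogram for kilogram it is a remarkably feeble energy producer, which is why even modest animals outpace it by orders of magnitude.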

The twin ideas of efficiency and resilience are important.  Efficiency is about getting more for less, particularly energy.  Although aerobic respiration’s energy efficiency allowed food chains to develop, food chains create interactions and dependencies, and the entire structure can lose resilience when compared to simpler systems.  Remove one part of the food chain and the entire ecosystem can collapse, and it can be any part of the chain, from top to bottom.  Making systems more efficient, as the last bits of energy are wrung from them, reduces their resilience to the real world’s surprises.  That dynamic is probably a key contributing factor in mass extinctions during the eon of complex life.  Modern ecosystem studies are making the connections clear and are being applied to the dynamics of human civilizations; C. S. Holling’s work has been seminal in this regard.[227]  Complex ecosystems pass through adaptive cycles of exploitation, conservation, release, and reorganization, and three dimensions of interaction are involved: potential, connectedness, and resilience.[228]  In general, simple systems are more stable than complex ones, which is another reason why any mass extinctions of prokaryotes, if there were any, would have been far less cataclysmic than those of complex life.

All species live within their niches, which are always primarily energy niches, in which an organism can obtain enough energy, and preserve it for long enough, to produce viable offspring.  There are usually energy tradeoffs; efficiency could be sacrificed for rate of ingestion, so that input increased enough to make the increased cost of obtaining it worthwhile, as with hindgut fermenters.  The primary measure of an organism’s success is its energy surplus, which is related to resilience.  As an example, today a trout can live in a fast-moving current where food quickly arrives, which is efficient from an input perspective, but the energy spent swimming to maintain a position in the current reduces the net energy surplus.  A slower stream will provide less food per unit of time, but it also takes less energy to live there.  In trout studies, the dominant trout lives where the optimal energy tradeoff exists, which leads to the greatest energy surplus.  Less dominant trout are pushed into the faster water, and the least competitive trout are pushed into calm water, where they slowly starve.  No species will last for long if it does not have a high enough energy surplus to survive the vagaries of existence.  The energy surplus issue has not been emphasized in biology during the past century, as the “fitness” of a species has been emphasized instead, but energy surplus is the key variable for understanding species fitness.[229]
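
The trout’s tradeoff can be sketched as a toy optimization.  The functional forms and constants below are invented for illustration (they are not fitted to trout data): food delivery rises with current speed but saturates, while swimming cost climbs steeply, so the surplus peaks at an intermediate speed.

```python
# Toy model of the trout's energy tradeoff (illustrative only; the
# functions and constants are assumptions, not measured values).

def intake(v):
    """Food energy per unit time: rises with current speed, then saturates."""
    return 10 * v / (1 + v)

def cost(v):
    """Swimming energy per unit time: drag power rises steeply with speed."""
    return 0.5 * v ** 3

# Grid search for the current speed that maximizes the energy surplus.
speeds = [i / 100 for i in range(1, 301)]   # 0.01 .. 3.00, arbitrary units
best_v = max(speeds, key=lambda v: intake(v) - cost(v))
surplus = intake(best_v) - cost(best_v)
print(f"optimal current speed ~ {best_v:.2f}, surplus ~ {surplus:.2f}")
```

In this sketch the surplus is small in calm water (little food), small in fast water (high swimming cost), and greatest at an intermediate speed, which is where the dominant trout holds its position.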

Also, just as no fundamentally new body plans appeared after the Cambrian Explosion, modern ecosystems seem constrained by body size.  Body sizes have similar “slots,” and body sizes outside of those slots are relatively rare.  However, successful innovation usually happens at the fringes.[230]  The fringes are where survival is marginal and innovations carry a high risk/reward ratio.  Most innovations fail, but a successful one can become universally dominant, such as those biological innovations that are considered to have happened only once.  There have been countless failed biological innovations during life’s history on Earth, many of which might have seemed brilliant but did not survive the rigors of living.

The rise of life was based on energy and information, and the ability to manipulate them.  Just as the foundation of complex life has remained basically unchanged since the Cambrian Explosion, energy systems form the foundations of all ecosystems and civilizations.  While the superstructure can change and can seem radical at times, the foundation dictates what kind of superstructure can exist.  A huge superstructure built on a small foundation, if it can be built at all, will not be very resilient (the first earthquake or storm levels it) and will not last long.  Today, industrialized civilization is burning through its foundational energy sources about a million times as fast as they were created, and at the current trajectory it will largely deplete all of them in this century.  On the geologic timescale, the rise and fall of humanity may happen in the blink of an eye and create more ecosystem devastation than the asteroid that wiped out the dinosaurs; it would happen faster than all previous mass extinctions other than that asteroid’s effects.  Arthropods may then come to rule the world once again.
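
The “million times as fast” claim is an order-of-magnitude statement, and the arithmetic behind it is simple.  The round figures below are hedged assumptions: fossil fuels accumulated over very roughly hundreds of millions of years of burial, and industrial civilization is consuming most of them within roughly a few centuries.

```python
# Rough check of "a million times as fast as they were created".
# Both numbers are hedged round figures; orders of magnitude only.
formation_years = 300e6    # fossil fuels accumulated over ~hundreds of Myr
consumption_years = 300    # largely consumed within ~centuries

ratio = formation_years / consumption_years
print(f"burn rate / formation rate ~ {ratio:,.0f}x")
```

Shifting either figure by a factor of a few does not change the conclusion: the burn rate exceeds the formation rate by about six orders of magnitude.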


Complex Life Colonizes Land

World map in late Devonian (c. 370 mya) (Source: Wikimedia Commons) (map with names is here)


Chapter summary:

With the extinction that ended the Cambrian Period, animal life’s greatest period of innovation was finished, but the next geological period, the Ordovician (c. 485 to 443 mya), still had dramatic changes.  The Ordovician would not see any new phyla of note, but it was a time of great diversification, as new niches were created and inhabited, and marine life reached modern levels of abundance and diversity.  Food chains became complex enough to be called food webs.  More so than the Cambrian Explosion, the Ordovician “explosion” was an adaptive radiation.[231]

The continental configuration when the Ordovician began was like the Cambrian’s, with shallow, hot tropical seas.  The Paleo-Tethys Ocean began forming in the Ordovician.  The first reefs that would impress modern observers were formed in the Ordovician.  Different animals than the Cambrian reef builders constructed the corals (1, 2, 3), but there were no schools of fish swimming around them, as the Ordovician predated the rise of fish.  Fish existed (1, 2, 3), but they were armored, jawless, and lived on the seafloor.  The first sharks may have appeared in the Ordovician, but because they had cartilaginous skeletons, the fossil record is equivocal.  Some fish had scales, and an eel-like fish might have even had the first teeth.  Teeth and claws were early energy technologies; energy applied by muscles could be concentrated in hard points or plates that could crush or penetrate other organisms or manipulate the environment.

Planktonic animals became prevalent and were critical aspects of the growing food chains.  Trilobites and brachiopods flourished, but the Ordovician’s most spectacular development might have been the rise of the mollusk.  Bivalves exploded in number and variety, and nautiloid cephalopods became the apex predators of Ordovician seas, and some were gigantic.  One species reached more than three meters long and another reached six meters or more.  The largest trilobite yet found lived in the late Ordovician.  Below is an artist's conception of the Ordovician seafloor.  (Source: Wikimedia Commons)

Gigantism is a controversial subject.  Islands often produce giant and dwarf species, which results from energy dynamics; in general, on islands, large species tend to get smaller and small species tend to get larger.  A landmark study of polar gigantism among modern seafloor crustaceans concluded that the oxygen level was the key variable.[232]  Recall that colder water can absorb more oxygen.  Size is a key “weapon” in evolution’s arms race.  The bigger the prey, the better it could survive predation, and the bigger the predator, the more likely it would kill a meal.  Since the 1930s, there have been continual controversies over size and metabolism, energy efficiency, complexity, structural issues such as skeleton size and strength, and so on.[233]  In its final cost/benefit analysis, complex life decided that bigger was better, and much larger animals lived in the Ordovician than in the Cambrian.  Bigger meant more complex, and more complexity meant more parts, usually more moving parts, and those required energy to run.  Whether increasing size was due to more oxygen availability, more food availability, greater metabolic efficiency, reduced risk of predation, or increased predatory success, it was always a cost/benefit analysis, and the primary parameter was energy: how to get it, how to preserve it, and how to use it.[234]  The “analysis” was probably never a conscious one, but the result of the “analysis” was what survived and what did not.

Peter Ward suggested that the superior breathing system of nautiloids led to their dominance.[235]  Nautiloids do not appear in the fossil record until the Cambrian’s end.  Only one family of nautiloids survived the end-Cambrian extinction, and it quickly diversified in the Ordovician to produce the dominant predators, which replaced arthropods atop the food chain.  During the Ordovician, nautiloids developed a sturdy build and began spending time in deep waters, where their superior respiration system enabled them to inhabit environments that would-be competitors could not exploit.

Although the Ordovician’s shallow seas were fascinating abodes of biological innovation, of perhaps more interest to humans was the first colonization of our future home: land.  Land plants probably evolved from green algae.  Molecular clock studies suggest that plants first appeared on land more than 600 mya, but the first fossil evidence of land plants, which would have been moss-like, dates to about 470 mya, in the mid-Ordovician, and land plants seem to have preceded land animals by about 40 million years.[236]

The Ordovician was characterized by diversification into new niches, even the creation of new ones, but those halcyon times came to a harsh end in the first of the Big Five mass extinctions: the Ordovician-Silurian mass extinction.  The event transpired about 443 mya and was really two extinction events that combined to comprise the second-greatest extinction event ever for marine animals.  About 85% of all species, nearly 60% of all genera, and around 25% of all families went extinct.[237]  The ultimate cause was probably the drifting of Gondwana over the South Pole, which triggered a short, severe ice age.  As our current ice age demonstrates, ice sheets can advance and retreat in cycles, and they appear to have done so during the Ordovician-Silurian mass extinction.  There is evidence that the ice age was triggered by the volcanic event that created the Appalachian Mountains.  Newly exposed rock from volcanic mountain-building is a carbon sink, as the weathering of fresh basalt (volcanoes spew basalt, which weathers faster than other silicate rocks) sequesters atmospheric carbon dioxide.  The combination of the end of Appalachian volcanism and the subsequent sequestering of atmospheric carbon dioxide may have triggered an ice age.  The ice age waxed and waned for about 40 million years, but some events were calamitous.

Two primary events drove the first phase of the Ordovician-Silurian mass extinction: the ice age caused the sea level to drop drastically, and the oceans became colder.  When sea levels fell at least 50 meters, the cooling shallow seas receded from continental shelves and eliminated entire biomes.[238]  Many millions of years of “easy living” in warm, shallow seas were abruptly halted.  Several groups were ravaged, beginning with the plankton that formed the food chain’s base.  About 50% of brachiopod and trilobite genera went extinct in the first phase, and cool-water species filled the newly vacant niches.  Bivalves, which largely lived in seashore communities, were scourged when the seas retreated and lost more than half of their genera.  Nautiloids were also hit hard, and about 70% of reef and coral genera went extinct.  The retreating seas somehow triggered the extinctions; whether the mechanism was simple exposure to the air, or changing and cooling currents, nutrient dispersal patterns, ocean chemistry, and other dynamics, is still debated, and those extinction events remain subjects of intensive research in the early 21st century.

After as little as a half-million years of bedraggled survivors adapting to ice age seas, the ice sheets retreated and the oceans rose.  The thermohaline circulation of the time may have also changed, and upwelling, anoxia, and other dramatic chemistry and nutrient changes happened.  Those dynamics are suspected to be responsible for the second wave of extinctions.  There also seem to have been hydrogen sulfide events.[239]  Atmospheric oxygen levels may have fallen from around 20% to 15% during the Ordovician, which would have contributed to the mass death.  Seafloor anoxia seems to have been particularly lethal to continental-shelf biomes, possibly all the way to shore.  It took the ecosystems millions of years to recover from the Ordovician-Silurian mass extinction, but basic ecosystem functioning was not significantly altered in the aftermath, which is why a mass extinction during the Carboniferous has been proposed as a more significant extinction event.  The first major oil deposits of the Middle East were laid down by the anoxic events that ended the Ordovician.  Most oil deposits were formed in the era of dinosaurs, and the processes of oil deposit formation were similar; they were related to oceanic currents.  When currents came to shore via the bottom and the prevailing winds blew the top waters offshore, a nutrient trap formed and anoxic sediments could accumulate.  When the winds blew onshore and the water left via the bottom, the waters became clear, in what are called nutrient deserts.  The oscillation between nutrient traps and nutrient deserts can be seen in oil deposit sediments.[240]  In the mid-20th century, Soviet scientists revived an old hypothesis that oil was not formed from organic marine sediments, a variation of which was also championed by Thomas Gold, but improving tools and investigation invalidated those hypotheses; no petroleum geologists today seriously consider the abiogenic origin of hydrocarbons.  Oil sediment formation events seem related to mantle and crust processes that created high sea levels and anoxic events, and the last great one was in the Oligocene, which formed more than 10% of the world's oil deposits.[241]

The Silurian Period, which began 443 mya, is short for the geologic time scale, lasting “only” 24 million years and ending about 419 mya.  The Silurian was another relatively hot period with shallow tropical seas, but Gondwana still covered the South Pole.  The ice caps eventually shrank, which played havoc with the sea level and caused minor extinction events (1, 2, 3), the last of which ended the Silurian and also created more Middle East oil deposits.  Reefs made a big comeback, extending as far as 50 degrees north latitude (farther north than where I live in Seattle).  According to the GEOCARBSULF model, oxygen levels rose greatly during the Silurian, rebounding from a low in the mid-Ordovician, and may have reached 25% by the early Devonian, which followed the Silurian.  Coincident with rising oxygen levels, more giants appeared.  Scorpion-like eurypterids were the largest arthropods ever, and the largest specimen, at nearly three meters, lived near the Devonian’s oxygen high point.  The first land-dwelling animals - spiders, centipedes, and scorpions - came ashore during the Silurian, between 430 mya and 420 mya.  The first insects appeared about that time, and all of the first insects flew.[242]  As of 2014, Donald Canfield believed that the gigantism among arthropods and other oxygen effects were due to Earth's atmosphere beginning to reach modern oxygen levels for the first time in the eon of complex life, not to its reaching higher-than-modern levels.[243]  I expect the oxygen controversy to outlive me.

Beetles first appeared in the fossil record in the late Carboniferous.  Arthropods became dominant predators once again, although cephalopods patrolled the reefs as apex predators.  Brachiopods reached their greatest size ever at that time, although the succeeding Devonian Period has been called the Golden Age of Brachiopods.[244]  As oxygen levels rose, trilobites lost segments and, hence, gill surface area, which may have been an ultimately extinctive gamble.  When the Devonian extinction happened during anoxic events, trilobites steeply declined and thereafter only eked out an existence until the Permian extinction finally eliminated them from the fossil record.  Fish began developing jaws in the Silurian, which was a great evolutionary leap and arguably the most important innovation in vertebrate history.  Jaws, tentacles, claws… prehensile features were advantageous, as animals could more effectively manipulate their environments and acquire energy.  On land, the colonization proceeded: mossy “forests” abounded, and the first vascular plants made their appearance, although they were generally less than a hand-width tall when the Silurian ended, and nothing reached even waist-high.

Oxygen levels appeared to keep rising into the early Devonian (c. 419 mya to 359 mya) and then declined over most of the period.  The Devonian marked the dramatic rise of land plants and fish in what is called the Golden Age of Fishes, and that period saw the first vertebrates that enjoyed a terrestrial existence.  Armored fish supplanted arthropods and cephalopods during the Devonian as the new apex predators and weighed up to several tons.  Sharks also began their rise.  The Devonian has been called the Golden Age of Armored Fish.[245]  Rising oxygen levels have been proposed as the cause of the spread of plants and large predatory fish, although a school of thought challenges high-oxygen explanations for many evolutionary events; Nick Butterfield is a prominent challenger.[246]

Bony fish (both ray-finned and lobe-finned) first appeared in the late Silurian and abounded in the Devonian.  All bony fish could breathe air in the Devonian, which provided more oxygenated blood to their hearts.[247]  Ray-finned fish largely lost that ability, and their lungs became swim bladders, which aided buoyancy as gas-filled nautiloid shells did.  Ray-finned fish can respire while stationary (unlike cartilaginous fish, most famously sharks) and are the high-performance swimmers of aquatic environments; they comprise about 99% of all fish species today, although they were not dominant during the Devonian.  All fish devote a significant portion of their metabolism to maintaining their water concentrations.  In salt water, fish have to push out salt, and in fresh water, they have to pull in water, using, on average, about 5% of their resting metabolism to do so.  Brine shrimp use about a third of their metabolic energy to manage their water concentration.

Today’s lungfish are living fossils that first appeared at the Devonian’s beginning, which demonstrates that the ability to breathe air never went completely out of fashion.  That was fortuitous, as one class of lobe-finned fish developed limbs and became our ancestor about 395 mya.  The first amphibians appeared about 365 mya.  In the late Devonian, lobe-finned and armored fish were in their heyday.  The first internally fertilized fish appeared in the Devonian, which produced the first mother to give birth.[248]  A lightweight descendant of nautiloids, the ammonoids, appeared in the Devonian and subsequently enjoyed more than 300 million years of existence.  They often played a prominent role, until they were finally rendered extinct in the Cretaceous extinction.  Nautiloids retreated to deep-water ecosystem margins and still exploit that niche today.

Land colonization was perhaps the Devonian’s most interesting event.  The adaptations invented by aquatic life to survive in terrestrial environments were many and varied.  Most importantly, the organism would no longer be surrounded by water and had to manage desiccation.  Nutrient acquisition and reproductive practices would have to change, and the protection that water provided from ultraviolet light was gone; plants and animals devised methods to protect themselves from the Sun’s radiation.  Also, moving on land and in the air became major bioengineering projects for animals.  Breathing air instead of water presented challenges.  The pioneers who left water led both aquatic and terrestrial existences.  Amphibians had both lungs and gills, and arthropods, whose exoskeletons readily solved the desiccation and structural support problems, evolved book lungs to replace their gills, which were probably book gills.

All such developments had to happen in water, first, for a successful move to land.[249]  The evidence seems to support the idea that life first began to colonize land via freshwater ecosystems, which provided a friendlier environment than seashores do.  The first arthropods ashore were largely detritivores, eating dead plant matter, and what followed added live plants and early detritivores to their diets.[250]  The land-based ecosystems that plants and arthropods created became nutrient sources that benefited shoreline and surface communities, but the vertebrate move to land was not initiated by the winners of aquatic life.  To successful aquatic animals, the shore was not a new opportunity to exploit but a hazardous boundary of existence best avoided.  Tetrapodomorphs probably made the vertebrate transition to land as marginal animals eking out a frontier existence.[251]  The fins that became limbs originally developed for better swimming, and further muscular-skeletal changes enabled them to exploit opportunities on land.  Two key reasons for the migration onto land may have been for basking (absorbing energy) and enhanced survival of young from predation (preserving energy).[252]  The five digits common to limbed vertebrates were set around this time; early tetrapodomorphs had six, seven, and eight digits, and the digital losses were probably related to using feet on land.[253]

But plants had to migrate before animals did, as they formed the terrestrial food chain’s base.  Along with desiccation issues, plants needed structures to raise them above the ground, roots, a circulatory system, and new means of reproduction.  Large temperature swings between day and night also accompanied life on land.  Plants developed cuticles to conserve moisture and a circulatory system that piped water from the roots up into the plant and transported nutrients where they were needed; photosynthesis itself needed water to function.  Vascular plants pumped water through their tissues in tubes, by evaporating water from their surface tissues and pulling new water up behind it via the “chain” of water’s hydrogen bonds.  The last common ancestor of plants and animals reproduced sexually, and sexual reproduction is how nearly all eukaryotes reproduce today, although many ways exist to reproduce asexually.  The first vascular plants are considered to have attained their height in order to spread their spores.[254]  The Rhynie chert in Scotland is the most famous fossil bed that records complex life’s early colonization of land.

The early Devonian was a time of ground-hugging mosses and a strange, lichen-like plant that towered up to eight meters tall.  The oldest vascular plant division (“division” in plants is equivalent to “phylum” in animals) still existing first appeared about 410 mya, and its representatives today are mostly clubmosses.  In the late Devonian, horsetails and ferns appeared, and both still exist.  Seed plants also developed in the late Devonian, which enabled plants to quickly spread to higher and drier elevations and cover the landmasses, as seed plants did not need a water medium to reproduce as spore-based systems did.  In spore systems, which are partly asexual but have a sexual stage, a water film was required for the sperm to swim to the ovum.  The first trees appeared about 385 mya (1, 2), could be ten meters tall, and formed vast forests, but they reproduced with spores and so needed moist environments.  The first rainforests appeared in the Devonian and reached their apogee in the Carboniferous.  Those rainforests produced Earth’s first thick coal beds.  The Devonian was the Cambrian Explosion for plants, and it enabled animals to colonize land.  The plants that best succeeded in the Devonian were those with the highest energy efficiencies, which involved size, stability, photosynthesis, internal transport, and reproduction.[255]  Plants had different dynamics of extinction than animals did, as plants are more vulnerable to climate change and extinction via competition, but are less vulnerable to mass extinction events than animals are.[256]

One of the most important plant innovations was lignin, a polymer whose original purpose appears to have been creating tubes for water transport and which was later also used to provide structural support so that trees could grow tall and strong.  Without lignin, there would not have been any true forests and probably not much in the way of complex terrestrial ecosystems.  Lignin was also responsible for forming the coal beds that powered the early Industrial Revolution, but that coal-bed formation would not happen in earnest until the next geologic period, the aptly named Carboniferous.  It took more than a hundred million years for organisms to appear that could digest lignin.  A class of fungus gained the ability to digest lignin about 290 mya, and by that time, most of what became Earth’s coal deposits had already been buried in sediments.[257]  As with other seminal developments in life’s history, the ability to digest lignin seems to have evolved only once.  The enzyme that fungi use to digest lignin has also been found in some bacteria, but fungi are the primary lignin-digesters on Earth.

From a biomass perspective, the Devonian’s primary change was the proliferation of land plants.  Below is an artist's conception of a Devonian forest.  (Source: Wikimedia Commons)

Land plants comprise about half of Earth’s biomass today and prokaryotes provide the other half.  Terrestrial biomass is 500 times greater than marine biomass, and terrestrial plants have about a thousand times the biomass of terrestrial animals, so animals constitute less than 0.1% of Earth’s biomass.  The ecologies of marine and terrestrial environments are radically different.  Virtually all primary producers in marine environments are completely eaten and comprise the food chain’s foundation, while less than 20% of land plant biomass is eaten.
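The ratios in the paragraph above imply its final figure, and a few lines of arithmetic make the cross-check explicit.  This is only an illustrative sketch: the ratios are the essay’s approximations, and normalizing Earth’s total biomass to 1.0 is an assumption made for the example.

```python
# Cross-check of the essay's approximate biomass ratios (illustrative only).
# Assumed figures from the text: land plants are about half of Earth's total
# biomass, and terrestrial plants outweigh terrestrial animals about 1000:1.

total_biomass = 1.0                # normalize Earth's biomass to 1
land_plants = 0.5 * total_biomass  # plants: about half of the total
land_animals = land_plants / 1000  # plants outweigh land animals ~1000:1

animal_share = land_animals / total_biomass
print(f"Terrestrial animals: {animal_share:.2%} of Earth's biomass")
# 0.05% is consistent with the text's "less than 0.1%" claim.
assert animal_share < 0.001
```

On these numbers, terrestrial animals come out at about 0.05% of Earth’s biomass, comfortably under the text’s 0.1% ceiling.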

Creating the huge biomass of land-based ecosystems meant that carbon was removed from the atmosphere.  Also, root systems were a new phenomenon, with dramatic environmental impact.  Before the rise of vascular plants, rain on the continents ran to the global ocean in sheets and braided rivers.  Every rainfall ran toward the oceans in a flash flood, as happens in deserts today.  Plant roots stabilized riverbanks and formed the rivers that we are familiar with today.  Also, roots broke up rock, accelerated weathering, and created soils.  Plants break down rock five times as fast as other geophysical processes do.[258]  The forests and soils created a huge “sponge” that absorbed precipitation, which the resultant ecosystems depended on.  Plants’ colonization of land stimulated vast nutrient runoffs into the ocean, which in turn stimulated ocean life.  The reefs of the Devonian were the greatest in Earth’s history: they reached about ten times the area of today’s reefs, with a total area of about five million square kilometers (two million square miles), roughly half the size of Europe.[259]

Plants and trees created a “boundary layer” of relatively calm air near the ground that became the primary abode of most land animals.  Also, forests created a positive feedback in which moisture was recycled within the forests, keeping them moister than purely ocean-sourced precipitation would.  Today, somewhere between 35% and 50% or more of the rain that falls in the Amazon rainforest is water recycled via transpiration.[260]  Transpiration also cools plants via the latent heat of vaporization, and the resultant cloud cover cools them further.[261]  Transpiration, by the way that it sucks water from the soils, maintains a negative pressure on soils and keeps them aerated.  Waterlogged soils cannot support the vast ecosystems of forest soils, so trees are needed to maintain the soil dynamics that support the base of the forest ecosystem.  Rainforest processes thus create positive feedbacks that maintain the rainforest.  Conversely, the rampant deforestation of Earth’s rainforests in the past century has created negative feedbacks that further destroy the rainforests.

Forests were a radical innovation, unlike anything seen before or since.  Trees were Earth’s first and last truly gigantic organisms, and the largest trees dwarfed the largest animals.  Why did trees grow so large?  It seems to be because they could.  Land life gave plants opportunities that aquatic life could not provide, and plants “leapt” at the chance.  Lignin, first developed for vascular transport, became the equivalent of steel girders in skyscrapers.  In the final analysis, trees grew tall to give their foliage the most sunlight and to use wind and height to spread their seeds, and in the future that height would help protect the foliage from ground-based animal browsers.  The height limit of Earth’s trees is an energy issue: the ability to pump water to the treetops.[262]  Arid climates prevent trees from growing tall or even growing at all.  Energy availability limits leaf size, too.[263]  From an ecosystem’s perspective, the great biomass of forests was primarily a huge store of energy; trees allowed for prodigious energy storage per square meter of land.  That stored energy ultimately became a vast resource for the forest ecosystem, as it eventually became food for other life forms and the basis for soils, which in turn became sponges that soaked up precipitation and recycled it via transpiration.  Trees created the entire ecosystem that depended on them.

Energy enters ecosystems primarily via the capture of photon energy by photosynthesis.  Only so much sunlight reaches Earth and photosynthesis can only capture so much.  The energy “budget” available for plants has constraints, and the question is always what to do with it.  An organism can break bonds between atoms and release energy or bind atoms together to build biological structures, which uses energy (exothermic reactions release energy, while endothermic reactions absorb energy).  Photosynthesis is endothermic, and in biological systems, endothermic reactions are also called anabolic, as they invest energy to build molecules, which is how organisms grow.  Catabolic reactions break down molecules in exothermic reactions that release energy for use.  Plants faced the same decisions that societies face today: consumption or investment?  Only with an energy surplus can there be investments, such as for infrastructure.  Plants invested in trunk-and-branch infrastructure to place their energy-collecting and seed-spreading equipment in the best possible position.  Plants race for the sky, and trees represent the biggest energy investment of any type of organism.  On average, today’s plants use a little more than half of the energy that they capture via photosynthesis (called gross primary production) for respiration.  Growing forests use most of that gross primary production to grow (called net primary production), and when the structural limits have been reached, most energy is consumed via respiration to run life processes within the infrastructure.[264]  Animal development is similar.  When humans began building cities and urban infrastructures, the basic process was the same.
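The gross/net primary production budget described above can be sketched numerically.  The “a little more than half” respiration share comes from the text; the absolute energy figure is a made-up unit, used only for illustration.

```python
# Illustrative sketch of a plant's energy budget (hypothetical numbers).
# GPP = energy captured by photosynthesis; respiration consumes a little
# more than half of it (per the text); NPP = what remains for growth.

gpp = 1000.0               # gross primary production, arbitrary energy units
respiration_share = 0.55   # "a little more than half" goes to respiration
respiration = gpp * respiration_share
npp = gpp - respiration    # net primary production, available for growth

print(f"GPP: {gpp:.0f}  respiration: {respiration:.0f}  NPP: {npp:.0f}")
# With respiration above half of GPP, less than half remains for growth.
assert npp < gpp / 2
```

A growing forest invests most of that NPP in new structure (trunks, branches, roots); in a mature forest, respiration consumes nearly the whole budget and NPP approaches zero, which mirrors the consumption-versus-investment decision described above.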

Most marine phyla were unable to manage the transition to land and remain aquatic to this day.  Arthropods found a way, and scorpions, spiders, and millipedes were early pioneers.  The insect and fish clades comprise the most successful terrestrial animals today, as fish led to all terrestrial vertebrates.  Gastropods made it to land, mainly as snails and slugs, as did several worm phyla, but the rest of aquatic life generally remained water-bound.  Also, many animal clades have moved back-and-forth between water and land, usually hugging the shoreline, sometimes in a single organism’s life cycle, which blurred the terrestrial/aquatic divide at times.  The first fish to venture past shore seem to have accomplished it in the mid-Devonian, and colonizing land via freshwater environments was a prominent developmental path.

Although the first insects appeared in today’s fossil record about 400 mya, they were already fairly developed, which means that they had an older lineage, probably beginning in the Silurian.  The first land animals would have been vegetarians, as something had to start the food chain from plants, and early insects were adapted for plant-eating.  Plants would have then begun to co-evolve with animals as they tried to avoid being eaten.

When life colonized land, global weather systems began dramatically impacting life, as land plants and animals would be at the mercy of the elements as never before, and forests and deserts formed.  The continents also began coming together and eventually formed Pangaea in the Permian, and converging plates meant subduction and mountain-building.  Mountains in the British Isles and Scandinavia were formed in the Devonian, the Appalachians became larger, and the mountains of the USA’s Great Basin also began developing.  Colliding tectonic plates can build mountains, and mountain ranges greatly impacted weather systems during terrestrial life’s future, which also profoundly influenced oceanic ecosystems.

As with previous critical events, such as saving the oceans and life on Earth itself, life helped terraform Earth.  But the late Devonian is an instance when the rise of land plants may have also had Medean effects.  Carbon dioxide sequestration, which reduced the atmosphere’s carbon dioxide concentration by up to 80%, may have cooled Earth’s surface enough that an ice age began and another of Earth’s mass extinctions ensued.  As with the Ordovician extinction, the ultimate cause of the Devonian extinctions seems to have been rising and falling sea levels, associated with growing and receding ice caps, as Gondwana still covered the South Pole.  Devonian extinction events began happening more than 380 mya, and a major one transpired about 375 mya, called the Kellwasser event.  The Kellwasser event is today generally attributed to the water becoming cold and anoxic.[265]  A bolide impact has been invoked in some scientific circles, but the evidence is weak.[266]  Mountain-building and volcanic events also happened as the continents began colliding to eventually form Pangaea (and the resultant silicate and basaltic weathering removed carbon dioxide from the atmosphere), and those dynamics may have resembled what precipitated the previous major mass extinction.[267]  Black shales abounded during and after the Kellwasser event; they are always evidence of anoxic conditions, and they are how oil deposits initially formed.  However, the Kellwasser event’s anoxia may not have been due only to low atmospheric oxygen; the erosion of newly exposed land and the detritus of the new forest biomes created a vast nutrient runoff into the oceans that may have initiated huge algal blooms, which caused anoxic events near shore.[268]

Unlike the short, severe Ordovician events, the Devonian extinctions may have stretched over as long as 25 million years, with periodic pulses of extinction.  The Kellwasser event seems to have comprised several extinction events, and when they ended, at least 70% of all marine species had gone extinct and the greatest reefs in Earth’s history were 99.98% eradicated.  It took 100 million years before major reef systems appeared again.[269]  Armored fish and jawless fish lost half of their species, and armored fish were rendered entirely extinct in the event that ended the Devonian.

What was most relevant to humans, however, was the almost-complete extinction during the Kellwasser event of the tetrapods that had come ashore.  Tetrapods did not reappear in the fossil record until several million years after the Kellwasser event, and that absence has even been called the Fammenian Gap (the Fammenian Age is the Devonian’s last age).[270]  The Kellwasser event also appears to have been a period of low atmospheric oxygen content; one line of evidence is the lack of charcoal in fossil deposits.  Recent research has demonstrated that getting wood to burn at oxygen levels below 13-15% may be impossible.[271]  Because all periods of complex land life show evidence of forest fires, it is today thought that oxygen levels have not dropped below 13-15% since the Devonian.  But during the “charcoal gap” of the late Devonian, when the first landlubbing tetrapods went extinct, oxygen levels reached their lowest levels since the GOE, which must have impacted the first animals trying to breathe air instead of water.  During the Kellwasser event, there is no charcoal evidence at all, which leads to the notion that oxygen levels may have even dropped below 13%.[272]  That drop may reflect severe climatic stresses on the new mono-species forests, stresses probably connected to the ice age that the forests helped bring about with their carbon sequestration.  That is an attractively explanatory scenario, but the controversy and research continue.  The first seed plants probably appeared before the Kellwasser event, but it was not until after the Fammenian Gap that seed plants began to proliferate.[273]

The Kellwasser event ended the first invasion of land by vertebrates and created an evolutionary bottleneck.  Some stragglers survived the Kellwasser event, but the fossil record for the next seven million years is devoid of tetrapod fossils, with the exception of one species.[274]  After the Famennian Gap ended about 368 mya, tetrapods renewed their invasion of land, and tetrapods with many toes appeared in the fossil record during the second invasion.  Ichthyostega was Earth’s largest land animal in those days.  The tetrapods of the time may not yet have been true amphibians, but they were making the adjustments needed to become true land animals, such as losing their gills and improving their locomotion.  No new arthropods appeared on land during that time.

After several million years of adaptation, tetrapods seemed ready to become the dominant land animals, but then came the second major Devonian extinction event, today called the Hangenberg event.  While the ice age conditions around the Kellwasser event are debated, there is no uncertainty about the Hangenberg event; there were massive, continental ice sheets, accompanied by falling sea levels and anoxic events, as evidenced by huge black shales.[275]  The event’s frigidity was probably a key extinction factor, and anoxia was the other killing mechanism.  The Hangenberg event had devastating consequences: it ended the armored fish, nearly exterminated the new ammonoids (perhaps only one genus survived), drove oceanic eurypterids extinct, started trilobites on their way out as seafloor communities were devastated, marked the peak influence of lobe-finned fish, and saw the collapse of the Archaeopteris forests.[276]

Trees first appeared during a plant diversity crisis, and the arrival of seed plants and ferns ended the dominance of the first trees, so the plant crises may have been more about evolutionary experiments than environmental conditions, although a carbon dioxide crash and ice age conditions would have impacted photosynthesizers.  The earliest woody plants, which gave rise to trees and seed plants, largely went extinct at the Devonian’s end.  But what might have been the most dramatic extinction, as far as humans are concerned, was the impact on land vertebrates.  During the Devonian extinction, about 20% of all families, 50% of all genera, and 70% of all species disappeared forever.

There seems to have been convergent evolution among early tetrapods, but they were beaten back twice during the late- and end-Devonian extinction events, and what emerged the third time was different from what preceded it.[277]  As with many mass extinction events, evolution’s course was significantly altered in the extinction’s aftermath.  As with studies of human history, events are always contingent and not foreordained in Whiggish fashion.  Although the increase in “intelligence” may well be an inherent purpose of being in physical reality, the evolutionary path to the man writing these words had false starts, “detours,” singular events, expansions, bottlenecks, catastrophes, and the like.  Evolutionary experiments on other planets probably had radically different outcomes.  A mystical source that I respect once stated that there are one million sentient species in our galaxy, with a diversity that is staggering, and from what I have been exposed to (and here), I will not challenge it.


Making Coal, the Rise of Reptiles, and the Greatest Extinction Ever

World map in early Carboniferous Period (c. 340 mya) (Source: Wikimedia Commons) (map with names is here)

World map at end of Carboniferous Period (c. 300 mya) (Source: Wikimedia Commons) (map with names is here)

World map in late Permian Period (c. 260 mya) (Source: Wikimedia Commons) (map with names is here)

Chapter summary:

The period succeeding the Devonian is called the Carboniferous (c. 359 to 299 mya), for reasons that will become evident.  The Hangenberg event cut short the second attempt of vertebrates to invade land, and there was another 14-million-year gap in the fossil record, called the Tournaisian Gap, which is part of Romer’s Gap (considered to span about 30 million years).[278]  After every mass extinction, it took millions of years, even tens of millions, for ecosystems to recover, and markedly different ecosystems and plant/animal assemblages often replaced what existed before the extinction.  The Devonian spore-forests were destroyed, and outside of the peat swamps, the tallest trees in the Tournaisian Gap were about as tall as I am; even in the swamps, the tallest trees were about ten meters tall, as they were before the Hangenberg event.[279]

Peter Ward led an effort to catalog the fossil record before and after Romer’s Gap, which found a dramatic halt in tetrapod and arthropod colonization that did not resume until about 340-330 mya.  Romer’s Gap seems to have coincided with the low oxygen levels of the late Devonian and early Carboniferous.[280]  If low oxygen coincided with a halt in colonization, just as the adaptation to breathing air was beginning, the obvious implication is that low oxygen levels hampered early land animals.  Not just the lung but the entire chest cavity of the up-and-coming amphibians had to evolve to expand and contract, while also allowing for a new mode of locomotion.  When amphibians and splay-footed reptiles run, they cannot breathe, as their mechanics of locomotion prevent running and breathing at the same time.  Even walking and breathing at the same time is generally difficult.  This means that they cannot perform any endurance locomotion but have to move in short spurts.  This is why today’s predatory amphibians and reptiles are ambush predators.  They can only move in short bursts, and then have to stop, breathe, and recover from their oxygen deficit.  In short, they have no stamina.  This limitation is called Carrier’s Constraint.  The image below shows the evolutionary adaptations that led to overcoming Carrier’s Constraint.  Dinosaurs overcame it first, and that probably was related to their dominance and the extinction or marginalization of their competitors.  (Source: Wikimedia Commons)

The heart became steadily more complex during complex life’s evolutionary journeys.  Fish hearts have one pump and two chambers.  Amphibians developed three-chambered hearts, wherein oxygenated and deoxygenated blood are not structurally separated but mix.  That arrangement is obviously not as energy-efficient as separating oxygenated and deoxygenated blood.  Some later reptiles evolved four-chambered hearts, which their surviving descendants, crocodilians and birds, possess, and somewhere along the line, mammals also evolved four-chambered hearts, perhaps before they became mammals.

While the oxygen level changes of the GEOCARBSULF model show early fluctuations that the COPSE model does not, both models agree on a huge rise in oxygen levels in the late Devonian and Carboniferous, in tandem with collapsing carbon dioxide levels.  There is also virtually universal agreement that that situation was due to rainforest development.  Rainforests dominated the Carboniferous Period.  If the Devonian could be considered terrestrial life’s Cambrian Explosion, then the Carboniferous was its Ordovician.  In the Devonian, plants developed vascular systems, photosynthetic foliage, seeds, roots, and bark, and true forests first appeared.  Those basics remain unchanged to this day, but in the Carboniferous there was great diversification within those body plans, and Carboniferous plants formed the foundation for the first complex land-based ecosystems.  Ever since the Snowball Earth episodes, there has almost always been a continent at or near the South Pole, and the ice ages that have prominently shaped Earth’s eon of complex life probably always began with ice sheets at the South Pole.  The current ice age is arguably the only partial exception, but today’s cold period really began about 35 mya, when Antarctic ice sheets began developing.

The first tree forests formed in the late Devonian, and bark was the great innovation that led to forming the Carboniferous’s vast coal deposits.  Compared to modern trees, Carboniferous trees seemed to go overboard on bark, at least partly to discourage arthropods.  Today’s trees generally contain at least four times as much wood as bark.  Those early trees had about ten times as much bark as wood, and the bark was about half lignin.  Lepidodendron trees dominated the Carboniferous rainforest and could grow 30 meters tall.  Because it took more than a hundred million years for life to learn to break down lignin, that early lignin did not degrade via biological processes.  The early Carboniferous was warm, even with a small ice cap at the South Pole, and Earth’s first rainforests, which appeared in the late Devonian, proliferated in the Carboniferous.  The Carboniferous was the Golden Age of Amphibians, as the rainforest was largely global in extent and swamps abounded.  Amphibians were the Carboniferous’s apex predators on land, and some reached crocodile size and acted like crocodiles.

Artists have been depicting Carboniferous swamps for more than a century, and the cliché image includes a giant dragonfly.  That giant dragonfly represents a key Carboniferous issue and perhaps why the period ended.  That giant, and others like it, appeared in the fossil record about 300 mya, when oxygen levels were Earth’s highest ever, at somewhere between 25% and 35%.  The almost universally accepted reason for that high oxygen level is that burying all of that lignin for the entire Carboniferous Period removed carbon dioxide from the atmosphere in vast amounts.  Today, the estimate is that carbon dioxide fell from about 1,500 PPM at the beginning of the Carboniferous to 350 PPM by the end, which is lower than today’s value.  That tandem effect of sequestering carbon and freeing oxygen not only may have led to huge arthropods and amphibians, but also intensified the ice age that ended the Carboniferous.  The idea that high oxygen levels led to those giants was proposed more than a century ago, then dismissed, but has recently come back into favor.  Flying insects have the highest metabolisms of all animals, but they do not have diaphragmatic lungs as mammals have, or air-sac lungs as birds have, and although they may have some way of actively breathing by contracting their tracheas, it is not the bellows action of vertebrate lungs.  The two primary hypotheses for early insect gigantism are, first, that high oxygen, along with a denser atmosphere (the nitrogen mass would not have fallen, so increased oxygen would have added to the atmosphere’s mass), enabled such leviathans to fly, and second, that flying insects got a head start in the arms race and could grow large until predators that could catch them evolved.  The late Permian had an even larger dragonfly, when oxygen levels had crashed back down.  The evolution of flight is another area of great controversy, and insects accomplished it long before vertebrates did.
The general idea is that flight structures evolved from those used for other purposes.  For insects, wings appear to have evolved from aquatic “oars,” and gills became lungs.[281]  Reptiles did not develop flight until the Triassic, and only glided in the Permian.[282]

But it was not only flying insects that became huge: giant millipedes, scorpions, and other arthropods also lived in the Carboniferous, such as mayflies with half-meter wingspans.  The giant millipede (more than two meters long) has been featured in popular culture as a nightmare creature, although it was vegetarian.  The largest freshwater fish ever lived in the Carboniferous and reached seven meters long.[283]  The high-oxygen hypothesis is challenged for giant insects and giant animals in general, and the controversy will probably continue for many more years.[284]
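The claim above, that adding oxygen while the nitrogen mass stayed fixed would have made the whole atmosphere denser, can be checked with simple partial-pressure arithmetic.  The sketch below is my own illustration, not from the essay, and the 31% oxygen share is an assumed value within the 25%-35% range given earlier:

```python
# Back-of-envelope check (my illustration): hold nitrogen's partial
# pressure at today's ~0.79 atm and add enough oxygen to raise its
# share of a simplified two-gas atmosphere from 21% to an assumed 31%.
p_n2 = 0.79          # nitrogen partial pressure in atm, held constant
o2_fraction = 0.31   # assumed late-Carboniferous oxygen share

# Solve o2_fraction = p_o2 / (p_n2 + p_o2) for p_o2.
p_o2 = o2_fraction * p_n2 / (1.0 - o2_fraction)
p_total = p_n2 + p_o2

print(f"O2 partial pressure: {p_o2:.2f} atm")
print(f"Total pressure: {p_total:.2f} atm, about "
      f"{(p_total - 1.0) * 100:.0f}% denser than today's ~1 atm")
```

On those assumptions, total surface pressure rises by roughly 14%, which is the sense in which a higher-oxygen Carboniferous atmosphere would have given flying giants more air to push against.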

The Carboniferous also marked the rise of reptiles, which first appeared between 320 and 310 mya.  The very term reptile has become rather informal with the rise of cladistics, as birds and mammals descended from “reptiles” but are not called that.  The term paraphyletic refers to groupings such as reptiles, in which part of the clade is not classified in the named group; monophyletic clades (beginning with the last common ancestor and including all descendants) are tidier, and scientists often prefer them.[285]  Although the issue, as usual, is controversial today, it seems that amphibian and reptilian ancestors may have descended from different groups of tetrapods, and some seemingly transitional animals added to the controversy.[286]  But the idea that reptiles are descended from amphibians is still prominent.  Most importantly, reptiles were the first amniotes, a clade that includes birds and mammals; amniotes do not need to lay their eggs in water, which allowed reptiles to become independent of rainforests and swamps.  Reptiles then colonized niches previously unavailable to amphibians.  The first reptiles were small and ate insects, and laying eggs in trees may have been a solution to arboreal life.[287]  Seed plants and amniotes could reproduce on dry land, and their success greatly expanded terrestrial ecosystems.

Amniotes are primarily classified by the number of holes in their skulls.  The earliest reptiles may have had skulls like amphibians, with only holes for eyes and nostrils.  In some early reptiles, a hole developed behind the eye, probably for attaching jaw muscles, and animals with such skulls are called synapsids; mammals evolved from that line and are essentially its only survivors.  Near the Carboniferous’s end, at about 300 mya, skulls with two holes behind the eye developed, probably for anchoring more powerful jaw muscles.  Animals with those skulls are called diapsids, and one line of diapsid descendants eventually ruled Earth as dinosaurs.  Dinosaurs had the greatest terrestrial jaws of all time, and jaws are the primary energy-acquisition equipment of vertebrates.  Complex life’s arms race reached its ultimate expression in dinosaurs, with the fearsome teeth and jaws of the late-Cretaceous’s Tyrannosaurus rex matched against the spear-and-shield arrangement of Triceratops.  Jurassic dinosaurs such as Stegosaurus, with its thagomizer, would not have been easy meals for predators such as Allosaurus.  Turtles are today generally considered to be diapsids that lost their skull holes; they would otherwise seem to be anapsids.

In the oceans, the Carboniferous is called the Golden Age of Sharks, and ray-finned fish rose to a ubiquity that they have yet to fully relinquish.  Ray-finned fish probably prevailed because of their high energy efficiency.  Their skeletons and scales were lighter than those of armored and lobe-finned fish, and their increasingly sophisticated and lightweight fins, their efficient tailfin method of propulsion, changes in their skulls and jaws, and new ways to use their lightweight and versatile equipment accompanied and probably led to the rise and subsequent success of ray-finned fish in the Carboniferous and afterward.[288]  Foraminifera, which are amoebic protists, rose to prominence for the first time in the Carboniferous.  Reefs began to recover, although they did not return to pre-Devonian conditions; those vast Devonian reefs have not been seen again.  Today’s stony corals did not appear until the Mesozoic Era.  Trilobites steadily declined, nautiloids developed the curled shells familiar today, and straight shells became rare.  The soft-bodied cephalopods that were ancestral to squids and octopi first appeared in the early Carboniferous, although some Devonian specimens might qualify.  Ammonoids flourished once again, after barely surviving the Devonian extinction.  This essay is only focusing on certain prominent clades, and there are many animal phyla and plant divisions.  The early Carboniferous, for example, is called the Golden Age of Crinoids, which are echinoderms, a phylum that includes starfish.[289]  The crinoids had their golden age when the fish that fed on them disappeared in the end-Devonian extinction.  Earth’s ecosystems are vastly richer entities than this essay, or any essay, can depict.

In the early Carboniferous, the continents were still somewhat dispersed but began merging into the supercontinent called Pangaea.  The period from the Late Devonian extinction event to the late Permian, about 260 mya, is also called the Karoo Ice Age, which had various stages of ice sheet development.  It was the last ice age before the current one.  About 325 mya, there was a marine extinction that some have argued should rank among the Big Five mass extinctions, but others are doubtful, and the authors of the argument re-ranked that extinction to sixth in significance.[290]  It was caused by fluctuating sea levels, due to the ice sheet advances and retreats and the continental uplift that resulted from the continents colliding to form Pangaea.  The Mississippian Epoch ended with that extinction and the Pennsylvanian Epoch began.  The growing ice cap eventually destroyed the Carboniferous rainforest.  Cooler oceans have less evaporation and therefore produce drier climates; that dynamic began reducing the Carboniferous rainforest, breaking it into “patches” that kept shrinking, eventually resulting in the rainforest’s collapse.  Only a few rainforest pockets survived into the Permian Period.  As usual, scientists have proposed several contributory causes of the rainforest collapse, but climate change was probably the ultimate cause.  The collapse of the rainforest ended the dominance of amphibians and of flora and fauna adapted to warm, wet environments.  The cooler, drier conditions that ended the Carboniferous led to the dominance of seed plants and amniotes.

When the Carboniferous rainforest collapsed, beginning about 307 mya, Earth’s oxygen levels were at their highest ever.  About 75% of Earth’s coal deposits were formed in the Carboniferous, with most laid down in the 25-million-year Pennsylvanian Epoch.  There will never be a coal-forming period like that again on Earth, as organisms developed the ability to decay lignin about 290 mya.  Even if humans burned all fossil fuel deposits, carbon dioxide levels would never again reach the levels that preceded the Carboniferous, at many times today’s concentrations.

The Permian Period (c. 299 to 252 mya) ended with the greatest mass extinction in the eon of complex life.  The Carboniferous rainforests not only collapsed, but great deserts formed in the interior of the newly formed supercontinent of Pangaea.  Pangaea was a little scattered when it formed, with huge ice sheets at the South Pole, but by the end of the Permian, the ice age was finished and another ice age would not appear for more than 200 million years.  The continent that became North America and Europe collided with Gondwana, and a gigantic mountain range formed as a result, called the Central Pangaean Mountains.  Those mountains created climatic effects, and great deserts formed on each side of that range.  Remnants of that range include the Appalachians and part of the Atlas Mountains.  The Ural mountain range began forming during the creation of Pangaea, and the Tethys Ocean formed during the late Permian. 

Conifer forests, which I have spent my life happily hiking through, first appeared in the Permian.  Devonian forests were 10 meters tall, Carboniferous rainforests were 30 meters tall, and Mesozoic conifers reached 60 meters tall, and even sequoias appeared.  Conifers were among the early seed plants and used pollen to fertilize their seeds, a method that did not need the water that spores did.  As conifers appeared during an ice age, they are well-adapted to cold climates, which is why conifer forests are so prevalent today.  As discussed later in this essay, conifers were later displaced by flowering plants, which engaged in an unprecedented symbiosis with animals, and conifers were pushed to Earth’s cold margins.[291]  Tree ferns declined after the Carboniferous, but still exist today.

In water environments, there are no diurnal temperature swings like those on land, so regulating body temperature was not a significant issue for aquatic animals.  The rise of reptiles created a new kind of animal, for which regulating body temperature became a major challenge, particularly in an ice age climate.  The early Permian was the Golden Age of Synapsids, as they dominated the land masses (and became the largest non-amphibious land animals to that time).  Thermoregulation was a prominent trait, with huge “sails” on the backs of large synapsids.  Dimetrodon was popular in children’s models of ancient animals (I had one in my childhood collection, along with mammoths and stegosaurs).  Animals made many adaptations to land’s temperature swings.  Today’s mammals and birds are warm-blooded, and controversy has raged over whether dinosaurs were warm-blooded.  Keeping a body’s temperature within certain ranges allows for optimal enzyme functioning.  Humans, for instance, can only survive within a narrow range of body temperature.  High temperatures kill humans because key enzymes begin falling apart and vital reactions cease.  If temperatures are too low, activation energies for vital reactions are not reached.  But maintaining an ideal body temperature is costly; mammals and birds consume about 10-to-15 times the energy of today’s reptiles.[292]  A snake can live for a month on a good meal, while a mammal must constantly eat or hibernate.  As with other life features, those synapsid sails may have had a dual function, and the most popular hypothesis today is that they were used for “display” to attract mates.  Sexual selection has been a major source of evolutionary change (it is almost certainly why men are larger and stronger than women), and those tremendous sails may have been an early example of enhancing a feature to attract a mate.  Dimetrodon also had different-sized teeth, which were probably distant ancestors of mammalian teeth.
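The cost of warm-bloodedness lends itself to simple arithmetic.  Assuming, purely for illustration, that one good meal sustains a snake for 30 days, the 10-to-15-times figure implies the same meal would sustain a similar-sized warm-blooded animal for only a few days:

```python
# Rough arithmetic on the 10-to-15x metabolic figure cited above.
# The 30-day meal interval for a snake is an assumed round number.
snake_days = 30.0
for multiplier in (10.0, 15.0):
    mammal_days = snake_days / multiplier
    print(f"At {multiplier:.0f}x a reptile's energy use, "
          f"one snake-sized meal lasts about {mammal_days:.0f} days")
```

That two-to-three-day figure is the arithmetic behind why a mammal must constantly eat or hibernate.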

During the Permian, synapsids had great radiations, typical of golden ages.  Synapsids developed many evolutionary novelties, and one of them led to therapsids, which first appeared about 275 mya in the mid-Permian, just as oxygen levels began crashing, according to GEOCARBSULF.  Synapsids began to overcome Carrier’s Constraint by developing stiffer backbones, so they no longer had the serpentine gait of lizards.[293]  Therapsids were the direct ancestors of mammals and further overcame Carrier’s Constraint by evolving a more erect posture; their legs were more under them rather than splayed to their sides.  This improved their breathing ability, and that it happened during Earth’s most spectacular oxygen crash is probably no coincidence.  However, they inherited a posture that put most body weight on their front legs, so they had a “wheelbarrow” gait that still hampered their ability to breathe and run, although it was an improvement over their synapsid ancestors.[294]  From a high of 25%-35% at the end of the Carboniferous, oxygen crashed to around 15% by the Permian’s end.  Animals that could adapt to lower oxygen levels could dominate, and therapsids did just that, completely displacing pelycosaur synapsids, which included Dimetrodon, while huge dinocephalians dominated the mid-Permian.  The largest amphibian ever also lived in the high-oxygen times of the mid-Permian.  As oxygen levels crashed in the late Permian, land animals became smaller.[295]  In the mid-Permian, synapsids began to develop a secondary palate that allowed them to breathe and chew at the same time.  Therapsid jaws became more powerful and their teeth became more diverse than synapsid teeth.  Such innovations typically improved an animal’s energy efficiency, and thus were favored.  Dimetrodon disappeared about 272 mya, and at 270 mya there was a mass extinction, today called Olson’s Extinction, which hit land and sea animals hard, as well as land plants.  The cause is still a mystery, although climate change has recently been presented as a candidate.  Therapsids then dominated land animals until the Permian extinction, and Olson’s Extinction was arguably that calamity’s first event.

One of Peter Ward’s recent hypotheses is that animals that adapted to the changing conditions, particularly when oxygen levels crashed, survived the catastrophes to dominate the post-catastrophic environment.  In the late Permian, several therapsid lines developed turbinal bones, which may have been used for respiratory water retention in a world where oxygen levels were crashing.[296]  This is a controversial issue, and related to the controversy over when reptiles developed endothermy.  The therapsid ancestors of mammals, cynodonts, first appeared about 260 mya, and had many mammalian features.

The earliest diapsid appeared in the late Carboniferous and looked like a modern lizard.  It also had some canine-type teeth.  Diapsids, however, were marginal animals in the Permian, as that was the time of synapsid and therapsid dominance.  Diapsids would not rise to prominence until the Triassic.

In the oceans, reefs finally began to make a comeback in the late Permian, and the remnants of those reefs can be seen in Texas today.  Tabulate and rugose corals were abundant, as were ammonoids and echinoderms.  Articulate brachiopods (with two shells that can open and close, like a clam’s) were also doing fine.  Fish (ray-finned fish and sharks), however, were the dominant sea animals.  Trilobites were a mere shadow of their former selves, eking out an existence on the seafloor, like the way that nautiloids eked out their existence in deep waters while ammonoids dominated the surface.  And then came the Great Dying. 

The Permian extinction, like the prior major extinctions, was more than one event and had more than one cause.  The Cretaceous extinction is what most people think about when mass extinctions are mentioned (as it was Hollywood-spectacular and ended one fascinating line of animals and paved the way for mammals to dominate), and it led to the existence of humans, but the Permian extinction was the Big One.  Before the taboo against investigating mass extinctions began lifting in the 1970s and 1980s, specialists generally thought that the Permian extinction only impacted the oceans and left terrestrial ecosystems unaffected.  The picture has radically changed since the 1980s, and the terrestrial extinctions are now acknowledged as similarly catastrophic.[297]  The Permian extinction is Earth’s only mass extinction of insects, and although plants are not normally vulnerable to mass extinctions, land plants also barely survived the Permian extinction.  But the extinction came in phases, and each may have had different causes.  There is great ongoing controversy and research regarding the issues. 

The ultimate cause of the Permian extinction was probably the formation of a supercontinent.  When Pangaea finally formed, new dynamics appeared.  One was that there became only one major ocean, the Panthalassic; the Paleo-Tethys and nascent Tethys oceans were largely landlocked.  Those landlocked smaller oceans would have become like lakes, with little current in them (the Black Sea is the favored analogy today), and the Panthalassic Ocean (from which the Pacific Ocean eventually formed) had no continents to divert its currents on their journey from the equator to the poles, so today’s circuitous thermohaline circulation, shown below, would not have existed.  (Source: Wikimedia Commons)

The Panthalassic’s currents were slow and lazy, and the deep-water oxygenation of today’s oceans would have been quite different, and perhaps largely absent.  Also, when supercontinents form, the sea level falls as the oceanic basin expands, and the late Permian’s sea levels are thought to be among the lowest in the eon of complex life.  The many shallow seas of complex life’s earlier periods, which were the abode of most marine life, also disappeared with the formation of Pangaea (nearly 90% of the continental shelves became exposed).[298]  That new land exposed the swamps and deltas formed in the Carboniferous, and the oxidation of those carbonaceous deposits drew down atmospheric oxygen and increased carbon dioxide.  The merging of continents also results in mountain-building and volcanism.

Also, the formation of Pangaea (the processes that led to its formation are controversial) may have led to the dynamics that broke it apart.  The Hawaiian Islands are part of a volcanic island chain that began forming more than 80 mya and is due to a hotspot bubbling up from Earth’s mantle.  Although the issue is far from settled, a prominent hypothesis is that the formation of Pangaea plugged hotspots and prevented heat from venting from Earth’s core, which led to Pangaea’s swelling and fracturing.[299]  Part of the evidence for that hypothesis was the relatively sudden and widespread volcanism that sprouted up around Pangaea, which followed a fracture pattern known to form around such crustal upwellings.  The volcanism and resultant fracture lines formed today’s continents.[300]  As can be seen in the map of Earth’s landmasses during the late Permian, what became China and Siberia were on the northeast margins of Pangaea, bordering the Paleo-Tethys Ocean, and two volcanic events arising from China and Siberia are currently favored as key proximate causes of the Permian extinctions.

The ecosystems may not have recovered from Olson’s Extinction of 270 mya, and at 260 mya came another mass extinction, called the mid-Permian, Capitanian, or end-Guadalupian extinction, although a recent study found only one extinction event, in the mid-Capitanian.[301]  In the 1990s, the extinction was thought to result from falling sea levels.[302]  But the first of the two huge volcanic events, in China, coincided with the extinction.  A major volcanic event can have several deadly outcomes.  As with an eruption in the early 1800s, massive volcanic events can block sunlight with their ash and create wintry conditions in the middle of summer.  That alone can be catastrophic for life, but it is only one potential outcome of volcanism.  What probably had far greater impact were the gases belched into the air.  As oxygen levels crashed in the late Permian, there was also a huge carbon dioxide spike, as shown by GEOCARBSULF, and the late-Permian volcanism is the near-unanimous choice as the primary reason.  That would have helped create super-greenhouse conditions that perhaps came right on the heels of the volcanic winter.  Not only would carbon dioxide vent from the mantle, as with all volcanism, but the late-Permian volcanism occurred beneath Ediacaran and Cambrian hydrocarbon deposits, which burned and spewed even more carbon dioxide into the atmosphere.  Great salt deposits from the Cambrian Period were also burned by the volcanism, which created hydrochloric acid clouds.  Volcanoes also spew sulfur, which reacts with oxygen and water to form sulfuric acid.  The oceans around the volcanoes would have become acidic, and that fire-and-brimstone brew would have also showered the land.  Furthermore, the warming initiated by the initial carbon dioxide spike could have warmed the oceans enough to liberate methane hydrates, creating even more global warming.
Such global warming apparently reached the poles, which not only melted away the last ice caps and ended an ice age that had waxed and waned for 100 million years, but also left deciduous forests in evidence at high latitudes.  A 100-million-year Icehouse Earth period ended and a 200-million-year Greenhouse Earth period began, but the transition appears to have been chaotic, with wild swings in greenhouse gas levels and global temperatures.  Warming the poles would have lessened the heat differential between the equator and poles and further diminished the lazy Panthalassic currents.  The landlocked Paleo-Tethys and Tethys oceans, and perhaps even the Panthalassic Ocean, may all have become superheated and anoxic Canfield Oceans as the currents died.  Huge hydrogen sulfide events also happened, which may have damaged the ozone layer and led to ultraviolet light damage to land plants and animals.  That was all on top of the oxygen crash.  With the current state of research, all of the above events may have happened, in the greatest confluence of life-hostile conditions during the eon of complex life.  A recent study suggests that the extinction event that ended the Permian may have lasted only 60,000 years or so.[303]  In 2001, a bolide event was proposed for the Permian extinction with great fanfare, but it does not appear to be related to the Permian extinction; the other dynamics would have been quite sufficient.[304]  The Permian extinction was the greatest catastrophe that Earth’s life experienced since the previous supercontinent existed in the Cryogenian.[305]

Siberian volcanism (which formed the Siberian Traps) is considered to have been the main event.  The Chinese volcanism of ten million years earlier was a prelude, with other minor events between them, in a series of blows that left virtually all complex life devastated when it finally finished.  To give some perspective on the volcanism's magnitude, when Mount Tambora erupted in 1815 and caused the Year Without a Summer, it is estimated that the eruption totaled 160 cubic kilometers of ejecta.  The Siberian Traps episode lasted a million years and, although it was more of a lava event than an explosion (although there were also plenty of explosions), the total ejected lava is estimated at one-to-four million cubic kilometers.
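To make that comparison of magnitudes concrete, here is a back-of-envelope calculation using only the figures quoted above (160 cubic kilometers for Tambora, one-to-four million for the Siberian Traps); it is a rough sketch, not a geological estimate of its own:

```python
# Rough scale comparison between Mount Tambora's 1815 ejecta and the
# estimated lava output of the Siberian Traps (figures as quoted above).
tambora_km3 = 160
siberian_low_km3 = 1_000_000
siberian_high_km3 = 4_000_000

low_ratio = siberian_low_km3 // tambora_km3    # 6,250
high_ratio = siberian_high_km3 // tambora_km3  # 25,000
print(f"Siberian Traps ~ {low_ratio:,} to {high_ratio:,} Tambora eruptions")
# → Siberian Traps ~ 6,250 to 25,000 Tambora eruptions
```

In other words, the event that merely caused the Year Without a Summer would have to be repeated thousands of times over to match the Siberian Traps.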

The Chinese eruption was the preview, and a brief review of the casualties will make clear how thoroughly it devastated marine environments.  Tabulate and rugose corals were brought to the brink of extinction, and ammonoids, echinoderms, articulated brachiopods, gastropods, and complex foraminiferans suffered similarly, while fish, bivalves, and small foraminiferans did relatively well.[306]

After the mid-Permian extinction, marine life recovered and there were many radiations to fill empty niches, but coral reefs did not recover.  Between the two big extinction events, extinction levels were highly elevated, which suggests that some of those aforementioned dynamics were still wreaking havoc, with possible cascade effects.  Critics of extinction hypotheses often say: “Correlation is not necessarily causation.”  While there can be great merit to that position, it seems to be overused by various critics.  When the guns are as smoking as those volcanic events, and they so often “correlate” with mass extinctions, it becomes increasingly hard to deny that they were at least the immediate cause.[307]

The end-Permian extinction correlated rather precisely with the eruption of the Siberian Traps, which continued for a million years and spewed millions of cubic kilometers of basalt.  The end-Permian extinction was the final blow for many ancient organisms.  My beloved trilobites made their final exit from Earth during the end-Permian extinction, as did tabulate and rugose corals, spiny sharks, and the last freshwater eurypterids.  Articulate brachiopods completely vanished from the fossil record but reappeared in the Triassic via ghost ancestors; brachiopods never recovered their former abundance, however, and have lived a marginal existence ever since.  Glass sponges and bryozoans disappeared along with the reefs, while complex foraminiferans and radiolarians also vanished, and all of them staged comebacks in the Triassic via ghost ancestors.  Bivalves suffered relatively modestly (“only” about 60% of bivalve genera went extinct) and quickly recovered, fish were barely affected, and gastropods were devastated but quickly recovered.  Ammonoids went through their typical boom-and-bust pattern during the Permian extinctions, while nautiloids kept dwindling but scraped by in their deep-water exile.  In the final tally, more than 95% of all marine species went extinct.  Not only was the death toll tremendous, but the post-Permian oceans were so different from before that the Permian extinction marks the end of an era, which began with the Cambrian Explosion.  The Paleozoic Era ended with the Permian extinction and the Mesozoic Era began.

On land, the devastation was similar.  Again, insects suffered their only mass extinction, and several orders of insects vanished from the fossil record after the Permian; those gigantic flying insects of Paleozoic times also vanished forever.  Permian conifer forests gave way to deciduous forests in the wake of global warming, and early gymnosperms and seed ferns were largely replaced as lycophytes made a comeback in the early Triassic.  The lycophyte radiation in the wake of the Permian extinction is typical of what are called disaster taxa, which are the first organisms to colonize disturbed environments.  Reptiles and amphibians lost nearly two-thirds of their families, which translates to more than 90% of all species.  All large herbivores and predators went extinct, along with gliding reptiles.  In total, the Permian extinctions wiped out about 90-96% of all species, more than 80% of all genera, and nearly 60% of all families.  Nothing else in the history of complex life comes close, which puts the Permian extinction in a category all its own. 

Although the overwhelming devastation of the Permian extinction seemed to play no favorites and whatever survived was the luck of the draw, recent research has demonstrated that even with such a catastrophe, certain life forms were more resilient than others, related to biological “buffers” in their life processes.  In marine environments, the warming, anoxia, and acidification would have wiped out species vulnerable to them, and corals were and still are particularly susceptible to those changes.  Those conditions wiped out the corals in the Permian extinction, and they are the first ecosystems being devastated today, with similar conditions of warming, anoxia, and acidification.[308]  Whether it was the ability to move to safer environs or the ability to buffer chemical changes, the more resilient organisms had a better survival rate than others.


The Reign of Dinosaurs

World map in mid-Jurassic (c. 170 mya) (Source: Wikimedia Commons) (map with names is here)

World map in mid-Cretaceous (c. 105 mya) (Source: Wikimedia Commons) (map with names is here)

Chapter summary:

The period following the greatest extinction event ever is called the Triassic (c. 252 to 201 mya).  The Triassic was also the Mesozoic Era’s first period (the other two were the Jurassic and Cretaceous).  The Mesozoic is also known as the Golden Age of Reptiles, but most people think of it as the reign of dinosaurs.  However, dinosaurs did not yet exist when the Triassic began.

There was a “coal gap” in the early Triassic, and depending on the framework and which scientist is asked, it took Earth’s ecosystems 10 million years (when the environment recovered enough to sustain normal ecosystems), 30 million years (when terrestrial ecosystem diversity recovered), or 100 million years (when marine ecosystem diversity recovered) to recover from the Permian extinction.  On land, the forests slowly recovered, and disaster-taxa lycophytes dominated the early Triassic.  Seed ferns dominated the Southern Hemisphere, and cycads, which resemble palm trees, and ginkgo trees (which first appeared in the late Permian; the living fossil Ginkgo biloba is the line’s only surviving member) also prospered.  In the Triassic’s Northern Hemisphere, on what became North America, Europe, and Siberia, conifer forests recovered and blanketed the land.

From the Permian extinction’s devastation arose a reptilian sheep called Lystrosaurus.  Fossil hunters of early Triassic sediments have been frustrated for many years, as nearly 95% of preserved early Triassic land animal remains are Lystrosaurus, because it was virtually the Permian extinction’s only land-animal survivor.  There has been debate for many years about why it survived when almost nothing else did.  No single animal ever dominated Earth’s land masses as thoroughly as Lystrosaurus did during the early Triassic.  Lystrosaurus was probably a burrower (many have likened Lystrosaurus to a pig because of that burrowing), which may have provided the shelter needed to survive the Permian holocaust.  It may also have been a generalist herbivore that could eat most surviving plants.[309]  But some think that its survival, when almost every other species died, was due to luck.  Luck is a surprisingly common proposed explanation for evolutionary events and outcomes, and some creatures seemed to be in the right place at the right time while others were in the wrong place at the wrong time.  The spread of Lystrosaurus was also aided by two other facts: the land masses formed one continent, so Lystrosaurus could simply walk to dominance of Earth; and few predators capable of eating a Lystrosaurus survived.  One swamp denizen ate Lystrosaurus (being semi-aquatic may have also helped species survive the Permian extinction), as did another carnivore, but not much else did.  Lystrosaurus was a therapsid, as were the dominant land animals before the Permian extinction.

The Golden Age of Lystrosaurus lasted only about a million years before it was displaced by much larger herbivorous reptiles, and diapsids, particularly archosaurs, began displacing therapsids early in the Triassic.  A cynodont descendant, Thrinaxodon, burrowed and was possibly a direct ancestor of mammals.[310]  If it was not our direct ancestor, it was a close cousin to it.  Proto-mammals were displaced and largely driven underground during the Triassic, and many of them resembled rats and other rodents.  About 225 mya, which was about halfway through the Triassic, early mammals first appeared, although there is plenty of fierce controversy over exactly which animal could be called a mammal.[311]  But reptiles starred in the Mesozoic’s tale, dinosaurs in particular.  Mammals were small, marginal creatures, and until the late Mesozoic, they only emerged from their burrows at night to feed.

In Triassic seas, ammonoids recovered from the brink of extinction at the Permian’s end to live in their golden age while still periodically booming and busting.  It took ten million years after the Permian’s end for reefs to begin to recover, and when they did, they were formed by stony corals, which evolved from their tabulate and rugose ghost ancestors.  Stony corals also built today’s reefs.  Bivalves dominated biomes in which brachiopods once flourished, and have yet to relinquish their dominance.  Before the Permian extinction, about two-thirds of marine animals were immobile.  That number dropped to half during the Triassic, ecosystems became far more diverse, and a marine “arms race” began in the late Triassic.  Predators invented new shell cracking and piercing strategies, and prey had to adapt or go extinct.  The few surviving brachiopods and crinoids were driven to ecosystem margins, and the Jurassic and Cretaceous would see the appearance of shell-cracking crabs and lobsters.

The Tethys Ocean grew during the Triassic, and in the Jurassic there were no more island barriers on the Tethys’s east end.  The Paleo-Tethys was finally squeezed out of existence by islands that became part of Eurasia.  The shallow margins of the Tethys became the greatest oil source in Earth’s history.  The Proto-Tethys and Paleo-Tethys oceans also formed oil deposits, but about 70% of the world’s oil deposits initially formed during the Mesozoic’s anoxic events, primarily along the Tethys’s margins.  In the Middle East, the Caspian Sea, Western Russia, North Africa, the Gulf of Mexico, and Venezuela, virtually all of the oil deposits were laid down by dying and preserved organisms along Tethyan shores.  In the early Triassic, along the west end of what became North America, oceanic plate subduction under continental plates initiated a series of volcanic and mountain-building events that continue to this day.  The foundations of the Sierra Nevada mountain range were formed then.  I have spent my fair share of time hiking through them.

Low-oxygen Mesozoic oceans saw the rise of unusual biomes.  In methane seeps in the Mesozoic’s global ocean floor, bivalves and brachiopods formed symbiotic relationships with chemosynthetic organisms that digested methane.[312]  All over the world, scientists have been amazed to find rock layers almost entirely comprised of shells of those innovative, low-oxygen surviving shelled animals.[313]

As with cliché images of Carboniferous rainforests that depict giant dragonflies, the cliché dinosaur image has volcanoes in the background (1, 2).  The Mesozoic began and ended with tremendous volcanic eruptions, and major eruptions dotted the Mesozoic.  Those eruptions vented vast amounts of carbon dioxide into the atmosphere and were responsible for the high carbon dioxide levels that dominated the Mesozoic, according to GEOCARBSULF and its subsequent corrections, which made it such a hot era.  Hot seas also do not hold as much oxygen as cold seas, which contributed to the anoxic events that continually visited Mesozoic oceans, particularly the Tethys.  Hot, low-oxygen air is hostile to animal life, and during the Triassic, many reptiles beat the heat by migrating back to the oceans where their ancestors hailed from.[314]  Those seagoing reptiles soon dominated Earth’s oceans in complex life's greatest migration from land to sea.  Ichthyosaurs, which looked like reptilian dolphins, first appeared about 245 mya and survived for about 150 million years.  The ancestors of plesiosaurs also appeared when ichthyosaurs did.  By 215 mya, some ichthyosaurs became gigantic; one species reached more than 20 meters in length and had Earth’s largest eyes ever, at about the size of dinner plates.[315]  Ichthyosaurs hunted the squid’s ancestors (which could become fairly large), Earth’s other big-eyed animals, but feasted on a wide variety of prey as the late Triassic oceans’ apex predators.  Also, a shellfish-eating cousin of plesiosaurs lived in the Triassic.  Aquatic reptiles overcame Carrier’s Constraint, and many aquatic reptiles of the Mesozoic seem to have become warm-blooded and also gave live birth.

So far, this essay has dealt lightly with regional differences and largely confined the discussion to polar, temperate, and tropical conditions in the seas, and rainforest versus dryer conditions on land.  While Pangaea existed, barriers to species diffusion on land were relatively modest, hence Lystrosaurus's dominance.  But Pangaea began to break up at the Triassic’s end, and continental differences in plants and animals often became significant in later times.  Although the formation of Pangaea had profound impacts, because land life was relatively young, the differences and resultant changes due to the removal of oceanic barriers were less spectacular than would happen in the distant future, such as when South America connected to North America.

For an example of how geography impacted early animal evolution, therapsids are thought to have evolved in non-tropical Permian climates.  That non-tropical beginning influenced therapsid evolution and particularly strategies for regulating body temperature.  Therapsids were rather stocky and had short limbs and tails, which is a cold-weather adaptation seen in mammals today.  There is plenty of speculation and research on the issue of therapsid thermoregulation because mammals are the therapsid line’s last survivors.  Diapsids, on the other hand, evolved in warmer climates, were relatively gracile, and had particularly long tails.[316]  That long tail was critical for the appearance of bipedal reptiles, as it shifted their center of gravity over their hips. 

Until my lifetime, scientists thought of dinosaurs as slow and stupid, but that view has changed.  In the 1970s, scientists realized that prior depictions of bipedal dinosaurs such as Tyrannosaurus rex erroneously depicted them with upright postures.  Their actual posture had the tail, spine, and head all on a line largely parallel with the ground.[317]  Not until the release of Jurassic Park did the public begin to see more realistic portrayals of bipedal dinosaur posture.  That posture may have been critical for the success of dinosaurs, as becoming bipedal, with their legs in an upright position under their bodies, allowed them to overcome Carrier’s Constraint.  Also, the notion of overcoming Carrier’s Constraint transformed the view of dinosaurs from lumbering, slow creatures to nimble runners.  The dinosaur line is considered monophyletic, and the first dinosaurs were bipeds.  All quadrupedal dinosaurs re-evolved their four-legged stances from the original bipedal posture, which is obvious in that nearly all quadrupedal dinosaurs had rear legs longer than their front ones.[318] 

The view of dinosaurian intelligence has also changed radically in the past generation, as evidence has been discovered that some dinosaurs were significantly encephalized (particularly the line that led to birds), as well as evidence for parenting and herd behaviors, and pack hunting.[319]  Dinosaurs had the first hands, even with opposable thumbs.[320]  Recent work on encephalization suggests that animals were well on their way toward human-level encephalization hundreds of millions of years ago, and might have attained it far earlier, perhaps by 70 mya, had the Permian extinction not intervened.  The world might be populated with sentient, civilized, and even space-faring reptiles today if events had played out slightly differently, such as that asteroid missing Earth 66 mya (or technologically advanced dinosaurs preventing its impact).

The direct ancestors of dinosaurs, archosauromorphs, first appeared in the late Permian, and some beleaguered specimens survived into the Triassic as ghost ancestors.  Until recently, the first true dinosaur was widely considered to be Eoraptor, which appeared about 231 mya.  Eoraptor looks like a miniature Tyrannosaurus rex and in fact belongs to the terrestrial dinosaur line, called theropods, that culminated with the Lizard King.  A study published in 2013, however, made the case that Nyasasaurus, dated to 243 mya, is either the first dinosaur yet discovered or a close cousin to it.[321]  Birds are also probably part of the theropod clade, as that line’s only survivors and the only surviving dinosaurs.  Eoraptor was about a meter long and weighed ten kilograms.  The time from the first diapsids to the first dinosaurs spanned nearly 100 million years, but there was nothing spectacular about dinosaurs then, as their early years were dominated by amphibians, then synapsids, and then therapsids.  Why dinosaurs rose to prominence has been a source of controversy and debate, but the contending answers are energy-based.

The relation of Carrier’s Constraint to the first dinosaurs’ bipedal posture is currently an issue of great interest, as it may explain why dinosaurs prevailed over therapsids.  According to GEOCARBSULF and COPSE, the early Triassic was a period of low oxygen following the Permian crash, down to 15% or so from the early Permian’s 25-35%.  Peter Ward’s hypothesis is that dinosaur ancestors evolved their bipedal posture and overcame Carrier’s Constraint in the Triassic’s low-oxygen environment.[322]  With running no longer interfering with breathing, quick dinosaurs displaced lethargic therapsids in the Triassic.  Even quadrupedal dinosaurs had postures with their legs directly under them, which overcame Carrier’s Constraint.  The standard hypothesis is that speed and stamina allowed dinosaurs to prevail (and their ability to breed in large numbers and quickly grow was a great advantage over mammals[323]), but dinosaurs also first appeared and spread after another mass extinction event about 230 mya, which may have resulted from volcanism and/or mountain-building in Alaska and along the west coast of Canada, with their attendant climatic effects.[324]  Today, a few competing hypotheses explain the rise of dinosaurs: their superior respiration and speed, their ability to rapidly breed and grow, or their opportunism when a mass extinction at 230 mya eliminated therapsid herbivores and left the biomes open for herbivorous dinosaurs called sauropodomorphs to appear and dominate by the Triassic’s end.[325]  Their probable descendants, the sauropods, are Earth’s largest land animals ever.  The question of why dinosaurs became so large is a central issue today and may well be related to another hot topic: the development of endothermy in dinosaurs. 

Birds are warm-blooded and today’s reptiles are cold-blooded.  Thermoregulation is a vast, complex issue, and warm-bloodedness or cold-bloodedness appears to be a result of evolutionary cost-benefit outcomes.  The first vertebrates that left Earth’s waters often basked, the first dominant reptiles had energy-regulating sails, and therapsids may have at least dabbled in chemical means of internal temperature regulation, although the evidence is thin.[326]  But the evidence for dinosaurian internal temperature regulation is strong, and the surviving therapsid line, the mammals, also developed internal temperature regulation.

The Triassic began hot and ended hot, and the Jurassic and Cretaceous were also hot, so staying warm was not a significant issue for dinosaurs.  Marine reptiles stayed cool by becoming aquatic, and for land-based dinosaurs, features such as Stegosaurus plates apparently replaced the sails of synapsids for both heating and cooling, and like the synapsid sail, those Stegosaurus plates may have also been used for display.[327]  Also, like the cliché, many large herbivorous dinosaurs lived near cooling swamps, although the issue has been controversial.  Cooling swamps and protective water holes that we see in the tropics today were a major aspect of Mesozoic landscapes.  But the thermoregulatory aspect that most work is directed toward today is how dinosaurs kept warm.  There is compelling evidence that dinosaurs regulated their body temperature in myriad ways, including internal chemistry.  All bipedal animals today are endotherms and they all have four-chambered hearts, as dinosaurs did.  Feathers, dinosaurs living near the poles (1, 2), and oxygen-isotope studies of dinosaur bones all support the idea that dinosaurs engaged in internal temperature regulation, but one of the more intriguing areas is that of dinosaur growth.  Like tree rings, bones have seasonal growth rings and they have been read for many dinosaur fossils.  They have been used to determine dinosaurian life expectancies.  Tyrannosaurus rex could live to be about 30, giant sauropods could live to be 50, and smaller dinosaurs, as with smaller mammals, lived shorter lives.  The tiny ones only lived three-to-four years and the mid-sized ones lived seven-to-fifteen years.[328]  Growth rates also provide thermoregulation evidence.  Tyrannosaurs had juvenile growth spurts and largely stopped growing as adults, and sauropods had growth rates equivalent to today’s whales, which are Earth’s fastest growing animals.[329]  But there is also evidence of ectothermic dynamics.  
The great size of dinosaurs would have led to relatively easy ways to stay warm, as large animals have a greater mass-to-surface area ratio, like the way in which complex cells overcame the energy generation issue.  Also, in the generally hot Mesozoic times, staying warm would have been fairly easy, particularly for huge dinosaurs. 
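The mass-to-surface-area point can be made concrete with a quick calculation, treating animals as simple spheres (a deliberately crude, hypothetical simplification; real body plans differ, but the scaling trend holds):

```python
import math

def surface_to_volume(radius_m: float) -> float:
    """For a sphere, (4*pi*r^2) / ((4/3)*pi*r^3) simplifies to 3/r."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume

# A rat-sized animal (~5 cm effective radius) exposes roughly 40 times
# more surface per unit of body volume than a sauropod-sized one (~2 m),
# and so loses heat proportionally far faster.
small = surface_to_volume(0.05)  # 60.0 per meter
large = surface_to_volume(2.0)   # 1.5 per meter
print(f"{small:.1f} vs {large:.1f} per meter -> {small / large:.0f}x difference")
```

A large dinosaur would thus have shed proportionally far less body heat, which is consistent with the mesothermy idea discussed below.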

As scientists know with mammals, although optimal performance can be attained with endothermy, it comes with a great energetic cost.  As with plants, an animal can spend its energy budget on consumption (metabolism) or investment (growth).  An intriguing hypothesis is that growing large was part of an energy strategy, as the benefits of size (reduced risk of predation, ease of conserving body heat and consequently less need for a high metabolism, ability to access new food sources, such as foliage high above the ground) outweighed their costs (energy devoted to growth instead of metabolism, the need to constantly feed).  Their size and the warm climate meant that large dinosaurs did not need as intense internal energy generation as mammals do, for instance, and dinosaurs may have been mesotherms, with internal energy regulation greater than ectotherms, but not as great as endotherms (mammals and birds).[330] 

In light of GEOCARBSULF's depiction of low Mesozoic oxygen levels, Peter Ward addressed a controversial issue regarding how dinosaurs breathed.[331]  Birds have an air sac breathing system with an inflexible septate lung, which is highly superior to the mammalian alveolar bellows lung.  At 1600 meters elevation, today’s birds are about twice as efficient at extracting atmospheric oxygen as mammals are.  Flying is the most aerobically demanding activity on Earth, and a bird’s air-sac breathing system is a primary reason why birds can fly; flying over the Himalayas is an energetic feat far beyond what any mammal can accomplish.  The high-performance respiration that birds possess, along with their efficient mitochondria, is also why they live far longer than similarly sized mammals.  When a mammal breathes, it inhales oxygenated air and exhales carbon dioxide, but it is not a very efficient system, as fresh and depleted air mix in the lungs.  The air sac system, on the other hand, passes fresh oxygenated air along the lungs with each breath.  One might say that birds constantly inhale.  Animations of the air sac system can help us understand it.  Since birds evolved from dinosaurs, and indeed are dinosaurs, just when this innovation developed is of great interest to paleobiologists.  If the early Mesozoic were the low-oxygen times that GEOCARBSULF depicts, then the air sac system would have been a logical adaptation to oxygen-poor air.

The issue of avian and dinosaurian air sacs and when they evolved has been the focus of a rancorous dispute that was only recently resolved and hinged on the hollow parts of bones, which is a phenomenon called skeletal pneumaticity.  The controversy involved dinosaur bone pneumaticity and how it may have been related to birds.  In a landmark paper in 2005, it was shown that birds have their most important air sacs where nobody thought they were, near a bird’s tail, not its head.  Not only that, pneumatic bones are all related to the air sac system, and birds have the same pneumatic bones as saurischian dinosaurs did.[332]  The obvious implication is that the air sac system evolved in theropods and sauropods, when dinosaurs first appeared.  If the air sac system appeared with the first dinosaurs, it is one more big reason why dinosaurs prevailed over the less respiratorily gifted therapsids.  Such a highly effective respiration system evolving in a low-oxygen environment is a tantalizing hypothesis.

Ornithischians, a great clade of herbivorous dinosaurs, appeared soon after theropods did, but were initially marginal dinosaurs and did not become abundant until the late Jurassic.  If dinosaurs all have the same common ancestor, then ornithischians, with their different hips, diverged quickly; so far, there is no good evidence that ornithischians breathed with the air sac system, yet they became the dominant herbivores in the relatively high-oxygen Cretaceous.[333]  The ornithischian advantage was a superior eating system.  Ornithischians were the only dinosaurs that chewed their food.[334]  Chewing squeezes more calories from plant matter and may be why ornithischians surpassed sauropods in the Cretaceous.  Sauropods did not chew their food but had rock-filled gizzards, as birds and reptiles do today; only rare ornithischians without chewing teeth had gizzards.  Sauropods began becoming gigantic in the late Triassic.  Sauropods also had the smallest proportional brains of any dinosaur.[335]  Theropods were the most encephalized dinosaurs, an early example of predators having larger brains in order to outsmart their prey; the most encephalized among them were dromaeosaurs, some of which were featured as clever killers in Jurassic Park.  Ornithopods were second only to theropods in encephalization and were among the most successful Cretaceous herbivores.  A fascinating aspect of some ornithopods was their seeming ability to communicate by bugling with a horn in their head’s crest.[336]  This kind of evidence strongly supports the idea of herd behavior in herbivorous dinosaurs.  There is also evidence of a dinosaur stampede, which has been keenly contested (1, 2) in recent years.[337] 

Below are examples of the only three kinds of dinosaurs known.  (Source: Wikimedia Commons)

Long before birds learned to fly, non-dinosaurian reptiles did, and the first pterosaurs flew about 220 mya.  They also had an air sac respiration system.  Although they obviously flew, just how they flew has been controversial.  They were probably warm-blooded, and by the late Cretaceous, pterosaurs became Earth’s largest flying animals ever, with ten-meter wingspans.  Pterosaurs may have been the dinosaurs’ closest relatives.[338] 

The mass extinction at 230 mya coincided with a volcanic event and the initial building of mountains in what became Central Asia.  Ammonoids, bivalves, and other marine denizens were hit hard, and on land it was nearly the final exit for therapsids (cynodonts and dicynodonts), and what would have been the chief diapsid competitor to early sauropods, rhynchosaurs, suddenly went extinct, possibly by losing their food source.  Extinction specialist Michael Benton has argued that the mass extinction at 230 mya was in some ways greater than the end-Triassic extinction, which is considered one of the Big Five extinctions.[339]  The rise of dinosaurs to dominance coincided with the mid-Triassic mass extinction, and mammals first appeared a few million years later.  Although a mass extinction’s “clearing of the slate” may well have given dinosaurs their opportunity, they also left many contemporaries far behind.  Mammals would be rat-like, largely nocturnal fringe dwellers for 160 million years after they first appeared, while dinosaurs ruled Earth.  Stony corals also first appeared after the mid-Triassic extinction, and turtles first appeared about 220 mya.

Although the Triassic was a period of great evolutionary novelty (such as a reptile that was mostly neck), and even called an “explosion” in some corners, when air sac lungs, dinosaurs, mammals, modern corals, and flying and marine reptiles appeared, it was not nearly the boom that mammals enjoyed after the Cretaceous extinction.  GEOCARBSULF shows that oxygen levels were low during the Triassic, rebounding a little from the Permian extinction, and then collapsing to perhaps their lowest level of the entire eon of complex life.  Peter Ward proposed that the low oxygen levels during the Triassic and Jurassic kept dinosaurs from “exploding” as mammals did after the Cretaceous extinction.[340]  GEOCARBSULF’s crash of oxygen levels coincides with the end-Triassic extinction at about 201 mya.  The cause of the end-Triassic mass extinction, as with all other extinction events, is debated today, and climate change and volcanic eruptions are among the primary suspects (the volcanic eruptions spewed “only” hundreds of thousands of cubic kilometers of lava as compared to the Permian’s millions), along with rising and falling sea levels.  GEOCARBSULF’s carbon dioxide values show a spike, which would have caused global warming, as happened during the Permian extinction, and could have triggered methane hydrate vaporization and hydrogen sulfide events.  A recent study makes the similarity explicit between the end-Permian and end-Triassic extinction events, with ominous parallels to current events.[341]  Vented carbon dioxide from volcanic events also made the oceans near shore acidic.  Extensive anoxic events visited the oceans in the late Triassic, particularly along the Tethys’s periphery, and Triassic anoxia formed Southern Iraq’s oldest oil deposits.

The breakup of Pangaea at the Triassic’s end not only initiated volcanic events right in the heart of Pangaea, but the weather systems would have been altered.  In general, the Triassic was a dry period on Pangaea (with some mid-Triassic extinctions possibly related to its becoming wetter on land), and the Jurassic was wetter and had the ubiquitous Mesozoic jungles depicted by Hollywood.

The end-Triassic extinction once again drove ammonoids to the brink, and perhaps only one genus survived.  The reefs that began to recover in the late Triassic were again eradicated and did not reappear until more than 10 million years later.  Bivalves, brachiopods, and gastropods lost about half of their genera.  The marine reptile placodonts, which specialized in eating mollusks, went extinct, and plesiosaurs and ichthyosaurs were the marine apex predators as the Jurassic began.  On land, it was nearly the end for therapsids; afterward, until their final extinction in the early Cretaceous, they were marginal fringe dwellers.  All large terrestrial non-dinosaur archosaurs went extinct and left dinosaurs unchallenged for terrestrial dominance during the Jurassic.

Similar to how reptiles found refuge in the oceans, the crocodile’s ancestors were originally terrestrial archosaurs that found their cooling niche in swampy margins, where crocodiles still live today, even though their cousins (1, 2) went extinct in the end-Triassic event.  Crocodiles have four-chambered hearts like dinosaurs did, which suggests that they may have been endotherms/mesotherms that re-evolved ectothermy to better adapt to swamp life.[342]  Only one superfamily of primitive amphibians long survived the end-Triassic event, and its last surviving member persisted into the Cretaceous in survival enclaves.  It was a giant, at five meters long and 500 kilograms.  Primitive amphibians could not abide the reign of crocodiles, and since the end-Triassic event, amphibians have been almost exclusively modern varieties.  The first salamanders appeared in the late Jurassic, and frogs may have first appeared 100 million years earlier, in the late Permian.  Probably spurred by an arms race with ever-larger dinosaurs, crocodiles became huge, and a Cretaceous species reached twelve meters and eight metric tons; a likely specialty was ambushing drinking sauropods and holding their heads under until they drowned.

Although great mass death resulted from the end-Triassic extinction, dinosaurs emerged virtually unscathed.  Why?  It may have been due to their superior air sac breathing system, which could survive the hot times and record-low oxygen levels of the end-Triassic.[343]  The mammalian lung is pretty good, too, but not nearly as efficient as the saurischian dinosaurs’ air sac system.  Crocodiles have a piston lung like that of mammals, so they also have a superior respiration system.  Mammals rode out the storm in their burrows while crocodile ancestors cooled in the swamps and marine reptiles cooled in the oceans.  Living in burrows, swamps, and other refugia is probably how mammals, crocodiles, and birds survived the end-Cretaceous extinction when non-avian dinosaurs did not.

The end-Triassic event’s final tally was more than 20% of all families, nearly half of all genera, and between 70% and 75% of all species.  Afterward, marine reptiles dominated the oceans, flying reptiles filled the air, crocodile ancestors were the freshwater environment’s apex predators, and dinosaurs reigned in terrestrial environments.

The Jurassic (c. 201 to 145 mya) and Cretaceous (c. 145 to 66 mya) periods spanned the Golden Age of Dinosaurs.  The human fascination with dinosaurs is primarily due to their great size.  They were Earth’s largest land animals ever, by far.  Huge predators hunted even larger herbivores.  Prosauropods, or plateosaurs, were largely bipedal and were the early Jurassic’s dominant herbivorous dinosaurs, but their four-legged descendants, the sauropods, supplanted them by the mid-Jurassic and became Earth’s largest land animals ever.  Some species may have weighed more than 100 metric tons, which would have rivaled the blue whale, generally considered to be the largest animal that ever lived.  The blue whale achieved weight primacy, but the sauropods’ vast dimensions are still awe-inspiring.  Some were up to 60 meters in length and could reach 17 meters tall.  Some of the largest sauropods ever lived in the late Jurassic, when they were most numerous, but huge sauropods were plentiful until the Cretaceous extinction.[344]  A prominent hypothesis is that their tremendous size was a strategy for digesting lower-quality food sources; they could digest food for a longer period as it wound its way through their digestive systems.  Their size also discouraged predation and conserved heat.  But their highly efficient air sac breathing system may have been the main reason why they could get so large, particularly in the record-low-oxygen Jurassic Period, at least according to GEOCARBSULF.

Jurassic sauropods probably subsisted on ferns and the foliage of cycads and conifers, a diet that almost no vertebrates, and few animals of any kind, eat today.  Sauropods had huge guts to ferment those plants.[345]  It would not have been an energy-rich diet.  There has been controversy over whether sauropods could rear up on their hind legs, and over how they held their heads on their long necks, but the idea that they were primarily swamp-dwellers has undergone significant revision.  Today, scientists think that they seem to have sought moist environments but probably did not spend their lives immersed in water.  They were walking grazers and browsers, and their long necks were probably used for browsing trees.[346]

Sauropods seem to have lived in herds and tended their young.  Until relatively recently, the idea of animals as agents of ecosystem change and maintenance was marginal.  But today, sediment burrowing is considered a seminal geophysical event of the Cambrian, and those huge sauropods probably had an ecosystem impact like that of elephants in Africa today.[347]  Elephants break up woods as they feed, knocking over trees and uprooting them.  That damage transforms the biome and provides opportunities for other kinds of herbivores and their predators.  Elephants also create and enlarge water holes, and are considered keystone species, which have an outsized impact on their environment.  Today, there is a “loyal opposition” to the overkill hypothesis regarding megafauna extinctions soon after humans appeared; such people minimize the impact of humans (their position has an inherent conflict of interest, as those scholars and scientists are all humans) and attribute the extinction of all elephants of the Western Hemisphere (north, south) to climate change and resulting changes in vegetation.  If the current situation with African elephants is relevant, it is likelier that those vegetation changes were a result of elephant extinction, not a cause.[348]  Elephant extinctions would have affected many other kinds of plants and animals, and could have precipitated cascade effects.  Similarly, those huge sauropods would not have been relatively harmless browsers that just nibbled at vegetation; their vast bulk would have been ideal for pushing over trees to get at their foliage, and that devastation of trees in particular would have dramatically impacted biomes.  Giant dinosaurs probably had keystone-species impacts on their environments, particularly the vegetation.  Dinosaurs were not the only huge organisms in those days.  The first sequoias appeared in the Jurassic, and would have been immune to dinosaur browsing once they grew large enough.
Below is an artist's conception of a typical Jurassic landscape (just as an allosaur and stegosaur are about to cordially interact).  (Source: Wikimedia Commons)

Ornithischians started slowly and began to become common in the late Jurassic, just when the greatest biological innovation of the past 300 million years began: the appearance of flowering plants, which first bloomed about 160 mya.  Until that time, plant survival strategies centered on avoiding being eaten by animals, whether via bark, height, poisonous foliage, etc.  Flowering plants adopted a different strategy by laying out a banquet for animals.  The primary benefit for plants was spending less energy to reproduce, as well as attracting animals that did not seek to eat the plants and even ended up protecting them.  The advantage for animals was an easily acquired and tasty meal.  It was the greatest direct symbiosis between plants and animals ever, other than plants providing the oxygen that animals breathe, which is inadvertent.  The two primary goals that seed plants must achieve for successful reproduction are fertilization via pollination and placing their seeds where they can become viable offspring (and feces fertilizer could only help).  Flowering plants, also called angiosperms, did not invent animal assistance from whole cloth.  Some Jurassic insects have been found in association with gymnosperm (conifer) cones, and were probably doing the work that the wind previously performed.[349]  Like the enzyme example of a key rattling around in a room, attracting animals to plants, to eat the pollen and nectar, was like a reproductive enzyme: animals carried the key to the lock to initiate reproduction.  Other animals ate the fruit and thereby spread the seeds.  That relationship did not become significant until the mid-Cretaceous.[350]  Angiosperms mature faster and produce more seeds than gymnosperms do.  By the Cretaceous’s end, angiosperms dominated tropical biomes where ferns and cycads used to thrive, and they pushed conifers to the high latitudes, where they remain today.
That tropical dominance is probably related to the insect population, which prefers warm climates.  Angiosperms became Earth’s dominant plants after the end-Cretaceous extinction and comprise more than 90% of plant species today.

There is speculation that dinosaurs invented flowering plants in a coevolutionary dance, as low-browsing ornithischians put pressure on plants to grow and reproduce quickly, and angiosperms are far more effective at those activities than all plants preceding them.[351]  The spread of angiosperms in the mid-Cretaceous coincided with the ornithischians’ rising dominance, and by the end-Cretaceous extinction, they were the most numerous herbivores by far.  Stegosaurs appeared in the late Jurassic and went extinct by the late Cretaceous.

In the late Jurassic, as ornithischians began to become plentiful, a theropod innovation would lead to the only dinosaurs to survive the end-Cretaceous extinction: birds.  As with synapsid sails, stegosaur plates, and a Triceratops’s horns and frill, feathers had a display function as well as thermoregulation, long before they were used to fly.  Ever since scientists realized that dinosaurs were closely related to birds, they have watched for feathers, and have found more than 20 genera of dinosaurs that sported feathers.[352]  That famous Archaeopteryx fossil discovered in 1860-1861 began the speculation that birds evolved from dinosaurs, and was considered one of the first confirmations of Darwin’s theory of evolution.  Today, scientists strongly doubt that Archaeopteryx flew, and it is not considered a direct ancestor of today’s birds.[353]  Feathered dinosaurs existed before Archaeopteryx’s 155 mya appearance, and they are in the clade that led to today’s birds, which first appeared about 160 mya.  Birds probably did not fly much, if at all, until the Cretaceous, and the first beaked birds appeared in the early Cretaceous.

When birds began to fly, their energy requirements skyrocketed.  Today’s bats, for instance, burn several times as many calories as similarly sized non-flying mammals and live several times longer, just as birds live far longer than similarly sized mammals.  Mammalian life expectancy follows a curve in which size, metabolism, and longevity are all closely related.  The general rule is that all mammals have about the same number of heartbeats in a lifetime.  A mouse’s heart beats about 20 times as fast as an elephant’s, and an elephant lives about 20 times as long as a mouse.[354]  Larger bodies mean slower metabolisms, or less energy burned per unit of time per cell.  Birds have the same kind of size/metabolism/life-expectancy curve, but it sits on a higher level than mammals’.  A pigeon lives for about 35 years, or ten times as long as a similarly sized rat.[355]  On average, birds live three to four times as long as similarly sized mammals.
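The heartbeat rule of thumb is easy to check with rough arithmetic.  The sketch below uses illustrative order-of-magnitude figures (a mouse heart at roughly 600 beats per minute over a three-year life, an elephant at roughly 30 beats per minute over 60 years), not measured values:

```python
def lifetime_beats(bpm, years):
    """Total heartbeats over a lifetime, from resting heart rate and lifespan."""
    minutes_per_year = 60 * 24 * 365
    return bpm * minutes_per_year * years

# Rough, illustrative figures only:
mouse = lifetime_beats(bpm=600, years=3)     # fast heart, short life
elephant = lifetime_beats(bpm=30, years=60)  # slow heart, long life

# A 20-fold faster heart paired with a 20-fold shorter life yields
# the same total: roughly a billion beats in each case.
```

With these round numbers the totals come out identical, which is the point of the rule: rate and lifespan trade off, so lifetime heartbeats stay roughly constant across mammals of very different sizes.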

Because of the stupendous energy demands of flight, birds not only have the superior air sac system for breathing, but their mitochondria, the cell’s energy-generation centers, are far more efficient than mammalian mitochondria.  Parrots in captivity can live to be 80, scientists have noted an albatross in the wild reproducing at more than 60, and scientists may discover that wild albatrosses live to be 100 or more, when their tagging programs get that old.  The mitochondrial theory of aging may explain bird longevity, as the efficient mitochondria of birds produced fewer free radicals.[356]  The theory is controversial and will be for many years, but I think that an engine analogy can help.  A bird is a piece of high-performance biological technology, and when operating at peak output it puts all land-bound animals to shame.  But a bird’s metabolism is usually in its slack state, only maximized during flight.  Simply put, a bird has a great energy capacity that is rarely used to its fullest.  It is like a high-performance engine that rarely runs near its redline.  Such engines will last far longer than those regularly running near redline.  High-performance technology that usually “loafs” in its slack state and is rarely taxed is expensive and long-lasting.  The increased investment in superior technology allows for high performance and long life.  High-quality technology is more economical in the long run, if the initial investment can be afforded.

Recognizably modern birds existed by the end of the Cretaceous, and they were the only dinosaurs to survive the end-Cretaceous extinction.  Small pterosaurs called pterodactyls first flew about 150 mya, about the time that birds appeared.  The skies were getting crowded by the late Cretaceous, although birds and pterosaurs seem to have inhabited different niches.  Modern birds survived the end-Cretaceous extinction partly because they found refugia in swampy margins, burrows, and holes in trees, such as those that woodpeckers can create.

Another energy-related activity probably appeared on a large scale during the reign of dinosaurs: territoriality.  Although territoriality can be observed in insects, fish, crustaceans, amphibians, and reptiles today, it is most common among birds and mammals.  Territoriality is primarily about preserving an animal’s energy base from competition, and it is usually a behavior oriented toward others of the same species, which would eat the same food resources and mate with the same potential partners.  Just as what scientists call consciousness seems to have appeared with the earliest animals, territorial behavior may go all the way back to the Cambrian Explosion.  But the social behaviors apparent in dinosaurs probably also meant territorial behavior, and probably on a scale never experienced before on Earth.  Even the suspected display function of synapsid sails implies territorial behavior.  All great apes are territorial, and human political units such as nations are little more than ape territoriality writ large, as peoples protect their energy and mating bases.  In light of the display behaviors common in today’s birds (with their apotheosis in the peacock, although, as usual, there are competing hypotheses), a phenomenon that perhaps goes all the way back to synapsids, along with the discovery of dinosaurian mass nesting sites, herd behaviors, and the like, many scientists believe that dinosaurs were territorial.

In the late Jurassic, armored stegosaurs and ankylosaurs first appeared and used an ornithischian defensive strategy that ceratopsians also developed in the early Cretaceous, which reached its peak with Triceratops in the late Cretaceous.  Today’s rhinoceros is the mammalian equivalent of Triceratops, but today’s rhinos do not have to face anything as fearsome as Tyrannosaurus rex, although the most successful predators in Earth’s history, humans, are driving rhinos to extinction.

The Tethys Ocean was fully formed in the Jurassic, and the continents began to break up in earnest, which led to rising sea levels.  The shallow seas that began to reappear in the Triassic became widespread in the Jurassic as continental shelves were submerged.  The Atlantic Ocean began forming in the Jurassic, as North America, Africa, and South America split, and the world-circling Panthalassic Ocean became the Pacific Ocean about the same time, although that renaming is more a convention among geologists than any dramatic change.  Australia began to split from Antarctica during the Jurassic.  Mountain-building events along the west coast of North America continued unabated, and the Andes Mountains, which began forming in the Triassic, continued their development in the Jurassic.

In the middle Jurassic lived the largest bony fish ever, Leedsichthys, a filter feeder that reached nearly 20 meters in length.  Scientists have long argued over how other leviathans of the Jurassic oceans, such as plesiosaurs, lived, and have proposed several hypotheses to explain the function of their anatomy.

The mid-Jurassic marked the beginning of a 160-million-year period of anoxic events that produced most of Earth’s oil deposits, and they finally ended in the Oligocene.  The anoxia of post-Triassic Mesozoic oceans seems to be at least partly the result of increased runoff from land spurred by volcanic events, combined with warm, stagnant, stratified surface waters.[357]  Low atmospheric oxygen, combined with high nutrient runoff and warm waters that absorb less oxygen than cold water, provided the conditions for those anoxic events, and atmospheric oxygen levels only increased toward modern levels in the Cretaceous.  Also, changing currents (including upwelling, which usually brings nutrients to the surface) and rising sea levels (which can make the seafloor anoxic) may have contributed to the unprecedented and never reproduced anoxia of those times.  Until the current low-oxygen events that humans are inducing, anoxic events, and hence oil formation, have not occurred much during the past 30 million years.[358]

About 183 mya, an extinction event linked to anoxic and volcanic events hit ammonoids hard, as usual.  The extinction seems to have been confined to the oceans.[359]  Along with the appearance of carbonate hardgrounds, reefs slowly recovered in the Jurassic, and by the Jurassic’s end, coral reefs lined Tethyan shores.  Marine animals that tolerated low oxygen proliferated in the Jurassic.  Ammonoids, with their superior respiratory equipment, developed large, thin-shelled varieties that housed the large gills probably required to navigate the Jurassic’s low-oxygen waters.[360]  Also, a different kind of cephalopod, the ancestor of squids, became plentiful in the Jurassic.  The first crabs appeared in the Jurassic, and they also developed a superior respiration system; they put their gills within their armor and developed a pump gill.[361]  As most seashore visitors know, crabs are quite tolerant of exposure to air, much as nautiloids suffer no ill effects when exposed to air for a short time.  Crabs proliferated with the late Jurassic’s reefs, only to collapse with the end-Jurassic reef collapse (called the Tithonian event, or end-Jurassic extinction), which was caused by a sudden drop in sea levels; that extinction again appeared to be largely restricted to marine biomes.[362]  On land, there were extinctions of sauropods, stegosaurs, and advanced ornithopods.[363]

The sea level drop quickly reversed in the early Cretaceous, and the Cretaceous (c. 145 to 66 mya) saw the most dramatic rise in global ocean levels during the eon of complex life.  At the sea level’s peak, the land’s surface area during the Cretaceous was about two-thirds of today’s (18% versus today’s 29% of surface coverage).  By the early Cretaceous, today’s continents were recognizable, and for the first time ever, marked regional differences appeared among the terrestrial animals that inhabited continental biomes.  Sauropods generally stayed in the southern continents, ornithischians came to dominate the northern continents, and theropods also became quite diverse in the late Cretaceous.  The iconic theropod and most famous dinosaur, T-rex, appears to have been solely a North American resident.  Earth’s fossil record for dinosaurs is richest in North America (with China and Mongolia coming in second), so the fossil record may be biased toward northern dinosaurs.[364]  Today, there are only about 100 professional dinosaur paleontologists on Earth; that is not a very large community.  To most six-year-old boys, those scientists won the lottery, as they are paid to study dinosaurs and dig their fossils from the ground.  In T-rex’s northern range, Triceratops was the dominant herbivore, and its confrontations with T-rex may have been Earth’s greatest land battles ever, at least until humans appeared.  In T-rex’s southern range lived North America’s largest dinosaur, a gigantic sauropod.[365]

As land’s surface area shrank, the continents became wetter, as all land became relatively close to the oceans.  The late Jurassic saw a cooling period, the coldest time of the entire Mesozoic, with even some mountainous and polar glaciation, but end-Jurassic volcanism kept carbon dioxide levels high and the climate warmed.  Warm-climate plants lived within 15 degrees of the South Pole during the Cretaceous, and forests grew within five degrees of the poles, which has fascinated scientists as they try to envision a biome that was dark for nearly half the year.[366]  The Cretaceous was generally a hot, wet time on Earth.

India broke away from Gondwana in the early Cretaceous, and Gondwana’s breakup, beginning about 150 mya, is generally considered the birth of the Indian Ocean.  By the Cretaceous’s end, India was alone and swiftly moving toward Southern Asia and a tremendous collision that formed the Himalayan Mountains and Tibetan Plateau.  The Andes were uplifted during the Cretaceous, and mountain-building events (1, 2) continued in western North America.  In the late Cretaceous, the Rocky Mountains began their rise, and the volcanic hotspot that created the mountain chain currently represented by the Hawaiian Islands first appeared.  Also in the late Cretaceous, the Tethys Ocean connected with the Pacific and created a world-circling tropical current, which helped moderate and warm Earth’s weather systems and contributed to anoxic events.  North America’s Great Plains were under a shallow sea in the Cretaceous.

Calcareous plankton appeared in the Mesozoic and required oxygen to form calcium carbonate.  They became so abundant in the high oxygen of the late Cretaceous that the rain of their bodies on ocean floors gave the Cretaceous its name, from creta, the Latin word for chalk.[367]  Calcium carbonate, the primary constituent of limestone, comes in two forms: calcite and aragonite.  The magnesium content of the oceans, as well as the ocean temperature, determines which form of calcium carbonate will dominate.  The Permian extinction also marked the end of a 100-million-year ice age and gave way to about 200 million years of hot times.  During the eon of complex life, Earth has vacillated between icehouse and greenhouse conditions, a pattern that also seems related to supercontinent dynamics.  Hot seas are generally calcite seas and cold seas are usually aragonite seas.  Calcite seas create carbonate hardgrounds, which influence the biomes that form on them.  The Ordovician and Silurian periods had vast carbonate hardgrounds, which disappeared during the Karoo Ice Age and returned in the Greenhouse Earth age of dinosaurs, becoming common in the Jurassic.  Today’s Icehouse Earth has aragonite seas, so organisms that form calcium carbonate shells use aragonite, which is less stable than calcite and whose formation is sensitive to temperature and acidity.  Coral reefs, key phytoplankton (which help produce Earth’s oxygen), and shellfish use aragonite today to form their shells.  There is already strong evidence that the acidification of the oceans due to humanity’s burning of fossil hydrocarbons to power the industrial age is interfering with the ability of corals, carbonate-forming phytoplankton, and shellfish to form their shells.  That is only one of the industrial age’s many deleterious ecosystem impacts.  The aragonite-formation problem is not a theoretical construct of fearful environmentalists, but a measurable impact today.

According to GEOCARBSULF, oxygen levels rose in the Cretaceous and reached nearly modern levels by the end.  But anoxic events also dotted the Cretaceous, probably related to rising sea levels.  The largest bivalve ever lived in the Cretaceous and reached three meters in length.  It was a deep-water species that probably formed symbiotic relationships with chemosynthetic organisms, along with those other low-oxygen Mesozoic bivalves, and it went extinct as oxygen levels rose in the atmosphere and probably also in the seas.[368] 

When sea levels rise as dramatically as they did in the Cretaceous, coral reefs can be buried under the rising waters, losing the ideal position for both photosynthesis and oxygenation, and the reefs can die, much as a tree dies when its roots are buried.  About 125 mya, reefs made by rudist bivalves, which thrived on carbonate hardgrounds, began to displace reefs made by stony corals.  They may have prevailed because they could tolerate hot and saline waters better than stony corals could.  About 116 mya, an extinction event happened, probably caused by volcanism, which temporarily halted rudist domination.  But rudists flourished until the late Cretaceous, when they went extinct, perhaps due to the changing climate, although there is also evidence that the rudists did not go extinct until the end-Cretaceous event.  Carbon dioxide levels steadily fell from the early Cretaceous until today, temperatures fell during the Cretaceous, and hot-climate organisms gradually became extinct as the period progressed.  Around 93 mya, another anoxic event happened, perhaps caused by underwater volcanism, which again seems to have largely been confined to marine biomes.  It was much more devastating than the previous one, and rudists were hit hard, although it was a more regional event.  That event seems to have nearly spelled the end of ichthyosaurs, and a family of competing plesiosaurs also went extinct.  On land, spinosaurs, some of which seem to have specialized in eating fish, also went extinct.  There had been a decline in sauropod and ornithischian diversity before that 93 mya extinction, but it subsequently rebounded.  In the oceans, biomes beyond 60 degrees latitude were barely impacted, while those closer to the equator were devastated, which suggests that oceanic cooling was related.[369]  GEOCARBSULF shows rising oxygen and declining carbon dioxide in the late Cretaceous, which reflected a general cooling trend that began in the mid-Cretaceous.
Among the numerous hypotheses posited, late-Cretaceous climate changes have been invoked as slowly driving dinosaurs to extinction, in the “they went out with a whimper, not a bang” scenario.  However, it seems that dinosaurs did go out with a bang.  A big one.  Ammonoids seem to have been brought to the brink by nearly all marine mass extinctions during their tenure on Earth, and it was no different with that late-Cretaceous extinction.  Ammonoids recovered once again, and their largest species ever lived in the late Cretaceous, but the end-Cretaceous extinction marked their final appearance, as they went the way of trilobites and other iconic animals.

Sauropods were high browsers that ate tree ferns, cycads, and conifers as their staple.  The dramatic radiation of ornithischians in the late Cretaceous coincided with the spread of angiosperms, and ornithischian chewing ability continually improved.  Insects also dramatically diversified, as did birds and mammals, in an epochal instance of coevolution between plants and animals.[370]  Hive insects (bees, wasps, termites, and ants) began their rise when flowering plants did.

Shell-cracking lobsters first appeared in the early Cretaceous.  By the late Cretaceous, mosasaurs had become the dominant marine predators.  Ichthyosaurs went extinct after 150 million years of existence, and plesiosaurs declined.  Those apex predators preyed on squids as large as today’s, and sharks and ray-finned fish always seemed to do well.  Some substantial sharks appeared in the mid-Cretaceous that even preyed on mosasaurs and plesiosaurs.  The largest sea turtles yet recorded lived in the late Cretaceous, at four meters long and two metric tons.

In the 19th century, the Jurassic was called the Golden Age of Dinosaurs, but that moniker is arguably most applicable to the late Cretaceous, and it was a golden age clear up until a bolide impact brought it all to an end.[371]  One of the uglier disputes in paleontology’s history was a race in the late 19th century between two Americans bent on outcompeting each other in finding and describing dinosaur fossils.[372]  However, the dinosaur extinction is probably the largest and most contentious controversy in the history of paleontology.  Again, the subject of mass extinctions was taboo, due to Lyell’s and Darwin’s prevailing uniformitarianism, until my lifetime.  The bolide hypothesis, first proposed in 1980, was a kind of bolide event inflicted on paleontology itself.  Acrimonious disputes ignited that still burn, but it made studying mass extinctions respectable.  Initially attacked and dismissed, the bolide impact hypothesis is by far today’s leading explanation of the end-Cretaceous extinction.[373]  However, at the same time, India was speeding toward its Asian destiny, and its movement is associated with the huge volcanic event that created the Deccan Traps.  Also, sea levels seesawed at the Cretaceous’s end, so the bolide event has some theoretical competition as a causative agent.

It is probably safe to say that if even the end-Cretaceous extinction had multiple causes, then none of the pre-human mass extinctions can be attributed to just one cause.  However, the sudden disappearance of all non-avian dinosaurs, and the pattern of what survived, casts a heavy vote for the bolide hypothesis.  Also, there may have been multiple impacts, similar to how the Shoemaker-Levy 9 comet fragmented before it plowed into Jupiter.  Dinosaurs were all terrestrial and were either herbivores or ate herbivores.  The largest bolide impact obviously hit North America the hardest; T-rex would have been among the first casualties, and the impact would have created an artificial “winter” lasting at least a few months, which might have followed the greatest fires in Earth’s history.  All photosynthetic organisms would have been devastated, as well as the food chains that relied on them.  That alone can explain the end of non-avian dinosaurs, but it also helps explain what survived.  Ammonoids were lightweight versions of nautiloids that lived near the ocean’s surface.  Nautiloids, by contrast, had retreated to deep waters hundreds of millions of years earlier, and they lay eggs in deep water that take a year to hatch.  All ammonoids went extinct in the end-Cretaceous event, which ended a 300-million-year-plus tenure on Earth, and all marine reptiles disappeared, too.  Rudist bivalves were in decline before the extinction, probably related to the sea level changes, but it is looking like they lasted until the bolide event.  Those casualties were all dependent on primary-production food chains that would have been interrupted by the “bolide winter,” for those that survived the initial conflagration, and they all went extinct.
However, a year after the disaster, as the smoke and dust were clearing, out hatched nautiloids that had been safe in their eggs the entire time, and nautiloids are still with us.[374]  Sharks would have feasted on the dead: both aquatic animals and carcasses washed into the oceans by tsunamis.

Most plants produce seeds, which would have largely survived the catastrophe and begun growing when conditions improved.  Ferns came back first, in what is called a fern spike, as ferns are a disaster taxon.  Crocodiles, modern birds (which included ducks at the time), mammals, and amphibians also survived; all could have found refuge in burrows, swamps, and shoreline havens, lived in tree holes and other crevices that they were small enough to hide in, and eaten the catastrophe’s detritus.  In general, freshwater species fared fairly well, especially those that could eat detritus.  Also, the low energy requirements of ectothermic crocodiles would have seen them survive while the mesothermic/endothermic dinosaurs starved.  The primary determinants of survival seem to have been whether a species could find refuge from the initial conflagration and then live on detritus or energy reserves.  While there may have been some evidence of dinosaur decline before the end-Cretaceous extinction (it was gradually growing colder), and the Deccan Traps may have caused at least some local devastation, the complete extinction of non-avian dinosaurs, ammonites, marine reptiles, and others that would have been particularly vulnerable to the bolide event’s aftermath has convinced most dinosaur specialists that the bolide impact alone was sufficient to explain the extinction; no other hypothesis explains the pattern of extinction and survival as the bolide hypothesis does.[375]  In general, the key to surviving the end-Cretaceous extinction was being a marginal species, and all of those on center stage paid the ultimate price.  The end-Cretaceous extinction’s toll was nearly 20% of all families, half of all genera, and about 75% of all species, and it marked the end of an era; the Mesozoic ended and made way for the Age of Mammals, also called the Cenozoic, which used to have the Biblically inspired title of the Tertiary.

With the success of the end-Cretaceous bolide hypothesis, there was a movement in some circles to explain all mass extinctions with bolide events, particularly the Permian extinction.  If bolide events were responsible for all mass extinctions, then the periodic, galactic explanation might still have relevance.  Even though an end-Permian bolide event was unveiled with great fanfare and media attention in 2001, it does not appear to be a valid extinction hypothesis today, and invoking bolide impacts to explain every mass extinction seems to have been a passing fad that has seen its best days.[376]  The oxygen hypothesis for explaining extinctions, evolutionary novelty, and radiations is similarly called a current fashion in some circles, and time will tell how the hypothesis fares, although it seems to have impressive explanatory value.


The Age of Mammals

World map in early-Eocene (c. 50 mya) (Source: Wikimedia Commons) (map with names is here)

World map in early-Miocene (c. 20 mya) (Source: Wikimedia Commons) (map with names is here)

Chapter summary:

As smoke cleared and dust settled, literally, from the cataclysm that ended the dinosaurs’ reign, the few surviving mammals and birds crept from their refuges, seeds and spores grew into plants, and the Cenozoic Era began, which is also called the Age of Mammals, as they have dominated this era.  The Cenozoic’s first period is the Paleogene, which ran from about 66 mya to 23 mya.  As this essay enters the era of most interest to most humans, I will slice the timeline a little finer and use the geological time scale concept of epochs.  The Paleogene’s first epoch is called the Paleocene (c. 66 to 56 mya).

Compared to the recovery from the mass extinctions that ended the Devonian, Permian, and Triassic periods, the recovery from the end-Cretaceous extinction was relatively swift.  The seafloor ecosystem was fully reestablished within two million years.[377]  But the story on land was spectacularly different.  By the Paleocene’s end, ten million years after the end-Cretaceous event, all mammalian orders had appeared in what I will call the “Mammalian Explosion.”  While the fossil record for Paleocene mammals is relatively thin, the Mammalian Explosion is one of the most spectacular evolutionary radiations on record.[378]  Because of its younger age, the Cenozoic Era’s fossil record is generally more complete than those of previous eras.

So far in this essay, mammals have received scant attention, but the mammals’ development before the Cenozoic is important for understanding their rise to dominance.  The therapsids that led to mammals, called cynodonts, first appeared in the late Permian, about 260 mya, and they had key mammalian characteristics.  Their jaws and teeth were markedly different from those of other reptiles; their teeth were specialized for more thorough chewing, which extracts more energy from food, and thorough chewing was likely also a key aspect of ornithischian success more than 100 million years later.  Cynodonts also developed a secondary palate so that they could chew and breathe at the same time, which was more energy efficient.  Cynodonts eventually ceased the reptilian practice of continually growing and shedding teeth, and their specialized and precisely fitted teeth rarely changed.[379]  Mammals replace their teeth a maximum of once.  Along with tooth changes, jawbones changed roles.  Fewer and stronger bones anchored the jaw, which allowed for stronger jaw musculature and led to the mammalian masseter muscle (clench your teeth and you can feel yours).  Bones that had previously anchored the jaw were no longer needed and became the bones of the mammalian middle ear.[380]  The jaw’s rearrangement led to the most auspicious proto-mammalian development: it allowed the braincase to expand.  Mammals had relatively large brains from the very beginning, which was probably initially related to developing a keen sense of smell.  Mammals are the only animals with a cerebral cortex, which eventually led to human intelligence.  As dinosaurian dominance drove mammals to the margins, where they lived underground and emerged to feed at night, mammals needed improved senses to survive; their auditory and olfactory senses heightened, as did their sense of touch.  Increased processing of stimuli required a larger brain, and brains have high energy requirements.
In humans, only livers use more energy than brains.[381]  Cynodonts also had turbinal bones, which suggest that they were warm-blooded.  Soon after the Permian extinction, a cynodont appeared that may have had a diaphragm; it was another respiratory innovation that served it well in those low-oxygen times, functioning like pump gills in aquatic environments.

Further along the evolutionary path, here are two animals (1, 2) that may be direct ancestors of mammals: one herbivorous, the other carnivorous/insectivorous.  They both resembled rats and probably lived in that niche as burrowing, nocturnal feeders.  Mammaliaformes included animals that were probably warm-blooded, had fur, and nursed their young, but laid eggs, like today’s platypus.  Nursing one’s offspring is the defining mammalian trait today, but there has been great controversy over just which mammaliaformes are mammals’ direct ancestors and which one can be called the first mammal.[382]  According to the most commonly accepted definition of a mammal, the first ones appeared in the mid-Triassic, about 225 mya, nearly 20 million years after dinosaurs first appeared.  The only therapsids remaining after a mass extinction at 230 mya were small ones (the largest was dog-sized), including the mammalian clade, and archosaurs dominated all Earthly biomes from that extinction event until the end-Cretaceous extinction.

Dinosaurs fortunately never became as small as typical Mesozoic mammals, or else mammals might have been out-competed into extinction.  Mammals stayed small in the Mesozoic.  The largest Mesozoic mammal yet known was raccoon-size, and its diet included baby dinosaurs.  Dinosaurs returned the favor, and digging up mammals from their burrows to snack on them is known dinosaurian behavior.[383]

The issue of early mammalian thermoregulation is controversial and unsettled; even today, mammals engage in a wide array of thermoregulatory practices.  Today’s primitive mammals have lower metabolic rates than modern ones.  Therapsids did not overcome Carrier’s Constraint as dinosaurs did; they were not high-performance animals.  However, early mammals did not see the Sun, and their larger brains required more energy.  Early mammals probably were endothermic, but the condition may have included regular torpor, when they went into brief “hibernation” phases, and their active body temperature may have been several degrees Celsius lower than that of today’s mammals.  Birds and mammals are often born without endothermy but develop it as they grow.[384]  Mammals solved Carrier’s Constraint when they adopted erect postures in the early Jurassic.[385]

Mammalian reproductive practices separate them into their primary categories.  Some “primitive” mammals still lay eggs.  The first placental mammal appeared about 160 mya, the marsupial split began about 35 million years later, and the first true marsupial appeared about 65 mya.  The marsupial/placental “decision,” as with many other lines of evolution, seems to have been a cost-benefit one rooted in energy.  Marsupials have far less energy invested in their young at birth than placentals do.  Marsupials and birds readily abandon their offspring when hardship strikes.  Placentals have a great deal more invested in giving birth to offspring and are therefore less likely to “cut their losses” as easily as birds and marsupials do.[386]  In certain environments, marsupials had the advantage over placentals.  The earliest known marsupial-line mammal appeared in China 125 mya, and marsupials and placentals co-existed on the fringes.  From there they migrated to North America and then to South America.  About the time of the end-Cretaceous holocaust, South America separated from North America, but South America was still connected to Antarctica.  About 50 mya, marsupials crossed from Antarctica to Australia, perhaps by crossing a narrow sea, and placental mammals died out in Australia, probably outcompeted by marsupials.  Earth’s only egg-laying mammals today live in New Guinea, Australia, and Tasmania.  An entire order of early mammals, which were like marsupial and monotreme rodents, existed for about 120 million years, longer than any other mammalian lineage, to only go extinct in the Oligocene, probably outcompeted by rodents.  They were probably the first mammals to disperse nuts and were probably responsible for a great deal of coevolution between nut trees and animals.[387]  All living marsupials have ancestors from South America.  In North America and Eurasia, marsupials died out, probably outcompeted by placentals.  
Africa was not connected to any of those landmasses during those times and thus never hosted marsupials.  In South America, marsupials and birds were apex predators (1, 2), but a diverse and unique assemblage of placental ungulates flourished in South America during about 60 million years of relative isolation from all other landmasses.

As with the origins of animals, the molecular evidence shows that virtually all major orders of mammals existed before the end-Cretaceous extinction.  The Paleocene’s Mammalian Explosion appears to have been not a genetic event but an ecological one; mammals quickly adapted to the empty niches that non-avian dinosaurs left behind.[388]  The kinds of mammals that appeared in the Paleocene and afterward illustrate the idea that body features and size are conditioned by the environment, which includes other organisms.  With the sauropods’ demise, high grazers of conifers never reappeared, but many mammals developed ornithischian eating habits and many attained similar size.  That phenomenon illustrates the ecological concept of guilds, in which assemblages of vastly different animals can inhabit similar ecological niches.  The guild concept is obvious with the many kinds of animals that formed reefs in the past; the Cambrian, Ordovician, Silurian, Devonian, Permian, Triassic, Jurassic, and Cretaceous reefs all had similarities, particularly in their shape and location, but the organisms comprising them, from reef-forming organisms to reef denizens and the apex predators patrolling them, changed radically during the eon of complex life.  If you squinted and blurred your vision, most of those reefs from different periods would appear strikingly similar, but when you focused, the variation in organisms could be astounding.  The woodpecker guild comprises animals that eat insects living under tree bark.  But in Madagascar, where no woodpeckers live, a lemur fills that niche, with a middle finger that acts as the woodpecker’s bill.  In New Guinea, a marsupial fills that role.  In the Galapagos Islands, a finch uses cactus needles to acquire those insects.  In Australia, cockatoos have filled the niche, but unlike the others, they have not developed a probing body part, nor do they use tools; they just rip off the bark with the brute force of their beaks.[389]

After the dinosaurs, empty niches filled with animals that looked remarkably like dinosaurs, if we squinted.  Most large browsing ornithischians weighed in the five-to-seven metric ton range.  By the late Paleocene, uintatheres appeared in North America and China and attained about rhinoceros size, to be supplanted in the Eocene by larger titanotheres; in Oligocene Eurasia lived the largest land mammals of all time, including the truly dinosaur-sized Paraceratherium.  The largest yet found weighed 16 metric tons and was about five meters tall at the shoulders and eight meters in length.  Even a T-rex might have thought twice before attacking one of those.  It took about 25 million years for land mammals to reach their maximum size, and for the succeeding 40 million years, the maximum size remained fairly constant.[390]  Scientists hypothesize that mammalian growth to dinosaurian size was dependent on energy parameters, including continent size and climate, and cooler climates encouraged larger bodies.

Huge mammals persist to this day, although the spread of humans coincided with the immediate extinction of virtually all large animals except those in Africa and, to a lesser extent, Asia.  The five-to-seven-metric-ton browser formed a guild common to dinosaurs and mammals, and is probably related to metabolic limits and the relatively low calorie density that browsing and foraging affords.[391]  Sometimes, the similarity between dinosaurs and mammals could be eerie, such as that between ankylosaurs and glyptodonts, a startling example of convergent evolution, the process by which distantly related organisms develop similar features to solve similar problems.  They were even about the same size, at least for the most common ankylosaurs, which were about the size of a car.  Ankylosaurs appeared in the early Cretaceous and succeeded all the way to the Cretaceous’s end.  Glyptodonts appeared in the Miocene and prospered for millions of years.

The Cenozoic equivalent of a bolide impact was the arrival of humans, as glyptodonts went extinct with all other large South American megafauna shortly after humans arrived.  The largest endemic South American animals to survive the Great American Interchange of three mya, when North American placentals prevailed over South American marsupials, and the arrival of humans in the Western Hemisphere beginning less than 15 kya, are the capybara and giant anteater, which are tiny compared to their ancient South American brethren.  The giant anteater belongs to the same order as the sloths, which were a particularly South American animal.  The largest sloths were bigger than African bush elephants, which are Earth’s largest land animals today.  After car-sized glyptodonts went extinct, dog-sized giant armadillos became the line’s largest remaining representative.

Among herbivores, mode of digestion was important.  Hindgut fermenters attained the largest size among land mammals; elephants, rhinos, and horses have that digestive process.  Cattle, camels, deer, giraffes, and many other herbivorous mammals are foregut fermenters, and many are ruminants, which have four-chambered stomachs, while the others have three.  While foregut fermenters extract more energy from each mouthful, hindgut fermenters can ingest more food and pass it through quickly, so they gain an advantage when forage is abundant but of low quality: what they lack in efficiency they make up for in volume.  The drawback comes when forage is scarce or nutrient-poor, such as dead vegetation.  A cow, for instance, digests as much as 75% of the protein that it eats, while a horse digests around 25%.  Live grass contains about four times the protein of dead grass.  Cattle can subsist on the dead grass of droughts or hard winters and horses cannot, which was a tradeoff in pastoral societies.[392]
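The cow/horse tradeoff can be sketched with a line of arithmetic.  Only the 4:1 live/dead protein ratio and the 75%/25% digestion efficiencies come from the figures above; the absolute protein unit per kilogram is an arbitrary illustration:

```python
# Illustrative sketch of the foregut/hindgut protein tradeoff.
# Protein contents are in arbitrary units per kg; only the 4:1
# live/dead ratio and the 75%/25% efficiencies come from the text.
PROTEIN_PER_KG = {"live_grass": 4.0, "dead_grass": 1.0}
DIGESTION_EFFICIENCY = {"cow": 0.75, "horse": 0.25}

def protein_extracted(animal, forage, kg_eaten=1.0):
    """Protein units an animal extracts from kg_eaten of forage."""
    return DIGESTION_EFFICIENCY[animal] * PROTEIN_PER_KG[forage] * kg_eaten

for animal in ("cow", "horse"):
    for forage in ("live_grass", "dead_grass"):
        print(f"{animal} on {forage}: "
              f"{protein_extracted(animal, forage):.2f} units/kg")

# A cow extracts 0.75 units/kg even from dead grass -- three times
# what a horse gets from the same forage (0.25), which is why cattle
# can subsist on the dead grass of droughts and hard winters.
```

On these assumed numbers, a cow eating dead grass nearly matches a horse eating live grass, per kilogram consumed; the horse must make up the difference in sheer volume, which it cannot do when forage is scarce.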

Angiosperms began overtaking gymnosperms in the early Cenozoic, but it did not happen immediately.  In Paleocene coal beds laid down in today’s Wyoming, gymnosperms still dominated the swamps, and the undergrowth was comprised mainly of ferns and horsetails.[393]  But angiosperms were on their way to dominance, and mammals, birds, and insects began major adaptations to them.

The present consensus is that primates appeared in the late Cretaceous between 85 mya and 65 mya, perhaps in China, but the earliest known primate fossils are from the late Paleocene, around 55 mya, and were found in Northern Africa.  The first primates were tree-dwellers that ate insects, nectar, seeds, and fruit.  Their eyes point forward (they rely on sight more than on other senses, and have pronounced binocular vision), and most have opposable digits on their hands and feet, which are ideal for canopy living.  Primates generally have larger brains than other mammals, which may have developed as they came to rely less on smell and more on sight, and needed to process the stimuli of binocular vision.  That change assisted the increase in intelligence that characterizes primates.  Lemurs diverged early in the primate line and rafted over to the newly isolated Madagascar in the early Eocene.  Lemurs were Madagascar’s only primates until humans arrived about two thousand years ago (and the largest lemurs, which were gorilla-sized, immediately went extinct).  A rodent-like sister group to primates that lived in North America and Europe went extinct in the Paleocene, as did many early mammalian lines.  In general, Paleocene mammals had relatively small brains, and many from that epoch are called “primitive,” although that did not necessarily mean functionally primitive when compared to modern mammals.  However, evolutionary “progress” is a legitimate concept.  The energy efficiency of ray-finned fish is probably responsible for their success, and the change from “primitive” to “modern” was usually related to the energy issue.  Evolutionary progress is an unfashionable concept in some scientific circles, but it is a clear trend over life’s history on Earth, and can be quite obvious during the eon of complex life.[394]

Paleocene mammals were rarely apex predators.  Crocodilians survived the end-Cretaceous extinction and remained dominant in freshwater environments, although turtles lived in their golden age in the Paleocene Americas and might have even become apex predators for a brief time.  The largest snakes ever recorded (1, 2) lived in the Paleocene and could swallow crocodiles whole.  In addition to birds' being among South America’s apex predators, a huge flightless bird thrived in North America and Europe and survived to the mid-Eocene, although the evidence today strongly suggests that it was herbivorous.  When the Great American Interchange began three mya, one of those flightless South American birds quickly became a successful North American predator.

People are usually surprised to hear that grass is a relatively recent plant innovation.  Grasses are angiosperms and only became common in the late Cretaceous, along with flowering plants.  With grass, some dinosaurs learned to graze, and grazers have been plentiful Cenozoic herbivores.  According to GEOCARBSULF, carbon dioxide levels have been falling nearly continuously for the past 100-150 million years.  Not only has that decline progressively cooled Earth to the point where we live in an ice age today, but carbon starvation is currently considered the key reason why complex life may become extinct on Earth in several hundred million years.  In the Oligocene, between 32 mya and 25 mya, some plants developed a new form of carbon fixation during photosynthesis known as C4 carbon fixation, which allowed them to adapt to reduced atmospheric carbon dioxide levels.  C4 plants became ecologically prevalent about 6-7 mya in the Miocene; grasses are today’s most common C4 plants and comprise more than 60% of all C4 species.  The rest of Earth’s photosynthesizers use C3 carbon fixation or CAM photosynthesis, which is a water-conserving process used in arid biomes.

In Paleocene oceans, sharks filled the empty niches left by aquatic reptiles, but it took coral reefs ten million years to begin to recover, as usual.  As Africa and India moved northward, the Tethys Ocean shrank, and in the late Paleocene and early Eocene, one of the last Tethyan anoxic events laid down Middle East oil.  The last Paleocene climate event is called the Paleocene-Eocene Thermal Maximum (“PETM”).  The PETM has been the focus of a great deal of recent research because of its parallels to today’s industrial era, in which carbon dioxide and other greenhouse gases are massively vented to the atmosphere, warming it and acidifying the oceans.  Seafloor communities suffered a mass extinction, and the PETM’s causes are uncertain, but the release of methane hydrates when the global ocean warmed sufficiently is a prominent hypothesis.  Scientists also look to the usual suspects of volcanism, changes in oceanic circulation, and a bolide impact.

The PETM, according to carbon isotope excursions, “only” lasted about 120-170 thousand years.  The early Eocene, which followed the PETM (the Eocene as a whole ran from c. 56 to 34 mya), is also known as one of Earth’s Golden Ages of Life.  It has also been called a Golden Age of Mammals, but all life on Earth thrived then.  In 1912, the doomed Scott Expedition spent a day collecting Antarctic fossils and still had them a month later when the entire team died in a blizzard.  The fossils were recovered and examined in London, and they surprisingly yielded evidence that tropical forests had once existed near the South Pole.  They were Permian plants.  That was not long after Wegener first proposed his continental drift hypothesis, and generations before orthodoxy accepted Wegener’s idea.  Antarctica has rarely strayed far from the South Pole during the past 500 million years, so the fossils really did represent polar forests.  A generation before the Scott Expedition’s Antarctic fossils were discovered, scientists had been finding similar evidence of polar forests in the Arctic, within several hundred kilometers of the North Pole, on Ellesmere Island and Greenland.  Those Arctic finds were Cretaceous plants, much younger than the Permian ones.[395]

Polar forests reappeared in the Eocene after the PETM, and the Eocene’s first ten million years was the Cenozoic’s warmest time, even warmer than the dinosaurian heyday.[396]  Not only did alligators live near the North Pole, but the continents and oceans hosted an abundance and diversity of life that Earth may not have seen before or since.  That ten-million-year period ended as Earth began cooling off and headed toward the current ice age, and it has been called the original Paradise Lost.[397]  One way that methane has been implicated in those hot times involves stomata, the pores through which leaves take in the carbon dioxide and oxygen needed for photosynthesis and respiration.  Plants also lose water vapor through their stomata, so balancing gas intake against water loss is a key stomatal function, and in periods of high carbon dioxide concentration, plants tend to have fewer stomata.  Scientists can count stomata density in fossil leaves, and such counts led some scientists to conclude that carbon dioxide levels were not high enough to produce the PETM, so methane became a candidate greenhouse gas for producing the PETM and Eocene Optimum, and the controversy and research continue.[398]

However the hot times were created and sustained, Earth’s life reveled in the conditions.  Similar to reptiles' beating the heat and migrating into the oceans, some mammals did the same thing about 200 million years later, and cetaceans appeared.  Scientists were surprised when molecular studies found that whales share a common ancestor with even-toed ungulates, and the hippopotamus is the closest living relative to whales.[399]  Whales evolved in and near India, beginning about 50 mya, when the earliest “whale” surely did not resemble one and lived near water.  By 49 mya, whales could walk or swim.  A few million years later they resembled amphibians, and by 41 mya they became fully aquatic, for a transition from land to sea that “only” took eight million years.[400]  Whales quickly became dominant marine predators.  However, sharks did not go quietly and began an arms race with whales, which culminated 28 mya in C. megalodon, the most fearsome marine predator ever: a shark reaching nearly 20 meters in length and weighing 50 metric tons.  It could have swallowed a great white shark whole, as seen below (C. megalodon in gray, great white shark in green, and next to that is a man taking a break in C. megalodon's mouth).  (Source: Wikimedia Commons)

C. megalodon preyed on whales and had the greatest bite force in Earth’s history (although some estimates of T-rex bite strength equal it).  C. megalodon went extinct less than two mya, due to the current ice age’s vagaries.

Because of early Eocene Arctic forests, animals moved freely between Asia, Europe, Greenland, and North America, which were all nearly connected around the North Pole, and great mammalian radiations occurred in the early Eocene.  Many familiar mammals first appeared by the mid-Eocene, such as modern rodents, elephants, bats, and horses.  The earliest monkeys may have first appeared in Asia and migrated to India, Africa, and the Americas.  Europe was not yet connected with Asia, however, as the Turgai Strait separated them.  Modern observers might be startled to know where many animals originated.  Camels evolved in North America and lived there for more than 40 million years, until humans arrived.  Their only surviving descendants in the Western Hemisphere are llamas.  As with lemurs migrating to Madagascar from Africa, or marsupials to Australia via Antarctica, or monkeys migrating from Africa to the Americas, or Eocene mammalian migrations via polar routes, the migrants often involuntarily “sailed” on vegetation mats that crossed relatively short gaps between the continents.  Such a migration depended on fortuitous prevailing currents and other factors, but it happened often enough.

Several of the Eocene’s geologic events had long-lasting impacts.  About 50 mya, the plates under India and Southern Asia began their epic collision and started creating the Himalayas, and Australia split from Antarctica.  The collisions of the African, Arabian, and Indian plates with the Eurasian plate created the mountain ranges that stretch from Western Europe to New Guinea.  After the Pacific Ring of Fire, it is the world’s most seismically active region.  Those colliding plates eventually squeezed the Tethys Ocean out of existence.  That event ended more than 500 million years of Tethyan sedimentation, which began with the Proto-Tethys Ocean in the Ediacaran, continued with the Paleo-Tethys Ocean in the Ordovician, and finished with the Tethys Ocean, which appeared in the late Permian.  The Tethys Ocean existed through the entire Mesozoic and finally vanished less than six mya, at the Miocene’s end.[401]  Most of the world’s oil formed in the sediments of those Tethyan oceans, and very little has formed since the Oligocene.

The process of transforming anoxic sediments into oil requires millions of years.  When organic sediments are buried, most of the oxygen, nitrogen, hydrogen, and sulfur of the dead organisms is released, leaving behind carbon and some hydrogen in a substance called kerogen, in a process that is like photosynthesis in reverse.  Plate tectonics can subduct sediments, particularly where oceanic plates meet continental plates.  There is an “oil window” roughly between 2,000 and 5,000 meters deep; if kerogen-rich sediments are buried at those depths for long enough (millions of years), geological processes (which produce high temperature and pressure) break down complex organic molecules, and the result is the hydrocarbons that comprise petroleum.  If organic sediments never get that deep, they remain kerogen.  If they are subducted deeper than that for long enough, all carbon-carbon bonds are broken and the result is methane, which is also called natural gas.  Today, the geological processes that make oil can be reproduced in industrial settings that turn organic matter into oil in a matter of hours.  Many hydrocarbon sources touted today as replacements for conventional oil were never in the oil window, so they were never “refined” into oil and remain kerogen.  The so-called oil shales and oil sands are made of kerogen (bitumen is soluble kerogen).  It takes a great deal of energy to refine kerogen into oil, which is why kerogen is an inferior energy resource.  Nearly a century ago, in East Texas oil fields, it took less than one barrel of oil’s energy to produce one hundred barrels, for an energy return on investment (“EROI” or “EROEI”) of more than 100, in the Golden Age of Oil.  Global EROI is declining fast and will fall to about 10 by 2020.  The EROIs of those oil shales and oil sands are less than five and as low as two.
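The practical meaning of a declining EROI can be shown with one line of arithmetic: of each unit of energy produced, 1/EROI had to be spent producing it, so the fraction left for society is 1 − 1/EROI.  Falling from an EROI of 100 to one of 2 is therefore not a 50-fold decline in usefulness but a collapse from 99% net energy to 50%.  A minimal sketch, using the EROI figures quoted above:

```python
# Net energy fraction for a given EROI: of each unit of energy
# produced, 1/EROI had to be spent producing it.
def net_energy_fraction(eroi):
    return 1.0 - 1.0 / eroi

# EROI figures quoted in the text: East Texas in the Golden Age of
# Oil, the projected global figure for 2020, and kerogen resources
# ("oil shales" and oil sands).
for label, eroi in [("East Texas, ~100:1", 100.0),
                    ("global, ~10:1", 10.0),
                    ("kerogen, ~5:1", 5.0),
                    ("kerogen, ~2:1", 2.0)]:
    print(f"{label}: {net_energy_fraction(eroi):.0%} net energy to society")
```

The nonlinearity is the point: between 100:1 and 10:1 society barely notices, but each further step down the "net energy cliff" consumes a rapidly growing share of gross production.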

During the early Eocene’s Golden Age of Life, forests blanketed virtually all lands all the way to the poles, modern orders of most mammals appeared, today’s largest order of sharks appeared, and coral reefs again appeared beyond 50 degrees latitude.  Many of the era’s animals would appear bizarre today.  One crocodile developed hooves, and there was an order of hooved mammalian predators, including the largest terrestrial mammalian predator/scavenger ever, which looked like a giant wolf with hooves.  The ancestors of modern carnivores began displacing those primitive predatory mammals in the Eocene, after starting out small.  A family of predatory placentals called bear dogs lived from the mid-Eocene until less than two mya.  Rhino-sized uintatheres and their bigger cousins the brontotheres were the Eocene’s dominant herbivores in North America and Asia.  Primates flourished in the tropical canopies of Africa, Europe, Asia, and North America.  Deserts are largely an Icehouse Earth phenomenon; during previous Greenhouse Earths, virtually all lands were warm and moist.  Australia was not a desert in the early Eocene, but was largely covered by rainforests.  It must have been a marsupial paradise, as Antarctica and South America would also have been, but the fossil record is currently thin, as rainforests are poor fossil preservers.

In the late Cretaceous, about 75 mya, New Zealand split from Gondwana, and by the end-Cretaceous event it, Madagascar, and India were alone in the oceans.  Madagascar was close enough to Africa for lemurs to migrate to it, but the only animals that repopulated New Zealand’s lands after the end-Cretaceous holocaust were those that flew.  From the end-Cretaceous event until the Maoris arrived around 1250-1300 CE (CE stands for “Common Era,” formerly designated with AD), birds were New Zealand’s dominant animals and had no rivals.  The only mammals were a few species of bat that migrated there in the Oligocene.  A recent finding of a mouse-sized mammal fossil shows that some land mammals, possibly Mesozoic survivors unrelated to any living mammals, lived in New Zealand long ago, but they died out many millions of years ago.  A few small reptiles and amphibians also lived there, and even a crocodile that died out in the Miocene, but New Zealand, unlike any other major landmass in Earth’s history, was the realm of birds.  The Maoris encountered giant birds; ecological niches filled by mammals elsewhere were filled by birds, and gigantic moas were the equivalent of mammalian browsers.  Before the arrival of humans, moas were preyed upon only by the largest eagle ever.  Of all ecosystems that would appear strange to modern eyes, New Zealand’s pre-human ecosystem has been perhaps the most beguiling to me, perhaps because it still existed less than a millennium ago.  It seemed like something that sprang from Dr. Seuss’s imagination.  The Seuss-like kiwi is one of the few surviving specialized birds of that time.  The Maoris drove all moas to extinction in less than a century and quickly destroyed about half of New Zealand’s forests via burning.

For several million years, life in the Eocene was halcyonic, and at 50 mya, the Greenhouse Earth state had prevailed ever since the end-Permian extinction 250 mya.  But just as whales began invading the oceans 49 mya, Earth began cooling off.  The ultimate reason was atmospheric carbon dioxide levels that had been steadily declining for tens of millions of years.  The intense volcanism of the previous 200 million years waned, and the carbon cycle inexorably sequestered carbon into Earth’s crust and mantle.  While falling carbon dioxide levels were the ultimate cause, the first proximate cause was probably the isolation of Antarctica at the South Pole and changes in global ocean currents.  During the early Eocene, the global ocean floor’s water temperature was about 13°C (55°F), warm enough to swim in, which was a far cry from today’s near-freezing and below-freezing temperatures.  The North Sea was warm as bathwater.  Radical current changes accompanied the PETM of about 56 mya, warming the ocean floor, and perhaps that boiled off the methane hydrates.  Whatever the causes were, the oceans were warm from top to bottom, from pole to pole.  But between 50 and 45 mya, Australia made its final split from Antarctica and moved northward, India began crashing into Asia and cut off the Tethys Ocean and the global tropical circulation, and South America also moved northward, away from Antarctica.  Although the debate is still fierce over the cooling’s exact causes, the evidence (much of it from oxygen isotope analyses) is that the oceans cooled very consistently over the next 12 million years, with a brief small reversal at about 40 mya.[402]  By 37-38 mya, the 200-million-year-plus Greenhouse Earth phase ended and the transition to today’s ice age was underway.  In the late Eocene, as the trend toward Icehouse Earth conditions began, deserts such as the Saharan, South African, and Australian formed.
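The oxygen isotope method mentioned above rests on the fact that the ratio of oxygen-18 to oxygen-16 in the calcite of marine shells varies with the temperature of the water in which the calcite formed: colder water yields isotopically heavier shells.  A classic calibration from Epstein and colleagues in the 1950s can sketch the idea (later workers use slightly different coefficients, and the seawater value must itself be estimated, so treat this as an illustration of the method, not the calibration paleoclimatologists use today):

```python
# Classic calcite-water paleotemperature equation (Epstein et al., 1953):
#   T (deg C) = 16.5 - 4.3*(dc - dw) + 0.14*(dc - dw)**2
# where dc is the delta-O-18 of the shell calcite and dw that of the
# seawater it grew in (both in per mil against a common standard).
def paleotemperature_c(delta_calcite, delta_water=0.0):
    d = delta_calcite - delta_water
    return 16.5 - 4.3 * d + 0.14 * d ** 2

# Heavier shells (higher delta-O-18) record colder water:
for dc in (-2.0, 0.0, 2.0):
    print(f"delta-O-18 = {dc:+.1f} per mil -> "
          f"{paleotemperature_c(dc):.1f} deg C")
```

Measuring this ratio down a sediment core thus yields a temperature history, which is how the 12-million-year cooling trend and its brief reversal were reconstructed.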

That cooling caused the greatest mass extinction of the entire Cenozoic Era, at least until today’s incipient Sixth Mass Extinction.  With continents now scattered across Earth’s surface, there was no event that wiped nearly everything out as the end-Permian extinction did, nor were bolide events convincingly implicated.  But mass extinctions punctuated a 12-million-year period when Earth’s global ocean and surface temperatures steadily declined.  When it was finished, there were no more polar forests, no more alligators in Greenland or palm trees in Alaska, and Antarctica was developing its ice sheets.  A few million years later, another mass extinction event in Europe marked the Eocene’s end and the Oligocene’s beginning, but the middle-Eocene extinctions were more significant.[403]  All in all, there was about a 14-million-year period of cooling and extinction, which encompassed the mid-Eocene to early Oligocene, and Icehouse Earth conditions reappeared after a more-than-200-million-year hiatus.[404]

The Oligocene Epoch (c. 34 to 23 mya) was relatively cold.  In the 1960s, a global effort was launched to drill deep sea cores, the Glomar Challenger recovered nearly 20,000 cores from Earth’s oceans, and scientists had paradigm-shift learning experiences from studying those cores.  One finding was that Antarctica developed its ice sheets far earlier than previously supposed, and the cores pushed back the initial ice sheet formation by 20 million years, to about 34-35 mya; the first Antarctic glaciers formed as early as 49 mya.  The evidence included dropstones in Southern Ocean sediments, which meant icebergs.[405]  The event that led to Antarctic ice sheets was the formation of the Antarctic Circumpolar Current, which began forming about 40 mya and was firmly established by 34 mya, when the Antarctic ice sheets grew in earnest.  The current’s formation was caused by Antarctica’s increasing isolation from Australia and South America, which gradually allowed an uninterrupted current to form that circled Antarctica and isolated it so that it no longer received tropical currents.  That situation eventually turned Antarctica into the big sheet of ice that it is today.  It also radically changed global oceanic currents.  Antarctic Bottom Water formed, which cooled the oceans and oxygenated their depths, and it comprises more than half of the water in today’s oceans.  North Atlantic Deep Water began forming around the same time.[406]

Those oceanic changes profoundly impacted Earth’s ecosystems.  Not only did most warm-climate species go extinct, at least locally, but new species appeared that were adapted to the new environment.  Early whales all died out about 35 mya and were replaced by whales adapted to the new oceanic ecosystems that are still with us today: toothed whales, which include dolphins, orcas, and porpoises; and baleen whales, which adapted to the rich plankton blooms caused by upwellings of the new circulation, particularly in the Southern Ocean.[407]  Sharks adapted to the new whales, which culminated with C. megalodon in the Oligocene.  With the land bridges and small seas between the northern continents unavailable in colder times, the easy travel between those continents that characterized the Eocene’s warm times ended and the continents began developing endemic ecosystems.  Europe became isolated from all other continents by the mid-Eocene and developed its own peculiar fauna.  At the Oligocene’s beginning, the Turgai Strait was no longer a barrier between Europe and Asia.  More cosmopolitan Asian mammals replaced provincial European mammals, although whether that happened through competition, an extinction event, or other causes is still debated; competition is favored.  About half of European mammalian genera went extinct, replaced by immigrants from Asia, and some from North America via Asia.[408]

Africa was also isolated from other continents during those times and developed its own unique fauna.  The first proboscideans evolved in Africa about 60 mya, Africa remained their evolutionary home, and the one leading to today’s elephants lived in Africa in the mid-Oligocene.  Hyraxes are relatives of elephants; they have never strayed far from their initial home in Africa, and they were Africa’s dominant herbivores for many millions of years, beginning in the Oligocene.  Some reached horse size, and a close relative looked very much like a rhino and reached rhino size.  The rhinoceros line itself seems to have begun in North America in the early Eocene, and rhinos did not reach Africa until the Miocene.

But the African Oligocene event of most interest to most humans was African primate evolution.  By the Eocene’s end, primates were extinct in Europe and North America, and largely gone in Asia.  Africa became the Oligocene's refuge for primates as they lived in the remaining rainforest.  The first animals that we would call monkeys evolved in the late Eocene, and what appears to be a direct ancestor of Old World monkeys and apes appeared in Africa at the Oligocene’s beginning, about 35-33 mya.  But ancestral to that creature was one that also led to those that migrated to South America, probably via vegetation rafts (with perhaps a land bridge helping), around the same time.  Those South American monkeys are known as New World monkeys today and they evolved in isolation for more than 30 million years.  Among those that stayed behind in Africa, the first apes appeared around the same time that those New World monkeys migrated, diverging from Old World monkeys.  Scientists today think that the splits between those three lineages happened somewhere between about 35 mya and 29 mya.  Old World and New World monkeys have not changed much in the intervening years, but apes sure have.

The size issue is dominant in evolutionary inquiries, and scientists have found that in Greenhouse Earth conditions, animal size is relatively evenly distributed, and all niches are taken.  When Icehouse Earth conditions prevail, the cooling and drying encourages some animal sizes and not others, and mid-sized animals suffer, such as those early primates.  That may be why primates went extinct outside the tropics in the late Eocene.[409]  Tropical canopies are rich in leaves, nectar, flowers, fruit, seeds, and insects, while temperate canopies are not, particularly in winter.  Large herbivores lost a great deal of diversity in late-Eocene cooling, but the survivors were gigantic, and the largest land mammal ever thundered across Eurasia in the Oligocene.  Mid-sized species were rare in that guild.[410]

The earliest bears appeared in North America in the late Eocene and early Oligocene, and raccoons first appeared in Europe in the late Oligocene.  It might be amusing to consider, but cats and dogs are close cousins, and a common ancestor lived about 50 mya.  Canines first appeared in the early Oligocene in North America about 34 mya, and felines first appeared in Eurasia in the late Oligocene about 25 mya.  Beavers appeared in North America and Europe in the late Eocene and early Oligocene, and the first deer in Europe in the Oligocene.  The common ancestor of today’s sloths lived in the late Eocene; South American giant ground sloths appeared in the late Oligocene.  The kangaroo family may have begun in the Oligocene.  The horse was adapting and growing in North America in the Oligocene.  By the late Eocene, the pig and cattle suborders had appeared, and squirrels had appeared in North America.

In summary, numerous mammals appeared by the Oligocene that resemble their modern descendants.  They were all adapted to the colder, drier Icehouse Earth conditions, the poorer-quality forage, and the food chains that depended on them.  In subsequent epochs, conditions warmed and cooled, ice sheets advanced and retreated, and deserts, grasslands, woodlands, rainforests, and tundra grew and shrank, but with a few notable exceptions, Earth’s basic flora and fauna have not significantly changed during the past 30 million years.

The Oligocene ended with a sudden global warming that continued into the Miocene Epoch (c. 23 to 5.3 mya).  The Miocene was also the first epoch of the Neogene Period (c. 23 to 2.6 mya).  Although the Miocene was nowhere near as warm as the Eocene Optimum, England had palm trees again, Antarctic ice sheets melted, and oceans rose.  The Miocene is also called the Golden Age of Mammals.  Scientists still wrestle with why Earth’s temperature increased in the late Oligocene, but there is no doubt that it did.  As the study of ice ages has demonstrated, many dynamics impact Earth’s climate, and positive and negative feedbacks can produce dramatic changes.  During that several-million-year warm period, carbon dioxide levels do not appear to have been elevated.  That data has been seized on by Global Warming skeptics as evidence that carbon dioxide levels have nothing to do with Earth’s temperature, but climate scientists not funded by the Hydrocarbon Lobby rarely think that way.  Carbon dioxide is only one greenhouse gas, and water is more important.  But as clouds demonstrate, water is notoriously ephemeral, constantly evaporating and precipitating, and some land gets a lot of it (rainforests) while some gets very little (deserts).  Icehouse Earth temperatures are more variable than Greenhouse Earth temperatures, particularly during the transitions between states, and an Icehouse Earth atmosphere contains less water vapor than a Greenhouse Earth atmosphere.

In recent years, Neogene temperatures have been the focus of intensive research.[411]  What appears to be the proximate cause of elevated temperatures was a dramatic change in global ocean currents.  The final closing of the Tethys Ocean, the isolation of Antarctica, the creation of that vast arc of Eurasian mountains, and the opening and closing of land bridges, such as in the Bering Sea and ultimately the land bridge between North and South America, created dramatic changes in ocean currents and global climate.  One result was fluctuating Antarctic Bottom Water.  Its production declined beginning about 24 mya, and its weakness lasted until about 14 mya.  Consequently, Earth’s oceans were not stratified as they are today, and warm water extended far lower into the oceans than it does today.  That weaker stratification also reduced the temperature gradient between the equator and the poles, which drives global currents: the greater the differential, the more vigorous the currents.  It was still an Icehouse Earth, but the “mid-Miocene climatic optimum” was relatively warm.[412]  The past three million years are the coldest that Earth has seen since the Karoo Ice Age that ended 260 mya, but this Icehouse Earth phase began developing in the mid-Eocene.  While the steadily declining carbon dioxide levels of the past 150-100 million years are the ultimate cause of this Icehouse Earth phase, relatively short-term and regional fluctuations have had their proximate causes rooted in other geophysical, geochemical, and celestial dynamics.

Whatever the causes were, the early Miocene was warm, and as with Eocene migrations around the North Pole, migrating in the Arctic became easy again, and North America was invaded by Eurasian animals migrating across Beringia.  The prominent Menoceras descended from Asian migrants, and the strange-looking Moropus, another Asian immigrant, had claws on its forefeet like a sloth’s.[413]  Pronghorns also migrated from Asia, and the first true cat in North America arrived.  Those North American days saw the last of a pig-like omnivore that was rhino-sized.  A giraffe-like camel lived then, and the first true equines appeared in the early Miocene and migrated to Asia from North America.  The general Oligocene cooling gave rise to tough, gritty plants, and deer, antelope, elephants, rodents, horses, camels, rhinos, and others developed hypsodont teeth, which had greatly expanded enamel surfaces for grinding those plants.[414]  Carnivores also migrated from Asia, such as an early bear, an early weasel, and bear dogs.  North America’s rodents and rabbits, which have a common ancestor from what became Eurasia, continued to diversify.  Later in the Miocene’s warm period, the trickle of Asian immigrants became a flood that included a giant bear dog that weighed up to 600 kilograms (1,300 pounds) and two large groups of immigrant rhinos, Teleoceras and several genera of aceratherine rhinos, which displaced endemic ones.  In a late-Pliocene count of North American mammalian genera, a third were not native to North America.[415]  But North American fauna was unscathed compared to other continents.  Below is an artist's conception of Miocene North America.  (Source: public domain from Wikipedia)

The invasion of North America from Asia (with a little migration from North America to Asia), while important, was not as dramatic as what happened in Africa a few million years later.  About 24 mya, Africa and the attached Arabian Peninsula began colliding with Eurasia.  The once-vast Tethys Ocean had finally been reduced to a strait between the continents, and one of Earth’s most dramatic mammalian migrations began.  By about 18 mya, proboscidean gomphotheres had migrated from Africa and they reached North America by 16.5 mya.  An elephant ancestor left Africa but stayed in Asia.  As with the North American interchange with Asia, however, the greater change came the other way.  Rodents, deer, cattle, antelope, pigs, rhinos, giraffes, dogs, hyenas, and cats came over, along with small insectivores and shrews.  Most of the iconic large fauna of today’s African plains originated from elsewhere, particularly Asia.[416]  Asian animals invaded and dominated Europe and Africa, and became abundant in North America.  In general, Asia had more diverse biomes and was the largest continent, so it developed the most competitive animals.  That principle, which Darwin remarked on, became very evident when the British invaded Australia in the 18th century: imports such as rabbits and foxes quickly prevailed, and endemic species were quickly driven to extinction.  The most important Miocene development for humans was African primate development, but that is a subject for a later chapter.

What seems to explain invader and endemic success with those migrations is what kind of continent the invaders came from, what kind of continent they invaded, and the invasion route.  Asia contains large arctic and tropical biomes, unlike any other continent.  North America barely reaches the tropics and only a finger of South America reaches high latitudes, and well short of what would be called arctic latitudes in North America.  Africa’s biomes were all tropical and near-tropical.  The route to Europe from Asia in the late Oligocene was straight across at the same latitude, so the biomes were similar.  About the same is true of the route to Africa from Asia.  Asian immigrants were not migrating to climates much different from what they left.  But the route to North America was via Beringia, which was an Arctic route.  Primates and other tropical animals could not migrate from Asia to North America via Beringia, and even fauna from temperate climates were not going to make that journey, not in Icehouse Earth conditions.  Oligocene North America was geographically protected in ways that Oligocene Europe and Africa were not; it had already had substantial exchanges with Asia, and it was a big continent with diverse biomes in its own right.  It was not nearly as isolated as Africa, South America, and Australia were.

South America’s animals continued to evolve in isolation, and some huge ones appeared.  In the Miocene, the largest flying bird ever known flew in South American skies; it looked like a giant condor, had a seven-meter wingspan, and weighed 70 kilograms.  The largest turtle ever lived in South America in the late Miocene and early Pliocene.  Glyptodonts first appeared, as well as a rhino-sized sloth, and some large browsers and grazers inhabited the large herbivore guild and looked like guild members on other continents, in another instance of convergent evolution.  In Australia, the Miocene fossil record is thin, but recent findings demonstrate that all Miocene mammals were marsupials, except for bats.  Kangaroos diversified into different niches; some were rat-sized and others became carnivorous.  Giant wombats foraged in the Miocene, and marsupial lions first appeared in the Oligocene, kept growing over the epochs, and were lion-sized when humans arrived about 50 kya.  Giant flightless birds also roamed Australia, as they still did in South America, although just how carnivorous some may have been is debated.

In the oceans, the Miocene warm period meant expanding reefs, and tropical conditions again visited high latitudes, but not to the early Eocene’s extent.  Corals, mollusks, echinoids, and bryozoans all expanded and diversified in the warm period.[417]  Also, the first appearance of the closest thing to marine forests was in the Miocene, when kelp developed about 20 mya.  Kelp forest denizens such as seals and the ancestors of sea otters also appeared in the Miocene.  Seals are closely related to bears and to otters, which belong to the family that includes weasels.  Whales radiated in the warm Miocene oceans, and C. megalodon was not far behind.  The first rorquals appeared in the Miocene, and they specialized in eating polar krill.  They were the last whales hunted nearly to extinction by humans, after all other species had been decimated.  Rorquals were fast swimmers, and hunting them was not feasible until whaling became industrialized.

For 10 million years, Earth’s ecosystems readily adapted to the warmer temperatures, but Greenland began to grow its ice sheet about 18 mya, and by 14 mya the party was over and a steady cooling trend began that lasted all the way to the beginning of the current ice age, as the Antarctic ice sheets grew like never before.  Once again, tropical flora and fauna in high latitudes either migrated toward the equator or went extinct.  Reefs cannot migrate, so those outside the shrinking tropics died out.

The cause of the cooling at 14 mya is the subject of a number of hypotheses, one of which is that mountain-building in that great arc created by colliding continents exposed rock that then absorbed carbon dioxide from the atmosphere via silicate weathering.  Around the time of the cooling, the Arabian Peninsula finally crashed into Asia and closed off the Tethys Ocean, which by then was more like a strait.  The last remnants of the Tethys were an inland sea that became today’s Caspian, Black, and Aral seas, along with the Mediterranean Sea and the Persian Gulf.

Eurasian mountain building was not the only such Miocene event.  The Cascade Range, which I have spent my life happily hiking in, began erupting in the Miocene and rose in the Pliocene, and so is one of Earth’s younger and more rugged ranges.  The Sierra Nevada of California also formed in the Miocene, and the Andes grew into a formidable climatic barrier.  The Rocky Mountains also had renewed uplifting in the Miocene, and the Southern Alps of New Zealand were formed.  In the mid-Miocene, the northward movement of Australia toward Asia initiated the plate collision that created the Indonesian archipelago, which blocked tropical flow between the Indian and Pacific Oceans.[418]  Grinding tectonic plates have created the Pacific Ring of Fire, which is Earth’s most seismically active region, and contributed to many Cenozoic mountain-building and volcanic events, but it is only a pale imitation of Mesozoic volcanism.  The radioactivity that drives plate tectonics has steadily declined over the eons, and in about one billion years the plates will cease to move and Earth will become geologically dead, as Mars is today.  Life on Earth will then quickly end, if it has not already expired.  Complex life will likely be long gone by then.[419]

As the cooling event began 14 mya, drying came again, the tropics shrank, rainforests gave way to woodlands, woodlands gave way to grasslands, grasslands gave way to steppes, coniferous forests grew, angiosperm forests shrank, and deserts and tundra grew.  In the Miocene, another major new biome appeared: grasslands.  Grasses originated in the Cretaceous and dinosaurs ate them, but it was not until the mid-Miocene's cooling at 14 mya that grasslands first appeared as a biome’s foundation.  Those grasslands were the first savannas, and North America’s Miocene grasslands would have resembled Africa’s today.  As it is today, North America’s grasslands were on the Great Plains.  Instead of elephants there were mastodonts, instead of hippos there were hippo-like rhinos, in place of giraffes were long-necked camels, some of which indeed reached giraffe size and some of which grew far more massive, pronghorns played the antelope role, and horses played the zebra role.  The predators would have looked a little different, and hyena-like dogs, bears, and bear dogs brought down the big game.[420]

Those grasslands, with their attendant grazers and browsers, and their predators, appeared in the pampas of Argentina, the plains of the Ukraine, China, and Pakistan, and, of course, Africa.  Africa’s savanna fauna would have looked very familiar, with elephants, antelope (including impalas, gazelles, etc.), hippos, cats, hyenas, short-necked giraffes, horses, the first modern rhinos, and the like.  In Eurasia and Africa, with the land barriers removed, all the savanna biomes resembled each other.  In the late Miocene, C4 plants began to proliferate, especially in those grasslands.  Those grasslands grew when the ice age began.

Many plant families incorporate silica into their structures.  Diatoms also incorporate silica, and those are among the few life forms that use silicon, although it is one of Earth’s most plentiful crustal elements.  Diatoms seem to gain energy advantages by using silica, and plants seem to have structural advantages, but it is thought that plants also used silica as a defensive measure, as it helps make plants unpalatable.  Eating plants full of silica structures, called phytoliths, is like chewing sand.  This is particularly true with grasses, as phytoliths make chewing them a tooth-wrecking process, particularly for ruminants and their thorough chewing.  Grazing herbivores have heavily enameled hypsodont teeth (also called high-crowned teeth) to deal with the silica and generally tough grassland vegetation.  In North America, hypsodont herbivores proliferated while those without that heavy enamel (with what are also called low-crowned teeth), which were browsers instead of grazers, declined.  By about nine mya, North American browsers had largely vanished and grazers dominated the new grasslands.[421]  Earth kept cooling and drying, and less than seven mya, steppe vegetation began replacing savanna-like grasslands, and forests were decimated.  This led to the greatest pre-human mass extinction of the Cenozoic Era in North America, as many species of horses, mastodonts, bears, dogs, and small predators went extinct, as well as mice, beavers, and moles.[422]  Asia and Africa were hit similarly, although not quite as hard as North America seemed to be, but South America and Australia hardly seemed affected at all.[423]  New Zealand’s surrounding seafloor changed from warm-water communities to the Southern Ocean communities that it has today.[424]

The Tethys Ocean finally evaporated, literally, at the Miocene’s end, and it was a spectacular exit.  As part of the collision of Africa and Europe, Morocco and Spain smashed together and separated the Atlantic Ocean from the Mediterranean Sea.  Then the entire Mediterranean dried out, as there was not enough regional precipitation to replenish the evaporation.  Then the crashing Atlantic waves eroded through the rock and the Atlantic again filled the Mediterranean Sea in floods that may have been Earth history's most spectacular.  The grinding continents then made another rock dam, the Atlantic was cut off again, and the Mediterranean once again dried up.  That pattern happened more than 40 times between about 5.8 and 5.2 mya.  Each drying episode, after the rock dam again separated the Atlantic from the Mediterranean, took about a thousand years and left about 70 meters of salt on the floor of what was then the Mediterranean Desert.  The repeated episodes created 2,000-to-3,000-meter-thick sediments of gypsum, which forms when trapped seawater evaporates, as the Mediterranean’s did.[425]  Creating so much gypsum partially desalinated Earth’s oceans (a 6% lowering), raised their freezing point, and may have contributed to the growth of Antarctica’s ice sheets.[426]  Also, those drying episodes initiated great droughts in Africa and may well have spurred the evolutionary events that led to humans.
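
The salt arithmetic above is easy to check.  A minimal back-of-the-envelope sketch, using only the rounded figures stated in the text:

```python
# Back-of-the-envelope check of the Mediterranean drying arithmetic.
# All figures come from the text; the episode count and time window are rounded.

episodes = 40            # drying/refilling cycles between about 5.8 and 5.2 mya
salt_per_episode_m = 70  # roughly 70 meters of salt left per drying episode

total_salt_m = episodes * salt_per_episode_m
print(total_salt_m)      # 2800, consistent with the 2,000-to-3,000-meter evaporite beds

# The ~600,000-year window divided by ~40 cycles gives the average length of a
# full dry-and-refill cycle, into which each ~1,000-year drying phase fits easily.
window_years = 600_000
print(window_years // episodes)  # 15000 years per full cycle, on average
```

So roughly 40 episodes at about 70 meters each accounts for the observed evaporite thickness without any exotic assumptions.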

The Pliocene Epoch (c. 5.3 to 2.6 mya) began warmer than today’s climate, but was the prelude to today’s ice age, as temperatures steadily declined.  An epoch of less than three million years reflects human interest in the recent past.  Geologically and climatically, there was little noteworthy about the Pliocene (although the Grand Canyon was created then), although two related events made for one of the most interesting evolutionary events yet studied.  South America kept moving northward, and the currents that once circled Earth at the equator in the Tethyan heyday were finally closed.  The gap between North America and South America began to close about 3.5 mya, and by 2.7 mya the current land bridge had developed.  Around three mya, the Great American Biotic Interchange began, when fauna from each continent could raft or swim to the other side.  South America had been isolated for 60 million years and only received the stray migrant, such as rodents and New World monkeys.  North America, however, received repeated invasions from Asia and had exchanges with Europe and Greenland.  North America also had much more diverse biomes than South America's, even though it had nothing like the Amazon rainforest.  The ending of South America’s isolation provided the closest thing to a controlled experiment that paleobiologists would ever have.  South America's fauna was devastated, far worse than European and African fauna were when Asia finally connected with them.  More than 80% of all South American mammalian families and genera existing before the Oligocene were extinct by the Pleistocene.[427]  Proboscideans continued their spectacular success after leaving Africa, and Stegomastodon species inhabited the warm, moist Amazonian biome, as well as the Andean mountainous terrain and pampas.  The Cuvieronius also invaded and thrived as a mixed feeder, grazing or browsing as conditions permitted.  
In came cats, dogs, camels (which became the llama), horses, pigs, rabbits, raccoons, squirrels, deer, bears, tapirs, and others.  They displaced virtually all species inhabiting the same niches on the South American side.  All large South American predators were driven to extinction, as well as almost all browsers and grazers of the grasslands.  The South American animals that migrated northward and survived in North America were almost always those that inhabited niches that no North American animal did, such as monkeys, ground sloths (which survived because of their claws), glyptodonts and their small armadillo cousins (which survived because of their armor), capybaras, and porcupines (which survived because of their quills).  The opossum was nearly eradicated by North American competition but survived, and it is the only marsupial that made it to North America and exists today.  One large-hoofed herbivore survived: the Toxodon.  The largest rodent ever (it weighed one metric ton!) survived for a million years after the interchange.  Titanis, that large predatory bird from South America, also survived, migrated to North America, and lasted about a million years before dying out.[428]  In general, North American mammals were more energy-efficient and brainier, which resulted from evolutionary pressures that isolated South America had less of.  They were able to outrun and outthink their South American competitors.  Some South American animals made it to North America, but none of them drove any northern indigenous species of note to extinction.

The scientific consensus today is that climate change or inhospitable biomes had nothing to do with North American mammals prevailing over South American mammals, many of which were marsupials.  But the event that made the exchange possible, the closing of the gap between the continents that began about 3.5 mya, seems to have triggered the current ice age (and may have triggered interchange events, though it would not have greatly influenced their outcome).  The closure of the gap between North and South America led to today’s thermohaline circulation and created the Gulf Stream.  Although the Gulf Stream brings warm water to the North Atlantic and makes western Europe far warmer than it would otherwise be, the pre-ice-age Caribbean had low-salinity waters that drifted north into the Arctic, and because of that low salinity, the surface water did not sink but continued into the Arctic Ocean, warming it.  Once Pacific access was cut off, the Gulf Stream formed; its water was saltier (hence denser) and, as it cooled in the North Atlantic, it sank to the ocean floor before it reached Greenland, as it does today.  That cutoff of warm tropical water to the Arctic seems to have triggered the growth of Arctic ice, particularly on Greenland, which has the world’s second largest ice sheet after Antarctica.[429]  The change in currents killed off about 65% of mollusk species along the Atlantic coast of North America, and Florida’s reefs largely died out.  Caribbean reefs survived, and much of the east North Atlantic’s warm-water sea life migrated south into the tropics and the Mediterranean.  Japanese mollusks also survived the new currents.  The western North Atlantic cooled off, which led not only to Greenland’s ice sheet but also to the largest ice sheets of the current ice age, the North American ones, whose volumes even exceeded Antarctica’s.
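
The salinity mechanism described above can be illustrated with a simplified linear equation of state for seawater: density rises as water gets colder and saltier, so a salty Gulf Stream parcel cooled in the North Atlantic sinks, while a fresher parcel at the same temperature stays at the surface.  The coefficients and parcel values below are typical textbook numbers, not figures from this essay; this is an illustrative sketch, not an ocean model.

```python
# Linearized seawater equation of state (assumed, typical coefficient values).
RHO0 = 1027.0   # reference density, kg/m^3
ALPHA = 2e-4    # thermal expansion coefficient, per degC
BETA = 8e-4     # haline contraction coefficient, per (g/kg)
T0, S0 = 10.0, 35.0  # reference temperature (degC) and salinity (g/kg)

def density(temp_c, salinity):
    """Linearized seawater density: denser when colder and saltier."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

# A salty Gulf Stream-like parcel cooled in the North Atlantic, versus a
# fresher, pre-ice-age Caribbean-style parcel at the same temperature:
salty_cold = density(4.0, 36.5)   # cooled, high-salinity parcel
fresh_cold = density(4.0, 34.0)   # cooled, low-salinity parcel
print(salty_cold > fresh_cold)    # True: the saltier parcel is denser and sinks
```

The extra salinity alone makes the cooled parcel denser, which is why the post-closure Gulf Stream water sank before reaching the Arctic while the earlier low-salinity drift did not.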

At 2.6 mya, today's ice age began.  It ended the Neogene Period and initiated the Quaternary Period, which we still live in.  The term “Quaternary” is one of the last vestiges of Biblical influences on early geology and refers to the time after Noah’s flood.  The Quaternary’s first epoch is the Pleistocene, which ended 12 kya, at the beginning of this ice age’s most recent interglacial period.  The past 12 thousand years are called the Holocene Epoch

The current ice age has come in phases, and about a million years ago a steady rhythm of advancing and retreating ice sheets began and has recurred about every 100 thousand years, which is almost certainly related to Milankovitch cycles.  During this ice age, the land fauna was already adapted to Icehouse Earth conditions, and during 17 or more ice sheet advances and retreats over the past two million years, there were no large-scale extinctions, except for the most recent one.  Below is an artist's conception of Pleistocene Spain.  (Source: public domain from Wikipedia)

In general, the large-sized fauna guilds that have dominated the past 40 million years were well represented on all continents.  Proboscideans thrived on all inhabitable continents and in all biomes that they could migrate to.  In North America, mammals whose size would astound (and terrify) modern observers included the short-faced bear (about the largest carnivore ever), a bison with horns two meters wide, the largest cat ever, giant mammoths, the largest wolf ever, and the largest beaver ever.  They only seem large because of today’s stunted remnant populations.  With the exception of the bison, they all lived for millions of years, through numerous ice age events, only to go extinct just after humans arrived, along with many other species, such as the American cheetah.  The other continents had similar giants.  Australia had a kangaroo about the size of a gorilla and the largest lizard ever.  Southeast Asia had the largest primate ever, which dwarfed today’s gorillas.  With only Africa and parts of Eurasia as partial exceptions, virtually all large fauna went extinct, worldwide, soon after human arrival, and how humans came to be is the subject of a coming chapter.


Mid-Essay Reflection

This chapter falls at about this essay's midpoint, and humanity's role in this story has yet to be told.  As I conceived this essay, studied for it, wrote it, edited it, and had numerous allies help out, an issue repeatedly arose regarding the half of this essay just completed, which can be summarized as: "What was the point?"  Not everybody asked it; some understood, but others wondered, openly or subtly, what the purpose of this essay's first half was (and some asked if the essay had any point at all and considered my effort a waste of time).  This chapter is my reply, and I think it is important to understand.

My teachers from the first grade onward remarked on my fascination with nature.  Science always came easily to me.  A bizarre set of circumstances saw me trade my science studies for business studies in college, and that voice in my head led me to attempt to fulfill my teenage dreams of changing the energy industry.  I left the pure science path for applied science in the real world, and that experience radicalized me.  In 2002, when I finished my website largely as it stands today, I longed to one day resume my math and science studies.  Soon afterward, one of R. Buckminster Fuller's pupils remarked that my work was like Fuller's, and reading his work helped crystallize the paradigm that I had been groping toward.  When that paradigmatic view became clearer, I began the studies that resulted in this essay, and my efforts since 2007 have been specifically directed toward writing it.

Could this essay's first half be considered an indulgence of my childhood fascination with nature?  That argument could have merit, but I have always been a "big picture" kind of thinker, even as a teenager.  I am writing this essay primarily to help manifest FE technology in the public sphere and to help remedy the deficiencies in all previous attempts that I was part of, witnessed, heard of, or read about.  The biggest problem, by far, was that those trying to bring FE technology to the public had virtually no support from the very public that they sought to help.  My journey's most important lesson was that personal integrity is the world's scarcest commodity, and that an egocentric humanity living in scarcity and fear is almost effortlessly manipulated by the social managers.  John Q. Public is only interested in FE technology to the extent that he can immediately profit from it.  Otherwise, he goes back to watching his favorite TV show.  It took many years of disillusionment for that to finally become clear to me.  While this essay and all of my writings are provided for free to humanity and anybody can read them, I intend to reach only a very tiny fraction of humanity with my writings, but that tiny fraction will be sufficient for my plan to succeed.  The readers that I seek have a formidable task ahead of them, but nothing less is required for my approach to have any hope of bearing fruit.  This essay and my other writings are intended as a course in comprehensive (also called "big picture") thinking.  Studying the details deeply enough to avoid misleading superficial understandings is also a key goal.  I am an accountant by profession, but one of the world's leading paleobiologists surprisingly read an early draft of this essay and informed me that it was one of the best efforts that he had ever seen on the journey of life on Earth.  There was nobody on Earth whose opinion I would have respected more than his, so I do not think that I am asking readers of this essay's first half to humor me.  Every sentient being on Earth should know the rudiments of what this essay's first half covers.

Perhaps the most damaging deficiency in FE efforts, after their self-serving orientation, was that the participants and their supporters were scientifically illiterate and easily led astray by the latest spectacle.  Scientific literacy can help prevent most such distractions.  While writing this essay, I was not only bombarded with news of the latest FE and alternative energy aspirants' antics, but I also had to continually field queries from my allies regarding whether Peak Oil and Global Warming were conspiratorial elite hoaxes (or figments of the hyperactive imaginations of environmentalists and other activists), to name two examples that readily come to mind.  Digesting this essay's material should answer those questions as mere side effects.  Far from being a hoax or imaginary, Peak Oil was reached in the USA in 1970 and globally in 2005-2006, it is all downhill from there, and conventional oil will be almost entirely depleted in my lifetime.  Shale oil and tar sands are not solutions at all, although both were heavily promoted in the USA in 2014.  In every paleoclimate study that I have seen, so-called greenhouse gases have always been considered the primary determinant of Earth's surface temperature (after the Sun), and carbon dioxide is chief among them.  The radiation-trapping properties of carbon dioxide are not controversial in the slightest among scientists, and after the Sun's influence (which is exceedingly stable), declining carbon dioxide levels are considered to be the ultimate cause of the Icehouse Earth conditions that have dominated Earth for the past 35 million years.  Humanity's increasing of the atmosphere's carbon dioxide content is thus influencing the ultimate cause of Icehouse Earth conditions; oceanic currents, continental configurations, and Earth's orientation to the Sun are merely proximate causes.  
Increasing carbon dioxide can turn the global climate from an Icehouse Earth to a Greenhouse Earth, and the last time that happened, Earth had its greatest mass extinction event.  But scientists with conflicts of interest have purposefully confused the issues, and a scientifically illiterate public and compliant media have played along, partly because believing the disinformation seems to relieve us all of any responsibility for our actions.  Although scientific literacy can help people become immune to the disinformation and confusion arising from many corners, and reading this essay's first half can help people develop their own defense from such distractions, my goals for this essay's first half are far greater than that.

This essay presents a table of key energy events in the history of Earth and its ecosystems, and nearly half of the events happened during the timeframe covered by this essay's first half, which includes almost the entirety of Earth's history.  Humanity's tenure amounts to a tiny sliver of Earth's history, and surveying pre-human events was partly intended to help readers develop a sense of perspective.  We are merely Earth's latest tenants.  We have unprecedented dominance, but we are quickly destroying Earth's ability to host complex life.  As my astronaut colleague openly wondered, is that the act of a sentient species?  Is our path of destruction inevitable, as we plunder one energy resource after another to exhaustion?  Will depleting Earth's hydrocarbons be the latest, greatest, and perhaps final instance?

Few people on Earth today have much understanding of the relationship between energy and economic activity.  Most people think that money runs the world, when it is only an accounting fiction.  Money by itself is meaningless, and financial measures of economic activity can be highly misleading.  I noted long ago that scientists had little respect for economists and their theories.  History's greatest energy baron and richest man funded the leading economic institution that obscured the role of energy while exalting money.  What a coincidence.  Understanding this essay's first half will help with comprehending the last half, and the connections between energy, ecosystems, and economics should become clear.

Paleobiologists are fascinated with the history of life on Earth, and I share their sense of wonder.  If I can impart the slightest sense of that to my readers, this essay's first half will be successful for that alone.  However, just as a math curriculum builds on itself, as each class forms the foundation of the next one, this essay's first half is intended to help readers develop a foundational understanding.  With that foundation built, the information in this essay's last half can make a profound impact and help readers achieve personal paradigm shifts.  That is essentially this essay's purpose.  Studying this essay's first half is far from a waste of time for those whom I seek, but is vitally important. 


The Path to Humanity

Chapter summary:

From their Cretaceous origins through their radiations and extinctions in the Eocene, primates continued evolving.  About 35 mya, Old World and New World monkeys, which are called higher primates (also simians or anthropoids), split from each other.  Simians seem to have split from a group also ancestral to prosimians.  Today’s prosimians include lemurs, lorises, tarsiers, and bush babies.  During the Oligocene, Africa and Southeast Asia became primate refugia.  Tarsiers have lived in Southeast Asia continually for about 45 million years, and the only survivors of their evolutionary line live on islands near Southeast Asia.  Primate history in the late Eocene and Oligocene is controversial today.  The fate of an extinct group from primates’ wide geographical range in the early Eocene is debated, but its members seem at least cousins to the ancestors of non-tarsier prosimians, if not ancestral to them.

This chapter and the next will survey the disputes over evolutionary lineages and geography that continue all the way to Homo sapiens.  The debates and drama have two primary sources: the first is that humans are descended from those lines, and the second is that there has been a desire to demonstrate that humanity radically differs from its ancestors, possessed of unique traits, not only in degree but in kind.  The debates seem to get fiercer the closer the primate line gets to modern humans.[430]

Early primate migrations and extinctions led to a disjointed geographical distribution, as they could only live in tropical canopies.  When tropical forests shrank in the cooling conditions that led to the current ice age, primates such as tarsiers found themselves in isolated refugia.  In the late Eocene and late Miocene, when tropical canopies disappeared, the primate lines inhabiting them went extinct unless they used an escape route to a surviving tropical forest.  

Although simians may have first appeared in Eocene Asia, when the late-Eocene cooling began, Africa became the primary primate refuge.  Around the early Oligocene, a splinter group migrated to South America from Africa and evolved in isolation for the next 30 million years.  Just as dinosaurs marginalized early mammals, simians marginalized prosimians, beginning in the Oligocene.  Today’s prosimians either live where simians do not, or where they coexist with simians, they are nocturnal.  Prosimians have simple social organization; most nocturnal prosimians lead solitary existences.  Lemurs living in daylight have societies of up to 20.  Monkeys have far more complex social organization than prosimians, and baboon societies number up to 250 individuals, although societies of about 50 are typical.  Capuchins are considered the most intelligent New World monkeys, and their societies have between 10 and 40 members.  Studies of simian societies have shown them engaging in crude versions of human politics, which have even been called Machiavellian, which has caused some to leap to Machiavelli’s defense.

From their origins around 40-45 mya, monkeys continued evolving in Africa’s Oligocene forests, and between 35 and 29 mya, according to molecular clock studies, some African monkeys began evolving into apes, and Proconsul, a controversial transitional fruit-eating monkey, appeared about 25 mya.[431]  Mary Leakey’s most famous find was a Proconsul skull in 1948.[432]  The primary differences between apes and monkeys are that apes are larger, lost their tails (not having as much need for balancing on tree limbs), and have a stiffer spine and larger brain.[433]  Apes began the descent from canopy to ground.  Simians will eat fruit if they can, but some early monkeys/apes developed thicker tooth enamel.  That change meant that they no longer subsisted on soft fruit and leaves, but were eating coarser vegetation, which was a consequence of living in a cooler, drier world.[434]  No Miocene apes were as adapted to leaf eating as today’s apes and leaf-eating monkeys.  As with the first tetrapods to leave water, a prominent speculation today is that those monkeys/apes changed their diets and left the trees after losing the competitive game with other canopy-dwellers.[435]  Gibbons split from the line that became great apes about 22 mya and became masters of tree-living, with their swinging mode of locomotion.[436]

By 20-17 mya, apes had become common in East Africa; some became large, up to 90 kilograms, and some resembled gorillas.[437]  Nearly all apes eventually abandoned tropical canopies, and although monkeys were scarce in the Miocene, they stayed in the canopies and dominate them today.  The number of monkey species has increased and the number of ape species has decreased rather steadily over the past 20 million years.[438]  With the late-Oligocene warming that continued into the Miocene, tropical forests began expanding again.  When Africa and Arabia finally crashed into Eurasia and began that great invasion from Asia, apes escaped Africa beginning about 16.5 mya.  They had thickly enameled teeth suited to the non-fruit foods available outside rainforests.[439]  Their migrations resulted in new homes that spanned Eurasia, from Europe to Siberia to China to Southeast Asia.[440]  It was a spectacular adaptive radiation that has tallied more than 20 discovered ape species so far, and it has been called the Golden Age of Apes.[441]  That is how gibbons and orangutans arrived in Asia.  About 14 mya in Africa, the ancestors of today’s great apes may have appeared, and about 12.5 mya the likely ancestors of orangutans appeared in India.  By that time, tropical forests were shrinking once again, and orangutans continued down their evolutionary path, isolated from their African cousins.  One possible ancestor lived in Southeast Asia about 9-7 mya.  A descendant from the orangutan line became the largest primate ever, at three meters tall and more than 500 kilograms.  Below is a comparison of that primate to humans.  (Source: Wikimedia Commons)

It lived for nine million years, only to go extinct about when humans arrived, and might have something to do with Yeti legends.  Today’s orangutans are confined to two Indonesian islands, Borneo and Sumatra, and are particularly endangered on Sumatra.  All apes besides humans are endangered today due to human activities.

In the mid-Miocene cooling’s early stages, beginning about 14 mya, apes were richly spread across Eurasia and were adapted to the hardier diets that less-tropical biomes could provide, and one from Spain 13 mya may well be ancestral to modern humans and other great apes.[442]  It largely lived on the ground and had a relatively upright posture.  Its discovery threw previously accepted ideas of ape evolution into disarray.  The idea of apes ancestral to humanity living beyond Africa is a recent one, but is gaining acceptance.[443]  Important new fossils are found with regularity, as with all areas of paleontology, but the most plentiful funding is for investigating human ancestry.  A 1996 discovery of a Miocene ape in Turkey, with features common to both orangutans and African apes, led to questioning whether some key ape features are ancestral or convergent.[444]  One early fossil ape finding is still highly controversial as to where it fits into the evolutionary tree, as it had ape and monkey features but lived 10 million years after the hypothesized ape/monkey split.[445]  The great ape lineages are the subject of considerable controversy today, and the human ancestral tree is regularly shaken up with new findings.

Around 10.5 mya, after Eurasian forests began thinning out, African rainforests began losing their continuity, broke up into isolated patches, and woodlands and grasslands appeared along rainforest edges.[446]  Whether the direct ancestor of humans moved “home” to Africa from Eurasia around 9-10 mya as the Miocene cooling progressed, or indigenous African lines led to humans, is currently controversial.  However, by seven mya the evolutionary line to humans was firmly established in Africa, as the forests that could support apes in near-African Eurasia disappeared, and the last of those lines went extinct about eight mya.  The gorilla line may have split from the human line about seven mya, but recent findings may push that back to ten mya.  Whatever the timing really was, there is little scientific debate that humans and gorillas descended from the same line and that their common ancestor lived in Africa.  The genome sequencing projects show that great ape DNA and human DNA are very similar.  Chimpanzees and bonobos, our closest surviving cousins, share more than 98% of their genes with humans.  About 94% of human DNA is identical to chimps’.  Gorillas have slightly less DNA in common with humans, and orangutans understandably have the greatest divergence.  Humans also have one fewer chromosome pair than the other great apes, the result of two ancestral chromosomes fusing into one. 

The terminology of the ape/human line can be confusing to a lay reader, as it gets sliced ever finer as humanity’s time is approached, and I will avoid some of the many “homi” and “homo” terms used to describe families, genera, and species.  Homo in Greek meant “same,” while homo in Latin meant “human,” which is the meaning used in ape taxonomy.  The ape clade is the superfamily called Hominoidea, and all of its branches have “hom” prefixes.  Members of the genus “Homo” are of the solely human line.  Homo habilis is perhaps the genus’s first member, although its status is still unsettled. 

Orangutans are the most arboreal great ape, while in Africa the great apes definitely left the trees as their daytime residence, although they slept in trees to avoid predators.  Gorillas can primarily subsist on leafy vegetation, although the staple of the western lowland gorilla, which is the most prevalent gorilla species, is still fruit.  Mountain gorillas primarily subsist on leaves.  Gorillas usually have a smaller daily range than chimpanzees do and live in the heart of rainforests; what became chimpanzees were probably pushed to the margins by their larger cousins and live more along a rainforest’s woodland fringes.  They have to range relatively far to find their staple: fruit.  Since their diet is more diverse and they can survive in more varied environments, the chimpanzees’ range is far larger than that of gorillas.  Like the largest quadrupedal herbivores, gorillas ingest a great deal of low-calorie vegetation each day and are hindgut fermenters that extract energy from cellulose, which humans cannot do.  Chimpanzees are also hindgut fermenters.  As with all organisms, the ecological situation of great apes influenced their evolution, including their social organization and behaviors.  This has been increasingly studied since the 19th century and has provided valuable insights into humanity, some of which follow.

The chimpanzee and human lines seem to have split between five and seven mya, and some recent estimates are as low as 4.6 mya.  The species perhaps the closest to that split found so far dates to about seven mya, but the findings have also been used to argue for pushing the human/chimpanzee split back to 13 mya.  Whatever the timing that scientists eventually agree on, the splits of orangutans first, gorillas second, and chimpanzees last (and the bonobo split arguably about a million years ago) almost certainly will not change.  The end of the Tethys Ocean between 5.8 and 5.2 mya may have been the reason for the split, as the resulting droughts from those Mediterranean Sea drying episodes further shrank the African rainforest.  As with so many other evolutionary events, the line that led to humans began to leave the trees as the losers of rainforest life and adapted to new environments probably out of necessity, not a sense of adventure and opportunity.  Those apes pushed to the margins learned to walk upright and learned to eat new foods such as roots.[447]

A recent find of a possible human-line ape may even displace australopithecines as humanity’s ancestors, relegating them to a side branch that went extinct.  These are still the early days of investigating human ancestry, and rapidly and dramatically changing ideas about the evolutionary path to humanity will continue.  That is partly because the sparse fossil record has only recently been expanded by numerous teams digging around Africa, with dreams of the ultimate find haunting their sleep.  Darwin speculated that humans evolved in Africa, but in the early 20th century, Asia was considered the likeliest evolutionary home of humans.  In 1921, an early protohuman skull was discovered in a Rhodesian mine, and in 1924 an even more primitive protohuman skull was discovered in a South African mine.  Africa became the focus of investigating the human line, and the investigation accelerated with the work of what became the Leakey dynasty, which began with Louis Leakey’s checkered but ultimately triumphant career.

That human/chimp find of 6-7 mya had thick teeth, which meant that it had abandoned the arboreal ape diet, and it brings up perhaps the single biggest question of the early human line: “When did our ancestors become bipeds?”[448]  One piece of evidence for bipedalism is where the spinal cord enters the skull; if it enters underneath the skull, it suggests an upright posture and, hence, bipedalism.  There is disputed evidence that that seven mya ape had a skull hole that meant bipedalism.  Skull and vertebrae evidence, along with changes in the shoulders, arms, hands, pelvis, legs, knees, ankles, and feet of apes from Proconsul onward, is used whenever relevant ape fossils are found to determine what kind of posture they had, all the way from swinging from branches to walking upright.  The great range of motion of the human arm has that arboreal heritage to thank.

Part of that late Miocene ground-foraging existence probably included digging roots, as chimpanzees do today.[449]  Around 4.4-3.9 mya came the earliest celebrity humanoid fossil, called Ardi today.  Ardi has an older cousin, maybe an ancestor, from 5.8-5.2 mya, but Ardi is the most complete early great ape fossil.  Ardi had about the same-sized brain as a chimpanzee, but she may have walked upright.  Ardi had relatively delicate features, which suggest that she did not eat roots and tough food, but soft fruits obtained by nimbly climbing trees.  Her canine teeth are markedly less prominent than chimpanzee teeth, which has led to speculation that her species was less aggressive than chimpanzees. 

Although the human lineage through those early protohumans can be shuffled, perhaps radically, with the next new finding, today’s anthropologists are fairly confident that the human line passes through australopithecines.[450]  The first ones appeared about 4.2-4.1 mya, and about 3.9 mya the most famous australopithecine species appeared, called Australopithecus afarensis, of which the original humanoid fossil rock star, Lucy, was a member.  She lived about 3.2 mya, and one of Mary Leakey’s greatest finds was biped footprints, probably of Lucy’s A. afarensis, dated to about 3.6 mya.  But all early humans up to australopithecines also had shoulder and arm adaptations for climbing in trees, bipedal or not, and all early humans climbed trees at least every night to sleep.  Among today’s great apes, only gorillas regularly sleep on the ground (some chimps do as well); adult male gorillas are the most regular ground sleepers, while smaller gorillas sleep in trees.[451]  Gorillas are rarely preyed upon in their rainforest homes, other than by humans, rival gorillas, and the stray leopard, which generally avoids large males.  African predators made sleeping on the ground infeasible for primates, and none does so today in the kinds of woodland environments where early humans lived.  The human line may not have slept on the ground until it controlled fire.

The study of intelligence is a young science, and the relationship of brain size (both absolute and relative) and structure to what is called intelligence is currently subject to a great deal of research and controversy, and even the definition of intelligence is hotly debated.  The cerebral cortex appeared with mammals, and is the key structural aspect of brain evolution that led to human intelligence.  The mirror test attempts to determine which animals have self-recognition, and those suspected of being the most intelligent have passed the test, including all great apes, cetaceans, elephants, and even a bird.  Humans do not pass the mirror test until about 18 months of age.[452]  There is great debate between those embracing "rich" versus "lean" interpretations of behavior and intelligence observations among animals, in which seemingly complex thinking can be an illusion.[453]

Many human mental traits exist in more rudimentary form in other animals, but human thought seems far more complex and sophisticated.  Feats such as language with grammar may be a unique human achievement, which provides evidence of the greater mental ability of humans, and our tools provide the best evidence of advanced human cognitive abilities. 

Intelligence can confer great advantages, and the encephalization of theropods is an early indicator of its benefits.  For instance, spider monkeys have brains about twice the size of howler monkeys’, which is thought to be due to their larger societies (about twice as large) and the fact that their diet is more than 70% fruit, while the howler monkey’s diet is less than half fruit, and leaves make up twice as large a share of the howler’s diet as of the spider’s.  Remembering where and when fruit is ripe, and navigating more complex social environments, takes greater thinking power.  Just as with howler and spider monkeys, chimpanzees have to range far to find fruit, which is their staple, while gorillas can more readily eat nearby leaves, and chimps have more complex social lives than gorillas do.  Chimpanzees also have proportionally larger brains than gorillas do and are considered more intelligent.
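One common way to quantify "proportionally larger brains" is the encephalization quotient (EQ): the ratio of an animal's actual brain mass to the brain mass expected for a mammal of its body size, using Jerison's classic fit of 0.12 × body mass^(2/3).  The sketch below uses that standard formula; the brain and body masses are rough, round textbook-style figures chosen only for illustration, not measurements cited in this essay.

```python
def expected_brain_g(body_g):
    """Jerison's expected mammalian brain mass (grams): 0.12 * body^(2/3)."""
    return 0.12 * body_g ** (2.0 / 3.0)

def eq(brain_g, body_g):
    """Encephalization quotient: actual brain mass over expected brain mass."""
    return brain_g / expected_brain_g(body_g)

# Rough illustrative masses in grams (assumed for this sketch):
human   = eq(1350.0, 65000.0)    # roughly 7
chimp   = eq(400.0, 45000.0)     # roughly 2.6
gorilla = eq(500.0, 140000.0)    # roughly 1.5: a bigger brain, but a much bigger body
```

The gorilla's brain is absolutely larger than the chimpanzee's, yet its EQ comes out lower, which is the sense in which chimps have proportionally larger brains.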

Did the larger brain lead to the behaviors, or did the behaviors lead to the larger brain?  If other evolutionary trends have relevance, they mutually reinforced each other and provided positive feedbacks; down one evolutionary line, that feedback reached runaway conditions that led to the human brain.  The initial behavior was probably the use of a body part (the brain) for a new purpose, and its success led to selective advantages that led to mutual reinforcement.  Although it is by no means an unorthodox understanding, I think that the likely chain of events was this: walking upright freed the hands for new behaviors, which led to new ways of making and using tools, which enhanced food acquisition activities.  That allowed the energy-demanding brain to expand, along with related biological changes, which led to more complex tools and behaviors that acquired, and required, even more energy.  That, in short, defines the human journey to this day, which the rest of this essay will explore.  There has never been, and probably never again will be, an energy-devouring animal like humanity on Earth, unless it is a human-line descendant.

Many traits of apes, including humans, are evident in monkeys.  Sexual dimorphism, in which a species’ sexes differ in size and shape, is a minor phenomenon among prosimians.[454]  But it is pronounced in simians, especially apes, and is why men are larger and stronger than women.  Its ultimate cause is probably sexual selection: how females choose their mates.  A prominent hypothesis is that early monkey troupes had males as sentinels guarding the territorial perimeter and protecting the female-dominated core where offspring were cared for and where food was.  A defensible food source was the key attribute of any simian territory.  Most primates are territorial, and extreme territorial behaviors can be seen in monkeys and apes, including murder, with its apotheosis in humans.

Nursing led to more involved mammalian parenting behaviors and increased female participation, in addition to the great investment that females have in gestating offspring.  Larger simian males are more likely to become dominant, and dominant males often get the most and best food and have enhanced reproductive rights, as females are attracted to them.  Virtually all monkey and ape societies are male-dominated, and the modern ideal of human females freely choosing their mates (or, perhaps more importantly, non-dominant males choosing their mates, if they get to mate at all) is rarely in evidence in monkey and ape societies, and is a new phenomenon for humans.  The phenomenon of attractive women mating with rich and powerful men has deep roots in the simian evolutionary journey. 

In addition to their Machiavellian social activities, monkeys are quite vocal, and a key social behavior is grooming, which is integral to forming social bonds.  In crab-eating macaques, grooming seems to be a form of foreplay or even a payment for sex, and male chimpanzees and capuchins have paid for sex, so the world’s oldest profession may be quite old indeed.  Vocalizations and grooming behaviors become more prominent in gorillas and chimpanzees (orangutan social organization is markedly different from that of African apes).  A recent hypothesis is that gossip largely replaced grooming in humans as a cheap way to form social bonds; “cheap” is almost always measured in terms of energy and relates to how much metabolism is devoted to an activity.  Chimpanzees spend about 20% of their day grooming, and humans spend about 20% of their day in conversation.[455]  The more intelligent a primate is, the larger its society can be, as its members can navigate all of those social relationships.  Chimp societies can reach 120 members, and humans can double that, to 250 or so, which probably not coincidentally is around the size of the group that geneticists think left Africa perhaps 60-50 kya and conquered Earth.

There are three primary survival requirements for any species: obtain nutrients (always primarily energy), avoid becoming nutrients, and perform those first two tasks long enough to produce offspring.  If those requirements are not met, the species will go extinct.  The eating instinct outranks the sex drive, but avoiding becoming food is where the most energetic behaviors can usually be found.  Primal survival instincts take over during the fight-or-flight response.  In humans, that fear response shuts down the neocortex to enable the body to perform feats of physical survival.  That is when adrenaline pumps.  Evolutionary adaptations studied by scientists always have those three primary requirements girding the explanatory framework.

Female simians usually stay within their society of origin, while males leave.  That is how simians prevented inbreeding, but that pattern is reversed in chimpanzee and gorilla societies, in which females usually leave.  Sexual coercion of females is common behavior among simians.  Bonobos and gibbons are among the few simians that overcame it, and it seems to have been due to ecological dynamics.  Humans have partially discarded that behavior during the industrial age.  Those are obviously highly charged areas of behavioral research, and sociobiology is a highly controversial scientific discipline.[456]  A falsifiable hypothesis is arguably the sine qua non of science, and behavioral sciences have often been plagued with a lack of them, going back to Freud, which has caused some to say that psychology is not really a science.  This essay will soon sail into some of those murky waters.

Becoming bipedal freed human-line hands for other uses.  The non-human great apes all have long fingers and short thumbs.  Below is a comparison of chimpanzee and human hands.  (Source: Wikimedia Commons)

Ardipithecus ramidus is an early example of the growing thumb in the ape milieu from which the human line descended.  Changes in australopithecine hands may have been at least partly adaptations to throwing and wielding clubs.[457]  Lucy’s species existed for about a million years and went extinct about 2.9 mya, but it might have been one of those “happy ending” extinctions, in which the descendants eventually changed enough to become new species.  What seems clear today is that australopithecine species were scattered around Africa, as they were a highly successful line.  Lucy’s species lived in eastern Africa, around Ethiopia, while other australopithecines lived in southern Africa and others lived in central Africa, where Miocene ape fossils have also been found.[458]  Not long after Lucy’s species disappeared, an australopithecine line appeared called the “robust australopithecines,” and its members have been assigned their own genus, while Lucy and her cousins are called “gracile.”  The robusts had huge jaws and teeth, and a dramatic sagittal crest anchored their powerful chewing muscles.  A member of the robust line is nicknamed “Nutcracker Man” because of its gigantic teeth.

Several lines of evidence have converged, and more is regularly amassed, telling a story of dramatic and rapid climate change that spurred vegetation changes, which in turn initiated evolutionary adaptations in the cradle of humanity.  Sediment cores from the Arabian Sea off of East Africa, combined with land sediment records in East Africa and studies of carbon-12/13 ratios in fossil teeth, tell an interesting and familiar tale of human origins.  Three mya, as Earth was moving toward an ice age and the climate dried, the familiar grasslands of the Serengeti appeared for the first time.  C4 grasses have higher proportions of carbon-13, and so will animals that eat them.  The expanding C4 grasslands coincided with the disappearance of Lucy's species and the appearance of the robusts, which ate generous amounts of C4 plants (or perhaps ate animals that ate those plants), probably from those expanding grasslands.[459]

Becoming bipedal allowed for far greater mobility than knuckle-walkers were capable of, and farther excursions from the safety of trees became possible.  But ranging farther from the safety of trees was also dangerous.  As with Proconsul, key australopithecine fossil finds came from sites where the remains of predator meals accumulated, usually in caves.[460]  Those early apes on the path to humanity were the hunted, not hunters.  Cats such as leopards feasted on australopithecines, and one robust skull showed leopard puncture marks.[461]  Most surviving bones were those from body parts more difficult to eat, with less flesh on them, so predators left those parts largely intact.  Fossil hunters discovered body parts such as jaws, teeth, hands, and feet.  Skull finds are rare.

The woodland fringes that australopithecines and their relatives lived in were markedly different from where gorillas and even chimpanzees exist today.  Today’s most successful primates in fringe environments such as those that australopithecines operated in are macaques, which also suffer high rates of predation.  The social organization of humanity’s early ancestors may well have been more like macaques than chimpanzees.[462]

Earth’s evolutionary tree of life has many branches, so many that no one person can become intimate with all of them, and innumerable lines of animals arose, radiated, and died out, almost always going out with a whimper instead of a bang.  All australopithecine branches came to their ends, except perhaps for the line that led to humans.  About 2.6-2.5 mya, just as the current ice age began, a gracile australopithecine lived in eastern Africa, another in southern Africa, and the robust australopithecine with that amazing skull lived in eastern Africa.  The oldest manufactured stone tools yet discovered of a recognized culture were associated with that east African gracile australopith.  Earlier tools were likely made at least 3.4-3.3 mya, probably by australopiths of Lucy's species, and making them may well have been part of australopith culture for millions of years.  Many non-human animals use tools, and some even make them.  But all early tools would have been made of twigs, bones, sticks, unshaped rocks, and the like, and they have not left behind much evidence for scientists to study.  Stone tools were an energy technology that mimicked the teeth and claws of more specialized animals.

Chimpanzees are the most prolific tool users among the non-human great apes, and female chimps make and use tools more often than males do.  One problem with studying today’s animals and applying those findings to their ancestors is that their line has evolved, too.  The ancestor of chimpanzees when the split was made with the human line did not look like today’s chimpanzees, and probably did not act quite like them.  However, chimpanzees and gorillas adapted to environments that have not remarkably changed for the past 8-10 million years, and it is unlikely that they have dramatically changed over that time.  Orangutans are similar.  Scientists have argued that since there is little evidence of morphological change in those great apes since they split from the human line, particularly in their cranial capacity, they probably act similarly today and have capacities similar to those of their distant ancestors.[463]  Today’s chimps have nearly the same-sized brains as australopithecines did.  They make and use tools, and an orangutan was even trained in captivity to make stone tools.  All great apes have learned to use sign language, and some even invent their own signs.

I think it very reasonable to believe that relatively sophisticated tool use among humanity’s ancestors predates, perhaps by several million years, those stone tools dated to 3.4-3.3 mya.  Tools may be hundreds of millions of years old, and insects, fish, cephalopods, and reptiles use tools today.  The protohuman equivalent of Nikola Tesla (although it may have been a female) discovered how to bang two rocks together to create a hard edge used for cutting, perhaps with a little inventor’s serendipity.  It may not be possible to overstate the significance of that invention.[464]  More than a million years of free hands, due to australopithecine bipedal posture, probably led to the most significant tool-making event in Earth’s history to that time.  The shortening fingers and lengthening thumbs of australopithecines led to more dexterity, and in training today’s great apes to make stone tools, their relative lack of dexterity has been noted as an impediment.  Also, the increasing dexterity of the protohuman hand is linked with neurological changes, from the hands to the brain, as early protohumans took tool-making to a new level, in another case of mutually reinforcing positive feedbacks.[465]

Although that australopithecine may have been the smartest member of its species, with an ape IQ that went off the scale, his or her brain was the same size as those of the fellow members of the species, but that would not last long.  The swift climb to the appearance of Homo sapiens had begun.


Tables of Key Events in the Human Journey

Timeline of Humanity’s Evolutionary Heritage

Human Event Timeline Until Europe Began Conquering Humanity

Human Event Timeline Since Europe Began Conquering Humanity

Table of Humanity’s Epochs

Humanity’s Evolutionary Heritage

Group Humans Likely Descended From

Direct Human Ancestor, or a Close Relative to It

Notable Attributes

Time When Ancestor First Appeared

Earliest life forms

Last common ancestor of all life on Earth

A form of bacteria, with many traits unique to all life on Earth.

c. 3.8 – 3.5 bya

Bacteria and archaea

First complex cell

An archaean enveloped a bacterium, and both lived.

c. 2.1 – 1.6 bya


First sexually reproducing organism

This innovation accelerates evolution.

c. 1.2 – 1.0 bya

Motile eukaryotes


Motile eukaryote was an ancestor to animals.

c. 900 mya

Unicellular organisms

First multicellular organism

Was probably sponge-like.

c. 760 – 660 mya

Immobile animals

First mobile animal

Was probably like a jellyfish.

c. 580 mya

Motile animals


First animal with a brain.

c. 550 mya


Acorn worm

Early animals with breathing and circulatory systems.

c. 540 mya

Fish-like ancestors to vertebrates


First animal with a spinal cord.

c. 530 mya

Eel-like fish


First true fish.  Used gills exclusively to breathe.

c. 505 mya

Jawless fish


First fish with jaws.

c. 480 mya

Cartilaginous fish

Guiyu oneiros

First bony fish.

c. 420 mya

Bony fish


First fish with lobed fins, which later became legs.

c. 410 mya

Lobe-finned fish


First fish that begins developing legs.

c. 380 mya

Leggy fish


First fish to crawl on land.

c. 375 mya

Crawling fish


First fish to walk on land.

c. 374 mya



First amphibian.

c. 365 mya



First reptile.  First amniote

c. 312 mya



First synapsid.

c. 306 mya


Sphenacodonts (AKA pelycosaurs)

Lost its scales, and teeth begin to become specialized.

c. 295 mya



First therapsid.  Could breathe and eat at the same time.

c. 270 mya



May have been warm-blooded.

c. 265 mya



Jaws changed, freeing up bones to eventually form middle ear.

c. 260 mya



More mammalian traits than reptilian.

c. 230 mya



Nursed young, had one replacement of teeth.

c. 225 mya



Cranial features suggest developing mammalian brain.  Mammalian brains have the first and only cerebral cortex.

c. 225 mya



First placental mammal.

c. 160 mya

Placental mammals


Tree-dwelling ancestor of rodents and primates.

c. 95-90 mya



Direct ancestor of primates.

c. 88-86 mya



Primates have unique features, such as forward-looking eyes and opposable digits, which are specializations for tree-dwelling.

c. 80 mya


Simple-nosed primates

More encephalized, lost ability to produce vitamin C. 

c. 63 mya

Simple-nosed primates

Old World monkeys

Called “higher primates,” and split from New World monkeys.

c. 35 mya

Old World monkeys


Apes lost tails, became more encephalized and intelligent, and have tricolor vision.

c. 34.5-29 mya


Great apes

Male-dominated, most intelligent primates.

c. 14 mya

Great apes

African great apes

Evolved in African isolation.  Ground-dwelling by day.

c. 12 mya

African great apes

Chimpanzee and human line

Gorillas split from line.

c. 10-7 mya

Chimpanzee and human line

Human line

Chimpanzee and human line split.

c. 5-7 mya

Human line


Possible direct human ancestor.  Smaller canines probably meant reduced male conflict.

c. 4.4 mya

Human line


Possible direct human ancestor.  Walked upright.

c. 4.1 mya

Gracile australopithecines

Homo habilis

First member of genus Homo.

c. 2.3 mya

Homo habilis

Homo erectus

First Homo species to migrate widely beyond Africa.  Used fire.  Inventors of Acheulean stone tool technology.  The first hunter-gatherers.

c. 2.0-1.8 mya

Homo erectus

Homo heidelbergensis

May have been first humans to bury their dead.

c. 1.3 mya-600 kya

Homo heidelbergensis

Homo sapiens

First anatomically modern humans.

c. 200 kya

Homo sapiens

Behaviorally modern humans

Became founder population of today’s humanity.  Replaced/displaced all other humans.

c. 60-50 kya

A table like the above one is here

Human Event Timeline Until Europe Began Conquering Humanity

Event

Approximate Time

Likely or Known Location

Global Human Population

First stone tool made

c. 3.4-3.3 mya

East Africa


First control of fire

c. 2.0-1.0 mya

East Africa


Appearance of Homo erectus

c. 2.0-1.8 mya

East Africa


First migration from Africa

c. 2.0-1.9 mya

Across Asia, then Europe by 1.5 mya


First Mode 2 (Acheulean) stone tools made

c. 1.7 mya

East Africa


Appearance of Homo heidelbergensis

c. 1.3 mya-600 kya

Africa, and soon migrated to Eurasian vicinity


Appearance of stone-tipped weapons

c. 500 kya

South Africa


Neanderthal descent from Homo heidelbergensis

c. 500 kya

Europe and West Asia


Appearance of thrown weapons

c. 400 kya



Neanderthal invention of Mode 3 (Mousterian) tools

c. 300 kya



Appearance of Homo sapiens

c. 200 kya

East Africa


First heat-treated stone tools

c. 170 kya

South Africa – first seashore human community yet discovered


First bedding and complex tool-making processes

c. 75 kya

South Africa


First needle, and perhaps the first arrowheads

c. 60 kya

South Africa


Behaviorally modern humans appear and a group of about 300 leave Africa and colonize the rest of Earth

c. 60-50 kya

East Africa

c. 5,000

Humans reach Australia, and megafauna quickly go extinct

c. 48-46 kya

Australia, via boat


Humans begin invading Europe

c. 45-40 kya

Via southeast Europe


First cave paintings made

c. 40 kya



First fisherman appears

c. 40 kya



Dog domesticated

c. 33-15 kya

East-central Asia


Mode 4 (Aurignacian) stone tools invented

c. 30 kya

Europe and West Asia


Humans begin hunting mammoths

c. 29 kya

Eastern Europe


Neanderthals go extinct

c. 30-27 kya

Southern Europe is their last refuge


First known inter-human violent conflict

c. 25 kya



Pottery invented

c. 20 kya



Mode 5 (microlith) tools invented

c. 17 kya



Humans reach the Americas, and megafauna quickly go extinct on both continents

c. 15-11 kya

Via Siberia-Alaska (15 kya by boat, 11 kya by land)


Pig domesticated

c. 15 kya

Tigris watershed


Nuts first made into human staple

c. 13.5 kya

The Levant


First sedentary village established

c. 13.5 kya

Euphrates watershed; became first agricultural settlement about 11 kya.


First known mass slaughter of humans

c. 13 kya



Slavery “invented”

c. 11 kya

Wherever sedentary populations appeared


Humans reach Mediterranean islands and megafauna quickly go extinct

c. 11-9 kya

Mediterranean periphery


Blond hair appears

c. 11 kya

Northern Europe


Blue eyes appear

c. 10-6 kya

Baltic states region


Cattle domesticated

c. 10.5 kya

Near Anatolia


Goat domesticated

c. 10 kya

Today’s Iran

c. 5 million

Agriculture begins in Americas

c. 10-8 kya



First city-sized settlement

c. 9.5 kya



Agriculture begins in China

c. 9-8 kya



Hook-and-line fishing invented

c. 8 kya

Eurasia, Western Hemisphere


Plow invented

c. 7 kya

Fertile Crescent


First city established

c. 5400 BCE



First metal smelted: copper

c. 5000 BCE



Sailboat invented

c. 5000 BCE



Writing invented

c. 5000 BCE

Eastern/Southern Europe


Humans begin populating Caribbean islands, and megafauna quickly go extinct

c. 4500 BCE

Caribbean periphery


Mass warfare begins

c. 4000 BCE


c. 7 million

White skin appears

c. 4000 BCE

Northern Europe


Horse domesticated

c. 4000 BCE

Steppe region north of Black Sea


Humans arrive at Saint Paul Island, and isolated dwarf mammoths quickly go extinct

c. 3800 BCE

Saint Paul Island


Wheel invented

c. 3500 BCE

Mesopotamia or Europe


Bronze invented

c. 3300 BCE

Fertile Crescent


Harappan civilization appears

c. 3300 BCE

Indus River Valley


Egyptian civilization appears

c. 3100 BCE

Nile River Valley


Rice paddy system invented

c. 3000 BCE



Camel first domesticated

c. 3000 BCE

East Africa or Arabian Peninsula


First literate civilization

c. 3000 BCE

Sumer, in Mesopotamia

c. 15 million

Polynesian expansion begins

c. 3000-1000 BCE



Construction of necropolis at Giza

c. 2570-to-2470 BCE

Nile River Valley


Humans arrive at Wrangel Island, and the last mammoths on Earth quickly go extinct

c. 2500-2000 BCE

Wrangel Island


Egypt’s Old Kingdom ends

c. 2200 BCE

Nile River Valley


First civilization becomes depopulated

c. 2000 BCE

Mesopotamia, and environmental refugees disperse.  Intense deforestation of the region from Morocco to Afghanistan commences.  Today, only about 10% of that forest remains, and much has turned to desert.

c. 27 million

Bronze Age civilizations rise and collapse

c. 2700-to-1150 BCE

Mediterranean and periphery, including Egypt


Harappan civilization collapses

c. 1800-to-1700 BCE

Indus River Valley


Olmec civilization appears

c. 1600-1500 BCE


c. 38 million

Egyptian civilization at its height

c. 1350 BCE

Nile River Valley


First iron age begins

c. 1300 BCE

Anatolia, Balkans, or Caucasus


Trojan War fought

c. 1200 BCE

Mediterranean shore of Anatolia


Peak influence of Phoenician civilization

c. 1200-to-800 BCE

Eastern Mediterranean, Levant


Bantu Expansion Begins

c. 1000 BCE

Equatorial Africa

c. 50 million

Madagascar discovered, and megafauna quickly go extinct

c. 1000 BCE

Madagascar, via Africa


Rome founded

c. 750 BCE

Italian Peninsula


Assyria destroys Kingdom of Israel

c. 722 BCE

The Levant


Greece begins to recover from collapse of Mycenaean civilization

c. 700 BCE



Gautama Buddha born

c. 560-480 BCE

Today’s Nepal


Athens enters its classic phase

508 BCE



First Mesoamerican state appears

c. 500 BCE



Victory in 50-year-war with Persia marks height of classic Greek civilization

449 BCE



War with Sparta, and devastating epidemic, marks decline of Athens

431-to-404 BCE



Alexander the Great conquers numerous civilizations with a military prowess unsurpassed until industrialized warfare

336-to-323 BCE

Eastern Mediterranean to India


Watermill invented, probably by Greek engineers

c. 300-250 BCE



Rome begins first war with Carthage

264 BCE

Mediterranean periphery


Paper invented

c. 200 BCE



Rome destroys Carthage and Corinth, enslaving the survivors

146 BCE

Northern Africa and Greece


Roman civil wars begin that end the republic

c. 133 BCE

Mediterranean periphery


Defeat of Mark Antony and Cleopatra mark end of Roman republic and beginning of Roman empire

31 BCE

Mediterranean near Greece


Jesus born

c. 7-4 BCE

Today’s Israel

c. 170 million

Rome invades Great Britain

43 CE (all subsequent dates in this table are CE)

Island of Great Britain


Windmill and steam engine invented

c. 50

Roman Egypt


Rome defeats Second Jewish revolt against Roman rule


Today’s Israel


Moche culture appears

c. 100



Antonine plague ravages Roman Empire, kills two emperors, and marks end of Peace of Rome


Mediterranean periphery


Plague of Cyprian scourges Rome


Mediterranean periphery


Polynesians discover Hawaii

c. 300-800

Hawaiian islands


Christianity becomes Rome’s state religion


Mediterranean periphery


Roman imperial capital moved to Constantinople




Horse collar invented

5th century



Rome falls to Germanic tribes


Italian Peninsula


Teotihuacan declines from drought

c. 535

Valley of Mexico


Plague of Justinian kills up to half of Europe


Mediterranean periphery

c. 200 million

Muhammad born

c. 570

Levant or Arabian Peninsula


Cahokia settled

c. 600

North America, on Mississippi River


Arabs begin enslaving Africans

c. 650

African periphery, other than equatorial West Africa


Islamic Moors invade Iberian Peninsula


Iberian Peninsula


Mayan civilization collapses

c. 750-to-950



Viking expansion

c. 787-to-early-1000s

Northern Europe, North Atlantic, North America, Eastern Europe


Medieval Warm Period begins

c. 800



European watermills begin great proliferation

c. 1000

Western and northern Europe


Chinese horse collar used in Europe

c. 1000


c. 400 million

Compass used for navigation

c. 1040



England conquered from France, and peasantry begins dispossession




Christian conquest of Toledo results in Greek teachings being reintroduced into Europe


Iberian Peninsula


First Crusade begins


Europe to Levant


Angkor Wat completed

c. 1150


Fourth Crusade sacks “ally” Constantinople




Albigensian Crusade begins


Southern France


Rise and fall of Mongol empire


China to Europe


Mexica people arrive in Valley of Mexico, later known as Aztecs

c. 1248



Medieval Warm Period ends

c. 1250



Maoris discover New Zealand and drive megafauna to extinction in about a century, maybe less

c. 1250-1300

New Zealand


Queen Eleanor driven from Nottingham by cloud of coal smoke




Series of European famines mark prelude to Little Ice Age




England and France begin more than 100 years of warfare


England and France


Black Death sweeps Old World

c. 1338-1350



Renaissance begins, rise of humanism in Europe

Late 1300s

Northern Italian Peninsula


Cahokia abandoned, probably due to environmental overtaxation, Mississippian civilization begins its decline

c. 1400

North America, on Mississippi River


China mounts naval expeditions in Indian Ocean and in Pacific Ocean near Southeast Asia


Periphery of Indian Ocean and Southeast Asia


Portugal begins sailing the Atlantic Ocean


Atlantic Ocean


Aztecs form the Triple Alliance that dominates the Valley of Mexico




Portugal initiates new era of slavery with captured Africans


Iberia and West Africa


Incan expansion begins




Printing press invented

c. 1439



Ottoman conquest of Constantinople




Portuguese naval expedition crosses the southern tip of Africa. 


South Africa


Columbus stumbles into Western Hemisphere, and European conquest of humanity begins.


Bahaman and Caribbean islands

c. 500 million


Human Event Timeline Since Europe Began Conquering Humanity

Event

Date

Likely or Known Location

Global Human Population

Columbus returns to Caribbean with invasion force


Island of Española


First gold strike on Española, initiating century-long quest for gold in Western Hemisphere.


Island of Española


Portuguese Vasco da Gama expedition returns after expedition reaches India by sailing around Africa


African and South Asian periphery


Portugal launches military expedition to conquer spice trade


African and South Asian periphery


Martin Luther begins the Reformation




Spanish conquest of Aztecs provides greatest proselytizing opportunity ever for the Catholic Church, the same year that it condemns Martin Luther




Magellan expedition is first to circumnavigate Earth




Spain invades Incan Empire




Henry VIII kicks Catholic Church out of England




English ironworks established for the first time since the Roman invasion


Island of Great Britain


First works of modern science published




Michael Servetus burned at the stake for his “heresies” in Protestant Geneva




The Spanish crown goes bankrupt, in the first of several bankruptcies that mark its imperial decline




The Inquisition begins banning “heretical” books




French Wars of Religion begin




Spanish establish permanent presence in Philippines


Philippines Islands


Dutch revolt against Spanish rule begins




Portuguese nobility, including its king, annihilated by Moors when they invade north Africa


North Africa


Francis Drake returns from pirate expedition that circumnavigates Earth, and becomes England’s richest private citizen




Spanish armada destroyed engaging the English and Dutch


England’s periphery


Giordano Bruno burned at the stake for his heresies




English East India Company founded




Dutch East India Company founded




English make first visit to New England, and note the prodigious forests that could be used for sailing ship masts 


New England


King James I campaigns against smoking tobacco




English establish Ulster Plantation


Today's Northern Ireland


English establish Jamestown


Today’s Virginia


French establish Montreal


Today’s Quebec


Dutch establish Jakarta


Today’s Indonesia


English establish Plymouth


Today’s Massachusetts


Rembrandt van Rijn opens his first studio

c. 1624

The Netherlands


Dutch establish Fort Amsterdam


Manhattan Island


Galileo Galilei forced to recant his scientific findings by the Inquisition




English civil wars begin




The Maunder Minimum marks the heart of the Little Ice Age

c. 1645 to 1715



Thirty Years’ War ends




Western Hemisphere’s population about nine million, down from 30-100 million in 1491, for history’s greatest demographic catastrophe


Western Hemisphere

c. 500 million

English and Dutch begin series of wars




Dutch establish Cape Town


South Africa


Isaac Newton invents calculus




Antonio Stradivari begins making violins




War between France and Netherlands ends, marking the decline of Dutch power




Scotland formally unites with England to become Great Britain


Island of Great Britain

c. 600 million

Abraham Darby founds first successful iron-smelting operation based on coal




Thomas Newcomen builds first commercial steam engine




Voltaire imprisoned for his satirical writings




Isaac Newton loses life’s fortune speculating in the slave trade




Roller Spinning machine for cotton patented, soon followed by many other models




Abolition movements begin in Europe

c. 1750



Great Britain wins first global war, defeating France


Europe, North America, Asia


Great Britain begins conquering India, beginning with Bengal




First British-induced famine hits Bengal




James Cook “discovers” Australia



c. 800 million

James Cook nearly reaches Antarctica, turned back by ice




James Watt installs first commercial application of his steam engine




Adam Smith publishes first work of classical political economy




French-assisted American Revolution begins


Eastern North America


Antoine Lavoisier falsifies phlogiston theory of combustion




James Cook “discovers” Hawaii


Hawaiian islands


George Washington crafts plan to steal North America from its natives.


Eastern North America


First steamboat built




French Revolution begins




Mozart dies, marking the beginning of the end of Classical Period in music




Cotton gin patented




Great Britain unites with Ireland to become the United Kingdom ("UK")



First steam powered railroad built



c. 1 billion (estimated to have happened between 1800 and 1810)

Napoleon defeated at Waterloo


Today’s Belgium


First photograph made




Sadi Carnot publishes first work on thermodynamics




The USA steals more than half of Mexico


Western North America


The UK invades China under principles of “free trade”; first use of steam-driven naval ships in warfare




Charles Dickens publishes A Christmas Carol




Ignaz Semmelweis pioneers sanitary medical practices




American whaling peaks


Global ocean


Karl Marx publishes his Communist Manifesto




California Gold Rush begins




Herman Melville publishes Moby-Dick




The USA invades Japan




First industrial war begins




Darwin publishes Origin of Species




First commercial oil well drilled in the West




The USA’s Civil War begins




John Rockefeller enters oil industry




The USA’s transcontinental railroad is completed




John Rockefeller’s empire controls 95% of the USA’s oil refining, and  Rockefeller soon becomes history’s richest human




Thomas Edison publicly demonstrates incandescent lighting


Menlo Park, New Jersey


Final large massacre of American Indians


South Dakota


Vincent van Gogh dies, marking the waning of the post-impressionism era




Nikola Tesla’s alternating current technology wins “war” with Edison’s direct current




Americans overthrow Hawaiian monarchy




The USA steals last remaining shreds of Spain’s empire


Caribbean, Philippines


Wright brothers first fly


North Carolina


Ford Motor Company established




Panama gains “independence” via robber baron swindling of the USA’s government




Tesla loses funding for his free energy tower




Albert Einstein publishes first paper on relativity




Mark Twain publishes King Leopold’s Soliloquy to protest the “philanthropic” genocide in the Congo




Method developed to artificially fix nitrogen




Greatest international balance of payment difference is between the UK and India


UK and India


Winston Churchill begins converting the British Navy from coal to oil




Income tax amendment and Federal Reserve Act passed




World War I begins




Company controlled by notable “philanthropist” John Rockefeller uses machine guns on striking coal miners




Einstein publishes general theory of relativity




Russian Revolution




World War I ends, and oil-rich Ottoman Empire is dismembered by imperial nations




First confirmation of general theory of relativity


South America and Africa


Modern quantum theory invented



c. 2 billion (reached in 1927)

Hitler publishes Mein Kampf and lauds Henry Ford for his anti-Jewish publications




Public relations campaign to addict American women to tobacco begins


New York


Great Depression begins with stock market crash


USA, then the world


Fluorine ion discovered as cause of tooth mottling




Hitler comes to power




Attempted White House coup




The American Medical Association helps provide “scientific” evidence to promote tobacco smoking




World War II begins




World War II ends with nuclear weapons dropped on cities




Post-war boom of unprecedented prosperity begins


USA, with the rebuilding West also benefitting


Communist Revolution begins




Roswell UFO incident




National Security Act passed, CIA founded




Public relations campaign begins for putting fluoride ion in water supply as tooth “medicine.”




Transistor invented




The CIA begins overthrowing elected governments on behalf of corporate interests




The American Medical Association stops promoting tobacco smoking in its journal




Sputnik launch begins space race


Soviet Union


Revolution overthrows American-friendly dictatorship



c. 3 billion (reached in 1960)

World War III narrowly averted


Cuba, Soviet Union, USA


John Kennedy murdered




The USA invades Southeast Asia


Southeast Asia


Apollo 11 lands on the moon


The Moon


The USA’s peak oil production reached




West’s first oil crisis marks end of post-war boom; American energy consumption and wages peak and declined afterward


Earth, USA

c. 4 billion (reached in 1974)

The USA lures the Soviet Union into invading Afghanistan




Three Mile Island nuclear accident




The American Medical Association’s retirement fund is discovered to have more than $1 million invested in tobacco farms




Revolution overthrows the USA’s puppet dictator




The American Medical Association's board members are discovered to own tobacco farms




Chernobyl nuclear disaster


Soviet Union


The USA uses the threat of trade sanctions to open Asian markets to tobacco companies, primarily to addict their women and children, using familiar “free market” principles


Taiwan, South Korea, Japan, and Thailand

c. 5 billion (reached in 1987)

Microsoft makes its initial public stock offering, with Bill Gates soon becoming Earth’s richest human




Soviet Bloc begins fragmenting, Berlin Wall falls


Eastern Europe, Berlin


The USA attacks Iraq to begin invasion of the oil-rich Middle East


Iraq, Kuwait


The Soviet Union collapses


Soviet Union


Internet revolution begins

c. 1996

Industrialized nations

c. 6 billion (reached in 1999)

Terror attacks on September 11




The USA invades Afghanistan




The USA invades Iraq, with imperial nation assistance




Peak oil production reached

c. 2006



Financial panic


Industrialized nations


Gulf oil spill


Gulf of Mexico


Fukushima nuclear disaster



c. 7 billion

Humanity’s Epochal Events

Energy Epoch (see data derived here)

Primary Energy Sources

Approximate time when pristine instance of event began

Input – multiple of dietary calories

Energy Efficiency

Surplus energy produced

Societal attributes

Environmental effects

1: Making stone tools/ controlling fire/ growing the human brain

Scavenged/processed (cooked?) food and wood

3.4 mya for stone tools, 2.0-1.0 mya for fire.

1 (see this discussion)



Hand-to-mouth, organized like chimpanzees or perhaps macaques; male dominated.

No more than any other animal, at least before fire was harnessed.

2: Super-predator/hunter-gatherer

Cooked hunted and gathered food, and wood

60-50 kya




Share the kills and gathering results; fight other bands in raids, especially as territories shrink.

Anthropogenic burning alters ecosystems; megafaunal extinctions.

3.1: Subsistence agricultural

Cooked crops and wood

11 kya




Village life, beginning of social hierarchies as economic redistribution becomes more complex; initially peaceful through chiefdom phase, but organized warfare between settlements develops as states begin to form.

Plants and animals domesticated; environments around villages and herds are transformed into human-useful biota.

3.2: Advanced agricultural

Cooked professionally raised crops and wood

6 kya




State formation, literacy, economic/social/political stratification: elites appear, mass slavery, pronounced subjugation of women, professions form, including soldiers, priests and craftsmen.

Plow agriculture disturbs soils, professional deforestation, irrigation, competing predators eliminated, urban environments formed, with commensurate large ecological footprints. 

4.1: Early industrial


1700 CE




Capitalist formation; pronounced subjugation of women ends and slavery is abolished.  Industrial working class appears, severed from the land.  Warfare becomes industrialized.  Transcontinental, capitalist-based empires form.

Heavy mining operations, increasing air and water pollution, increase in carbon dioxide content of atmosphere, disappearing forest and natural habitats in larger areas.

4.2: Advanced industrial  

Coal, oil, and electricity

1860 CE




Women liberated, capitalists dominate states, and global wars erupt as empires fight over controlling subject peoples and their resources.

Nature under siege.  Roads and expanding urban areas bring larger areas of nature under human control, with resultant destruction.  Conservation movements begin.

4.3: Industrial-technological

Oil, coal, and electricity, with nuclear power producing some of it

1950 CE




Racism, sexism, and other discriminatory ideologies largely overturned in imperial heartlands, but exploitation is exported to subject peoples.  Warfare to secure energy resources becomes more intense, but due to the threat of nuclear weaponry, warfare is not waged between industrialized nations, but against resource-rich but industrially poor nations.  As oil runs out, the standard of living in industrialized nations declines.

Nature largely banished from urban environments, humanity’s ecological footprint encompasses the entire planet, species extinctions accelerate at biosphere-threatening rates, and human-induced climate change becomes dramatic.  As oil runs out, increasingly marginal sources are exploited, with resultant accidents.

5: Free energy

Zero-point field


Virtually unlimited

Relatively unimportant

1,000 or 10,000 or 100,000 or more, reaching a "Type 1 Civilization"

End of scarcity-based ideologies.  End of hierarchical societies.  End of urban societies.  Race disappears.  With the end of scarcity comes the end of war.  Heaven on Earth – or do we blow Earth up?

Exploitation of nature no longer necessary to improve human standard of living.  No more destructive mining of materials or water tables, and the end of air and water pollution.  Nature reclaims ecosystems, ideally with human assistance. 


Humanity’s First Epochal Event(s?): Growing our Brains and Controlling Fire

Chapter summary:

When that likely human ancestor made the first stone tool, it was the culmination of a process of increasing encephalization and manipulative ability that probably began its ascent with the appearance of apes and accelerated when humanity’s ancestors became bipedal.  Studying great apes today and applying those findings to humanity’s ancestors is problematic, but there has probably not been significant evolution in great apes since they descended from the last common ancestor that they shared with humans, particularly chimpanzees.  About one mya, bonobos split from other chimpanzee populations and became a separate species, but for many years scientists did not realize it.  Another chimpanzee split about 1.5 mya created east and west chimp species that are virtually indistinguishable today.  It is widely considered very likely that the last common ancestor of chimps and humans looked like a chimp.[466]

Other than humans, rhesus macaques are Earth’s most widespread primates, and both species are generalists whose ability to adapt has been responsible for their success.  Rhesus macaques are significantly encephalized, about twice as much as dogs and cats, and nearly as much as chimpanzees.  Rhesus macaques have what is called Machiavellian social organization, in which everybody is continually vying for rank, and power is everything.  Those with rhesus power get the most and best food, the best and safest sleeping places, mating privileges, the nicest environments to live in, and endless grooming by subordinates, whom the dominants can beat and harass whenever they want, while those low in the hierarchies get the scraps and are usually the first to succumb to the vagaries of rhesus life, including predation.[467]  It is the same energy game that all species play.  But even the lowliest macaque will become patriotic cannon fodder if his society faces an external threat, as even a macaque knows that a miserable life is better than no life at all.  The violence inflicted seems economically optimized; within a society the violence is mostly harassment, but when rival societies first come in contact, the violence is often lethal, as the initially established dominance can last for lifetimes.  Within a society, killing a subordinate does not make economic sense, as that subordinate supports the hierarchy.  Potentates rely on slaves.  The human smile evolved from the teeth-baring display of monkeys that connotes fear or submission.[468]

For all of their seeming cunning and behaviors right out of The Prince, rhesus monkeys cannot pass the mirror test; they attack their images, as they see themselves as just another rival monkey.  Chimpanzees, on the other hand, pass the mirror test, and the threshold of sentience, whatever sentience really is, may not be far removed from the ability to pass the mirror test, or perhaps humanity has not yet achieved it.  Capuchin monkeys, considered the most intelligent New World monkeys, have socially based learning, in which the young watch and imitate their elders.  Different capuchin societies have different cultures and different tool-using behaviors reflected in different solutions to similar foraging problems.[469]  Capuchins, isolated from African and Asian monkeys for about 30 million years, have striking similarities to their Old World counterparts, with female-centric societies and lethal hierarchical politics.  As with chimpanzees and humans, ganging up on lone victims is the preferred method, which increases the chance of success and reduces the risk to the murderers.[470]  Unlike rhesus monkeys, for instance, capuchin males can help with infant rearing, but they will also kill infants that they did not father, as rhesus, chimpanzees, and gorillas also do (that behavior has been observed in 50 primate species).[471]  Those comparisons provide evidence that simian social organization results from the connection between simian biology and environment; their societies formed to solve the problems of feeding, safety, and reproduction.

Chimps and orangutans have distinct cultures and ways of transmitting knowledge, usually confined to observation.  They have regional variations in tool use, and orangutans can display startling intelligence in captivity that is not witnessed in the wild, which may be like country bumpkins moving to the city where they can develop their intellects or get a chance to use them.[472]  Chimps can negotiate, deceive, hunt in ranked groups, learn sign language, use more than one tool in a process, problem-solve, and engage in other human-like activities.  Developmentally, a chimp is ahead of a human until about age two, and chimps can also express empathy.[473]  Research has suggested that imitation (performing somebody else’s actions) and empathy (feeling what somebody else feels) are related neurologically.[474]  Humans, however, are far better than chimps in their social-cognitive skills, which bring in the "theory of mind": inferring what others are thinking.  This is suspected to be the key developmental trait that set humans apart from their cousins.[475]

Many observable common aspects of today’s simians probably reflect ancestral traits predating the evolutionary splits that led to humans.  A chimpanzee’s brain is about 360 cubic centimeters (“ccs”) in size, and that gracile australopithecine that probably made those early stone tools had a brain of about 450 ccs.  That brain growth reflected millions of years of evolution since the chimpanzee line split, at least a million years of bipedal existence, and hands adapted to manipulating tools.  The cognitive and manipulative abilities of the species that made early stone tools seem to have been significantly advanced over chimps.  Below is a comparison of the skulls of a modern human, an orangutan, a chimpanzee, and a macaque.  (Source: Wikimedia Commons)

The human brain weighs more than three times the orangutan's and chimpanzee's, and more than ten times the macaque's.  Beginning about 2.5 million years ago, around when the first stone tools were invented, the human line's jaws became weaker and jaw muscles were no longer attached to the braincase.[476]  Some scientists think that that change helped the human line's brain grow.

The rise of humans was dependent on numerous factors, but the most important may have been the ability to increase humanity’s collective knowledge.  If each invention during human history had to be continually reinvented from scratch, there would not be people today.  The cultural transmission of innovations was critical for growing humanity’s collective technology, skills, and intelligence.  Striking stones to fashion tools was new on Earth, and it was likely invented once, and then proliferated as others learned the skill.  The pattern of proliferation of stone tool culture in Africa supports that idea.

Those first stone tools are called pebble tools, and anthropologists have placed the protohumans who made them in the Oldowan culture (also called the Oldowan industry, or Mode 1 on the stone tool scale).  The rocks used for Oldowan tools were already nearly the needed shape; the tools were made by banging candidate rocks on a rock “anvil,” and the fractured rock’s sharp edge was the tool.  Those first stone tool makers were largely still the hunted, not hunters, and stone edges would have been like claws and teeth that would have made scavenging predator kills easy in a way that primates had never before experienced.  Modern researchers have used Oldowan tools to quickly butcher elephants.  Sawing a limb from a predator kill and stealing it would have been quick and easy.[477]  Stone tools also crushed bones to extract marrow, and would have made harvesting and processing plant foods far easier.[478]

Below are relics of the five stone tool cultures that scientists have discovered.  (Source for all images: Wikimedia Commons)

Scientists today think that above all else, the first stone tools began humanity’s Age of Meat.  Meat is a nutrient-dense food and is highly prized among wild chimpanzees that use it as a key social tool, and male chimps have used it as payment for sex.[479]  The human brain is more than three times the size of a chimpanzee’s, but recent research suggests that the human brain’s size is normal for its body size, and great ape brains seem relatively small because their bodies became relatively large, possibly due to sexual selection that resulted from vying for mates.[480]  Humans developed relatively larger brains and relatively smaller and weaker bodies, which was