The moon is not made of cheese, the earth is not flat, and lightning may strike the same place twice. We believe these claims to be true, yet it is unlikely that most readers have personally confirmed each of them. Because it would be nigh impossible for anyone to verify all they take as true, most individuals arrive at their worldview by following the beliefs of others (often “experts”). While there can be good reason to accept an idea based on its popularity, this consensus heuristic must be used with care. There must be a sufficient number of others who did arrive (and continue to arrive) at the same conclusion through independent verification and testing. When this condition is not met, the results can be catastrophic (recall the Challenger disaster). Instead of independent observers arriving at the same conclusion, we risk an information cascade. This failing goes by many names—argumentum ad populum, groupthink, the “bandwagon effect”—but its function is the same: increasing numbers of people will buy into an idea simply because many others already believe it.
Consensus, in and of itself, is not necessarily a bad thing. The more easily testable and verifiable a theory, the less debate we would expect. There is little disagreement, for example, about the sum of one plus one or the average distance of the earth from the sun. But as a question becomes more complex and less testable, we would expect an increasing level of disagreement and a lessening of the consensus—think: the existence of god, the best band since the Beatles, or the grand unified theory of physics. On such topics, independent minds can—and should—differ.
We can use a simple formula to express how an idea’s popularity correlates with its verifiability. Let us introduce the K/C ratio—the ratio of “knowability,” a broad term loosely encapsulating how possible it is to reduce uncertainty about an idea’s correctness, to “consensus,” a measure of the idea’s popularity and general acceptance. Topics that are easily knowable (K ~ 1) should have a high degree of consensus (C ~ 1), whereas those that are impossible to verify (K ~ 0) should have a low degree of consensus (C ~ 0). When the ratio deviates too far from the perfect ratio of 1, either from too much consensus or too little, there is a mispricing of knowledge. Indeed, at extreme deviations from the perfect ratio, each additional supporter of such a lopsided idea increasingly subtracts from its potential veracity. This occurs because ideas exist not simply at a single temporal point, but rather evolve over the sweep of time. At the upper reaches of consensus, there is less updating of views to account for new information—so much so that supporters of the status quo tend to suppress new facts and hypotheses. Government agencies deny funding to ‘sham’ scientists, tenure boards dissuade young researchers from pursuing ‘the wrong’ track, and the establishment quashes ‘heretical’ ideas. Excessive consensus (a skewed K/C ratio) inhibits the ability of an idea to evolve towards truth.
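The K/C ratio can be sketched in a few lines of code. This is a toy illustration only: the scores and the mispricing tolerance below are hypothetical values chosen for demonstration, not quantities the essay derives.

```python
# Toy sketch of the K/C "mispricing" heuristic described above.
# Knowability and consensus are both scored on a 0-to-1 scale;
# the 0.5 tolerance is an arbitrary illustrative choice.

def kc_ratio(knowability: float, consensus: float) -> float:
    """Ratio of knowability (0..1) to consensus (0..1)."""
    return knowability / consensus

def mispriced(knowability: float, consensus: float,
              tolerance: float = 0.5) -> bool:
    """Flag ideas whose K/C ratio strays from the perfect ratio of 1."""
    return abs(kc_ratio(knowability, consensus) - 1.0) > tolerance

# An easily testable claim with matching consensus: well priced.
print(mispriced(0.95, 0.95))  # False
# A hard-to-verify claim with near-unanimous consensus: mispriced.
print(mispriced(0.20, 0.97))  # True
```

The interesting cases are the off-diagonal ones: high consensus paired with low knowability (the essay’s worry) and low consensus paired with high knowability (a question that has been settled but not yet accepted).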
To see how this works in practice, we turn to the evergreen topic of climate change. Notwithstanding the underlying ecological threat of climate change itself, the debate about how to confront man-made global warming has spawned unprecedented financial, political, and social risks of its own. Entire industries face extinction as the world’s governments seek to impose trillions of dollars of taxes on carbon emissions. The New York Times’s Thomas Friedman approvingly writes that Australian politicians—not to mention public figures throughout the world—now risk “political suicide” if they deny climate change. But if carbon dioxide turns out not to be the bogeyman that climate scientists have made it out to be, tens of trillions will be wasted in unneeded remediation. Much of the world—billions of human beings—will endure a severely diminished quality of life with nothing to show for it. The growth trajectory of the world in the twenty-first century may well depend more on the “truth” of climate change ex ante than ex post.
With climate change, as in many areas of scientific complexity, we can (and do) use models to understand the world. But models have their problems. This is particularly true when dealing with complex, non-linear systems with a multitude of recursive feedback loops, in which small variations produce massive shifts in the long-term outcome. Pioneered by the mathematicians Edward Lorenz and Benoit Mandelbrot, chaos theory helped explain the intractability of certain problems. Readers of pop science will be familiar with the term “butterfly effect” to describe systems in which “the flap of a butterfly’s wings in Brazil set[s] off a tornado in Texas.” The earth’s climate is one such dynamic, chaotic system, and it is within this whirling, turbulent vortex of unpredictability that modern climate scientists must tread. And boldly have they stepped into the breach. The scope of agreement achieved by the world’s climate scientists is breathtaking. To a first approximation, around 97% agree that human activity, particularly carbon dioxide emissions, causes global warming. So impressed was the Norwegian Nobel Committee by the work of the Intergovernmental Panel on Climate Change and Al Gore “for their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change” that it awarded them the 2007 Nobel Peace Prize. So many great minds cannot possibly be wrong, right?
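Sensitive dependence on initial conditions is easy to demonstrate. The sketch below uses the logistic map, a standard one-line toy chaotic system (not a climate model), to show how two trajectories that begin a billionth apart quickly diverge by orders of magnitude more.

```python
# A minimal sketch of the "butterfly effect": in a chaotic map,
# a perturbation of 1e-9 in the starting point is amplified
# roughly exponentially until the trajectories decorrelate entirely.

def max_divergence(x0: float, eps: float = 1e-9,
                   r: float = 4.0, steps: int = 60) -> float:
    """Iterate the logistic map x -> r*x*(1-x) from two nearby starts
    and return the largest gap observed between the trajectories."""
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        worst = max(worst, abs(a - b))
    return worst

# The initial gap of 1e-9 typically grows to order 0.1-1 within
# a few dozen iterations.
print(max_divergence(0.3))
```

The same qualitative behavior, error growth that swamps the precision of any measurement, is what makes long-horizon prediction in chaotic systems so treacherous, whatever the system being modeled.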
Yet something nags us about this self-congratulatory consensus. Our intuition is that this narrow distribution of opinions yields a knowability to consensus ratio far removed from the perfect ratio of 1. To reach their conclusions, climate scientists have to (a) uncover the (historical) drivers of climate, (b) project the future path of these inputs and others that may arise, and (c) predict how recursive feedback loops interact over multi-decadal time horizons, all without being able to test their hypotheses against reality. When evaluating the causes of past climate shifts, for example, scientists cannot simply re-run history to test the impact of changing different variables. Similarly, although climate scientists can make testable hypotheses about the future, their short-term predictions have an embarrassing record (think post-Katrina predictions of a massive surge in US hurricanes or the failed attempts to forecast temperature changes for the 2000s), while the debate will be moot by the time we can test their long-term forecasts in the year 2100.
We would, therefore, expect this limit on empirical verifiability to birth widely divergent views on the path, causes, and consequences of earth’s future climate. In other arenas, only after a theory has been empirically verified has the scientific community coalesced around it. Even then, scientists continue to subject such theories to rigorous testing and debate. For example, consider the current state of theoretical physics: quantum physics, loop quantum gravity, string theory, super-symmetry, and M-theory, among others, all vie for acceptance. Albert Einstein’s general relativity itself did not begin to garner widespread support until four years after its publication, when Arthur Eddington verified its predictions during a 1919 solar eclipse. Even so, as illustrated by the rash of headlines in late 2011 announcing the (false) discovery of faster-than-light neutrinos, scientists continue to try to poke holes in Einstein’s theory.
Yet the expectation of a rich debate among scientists about climate change does not reconcile easily with the widely endorsed shibboleth that human activity will warm the globe dramatically and dangerously over the next one hundred years. As climate scientists are themselves fond of repeating, the vast majority have arrived at the exact same conclusions about both past warming and future trends. Any discussion that doubts the fundamental premises of climate change is dismissed by the mainstream media and climate scientists as pseudo-science conducted by quacks or ideologues. Thus, questions about observational biases in the location of temperature stations, changes in the earth’s albedo, the cooling effect of dust particles, shifting ocean cycles, fluctuating solar activity, correlation v. causation of historical warm periods and carbon dioxide, catastrophic model failure caused by chaotic interactions, and innumerable other theories—most of which are presumably wrong—are never properly mooted in the public debate.
In our view, the fact that so many scientists agree so closely about the earth’s warming is, itself, evidence of a lack of evidence for global warming. Does this mean that climate change is not happening? Not necessarily. But it does mean that we should be wary of the meretricious arguments mustered in its defense. When evaluating complex questions—from climate change to economic growth, physiology to financial markets—it is worse than naïve to judge the veracity of an idea merely from the strength of consensus. The condemnation of Galileo Galilei meant one man spent the rest of his life under house arrest. His ecclesiastical accusers at least acknowledged that a force greater than science drove their decision. The modern priests of climate change endanger the lives of billions as they wield their fallacy that consensus is truth.
D. RYAN BRUMBERG
 It is important to be specific about the exact question under evaluation and to align it with the respective levels of knowability and consensus. Take, for example, the Theory of Gravity. When the term “Theory of Gravity” refers to Newtonian physics, knowability is high because scientists can accurately test it (for non-quantum, non-relativistic situations). We therefore expect a high level of consensus, and observing this, assign a high expected veracity to Newtonian gravity. By contrast, when the term “Theory of Gravity” refers to the (hard to verify) attempts to reconcile general relativity with quantum mechanics, we assign lower knowability and expect less consensus. Surveying the theoretical physics landscape, we indeed find many clashing theories—implying independent, competing attempts by scientists to explain gravity. Over time, assuming knowability increases (larger particle accelerators), we would expect consensus to increase, and with it, overall expected veracity. Were the opposite true—a skewed K/C ratio—we would worry about groupthink and the repression of new theories in the future.
 We use the terms “climate change” and “global warming” interchangeably throughout the paper to refer broadly to the concept that humans, primarily through the emission of carbon dioxide, are causing dangerous warming of the planet.
 This is the percentage frequently bandied about by the media and referred to by climate scientists. The actual level may differ somewhat, but a casual glance through scientific and media reports will illustrate the near unanimity of mainstream consensus—at least for the high-level claim that humans are dangerously warming the globe through the emission of carbon dioxide (and other greenhouse gases). For a good example, see Wikipedia: http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change