NEGLIGENCE, NON-SCIENCE, AND CONSENSUS CLIMATOLOGY
By Patrick Frank
For the entire text, graphs and references: http://multi-science.atypon.com/doi/abs/10.1260/0958-305X.26.3.391
Palo Alto, CA, USA. Email: pfrank830@earthlink.net
ABSTRACT
The
purported consensus that human greenhouse gas emissions have causally dominated
the recent climate warming depends decisively upon three lines of evidence:
climate model projections, reconstructed paleo-temperatures, and the
instrumental surface air temperature record. However, CMIP5 climate model
simulations of global cloud fraction reveal theory-bias error. Propagation of
this cloud-forcing error uncovers a root-sum-square error (r.s.s.e.) uncertainty of 1σ ≈ ±15 C in centennially projected air
temperature. Causal attribution of warming is therefore impossible. Climate
models also fail to reproduce targeted climate observables. For their part,
consensus paleo-temperature reconstructions deploy an improper ‘correlation =
causation’ logic, suborn physical theory, and represent a descent into
pseudoscience. Finally, the published global averaged surface air temperature
record completely neglects systematic instrumental error. The average annual
systematic measurement uncertainty, 1σ = ±0.5 C, completely vitiates centennial climate warming at the 95% confidence
interval. The entire consensus position fails critical examination and
evidences pervasive analytical negligence.
Keywords: Climate, systematic error, GCM, proxy, air
temperature, pseudoscience
1. INTRODUCTION
The modern concern
about human-caused global warming dates approximately from the 1979 Charney
Report to the US National Research Council. [1] The Charney committee described
how carbon dioxide (CO2) and other greenhouse gas (GHG) emissions
may influence climate, but did not acknowledge the contemporary scientific
debate about the magnitude of any effect. [2-10] In 1989, the US Environmental
Protection Agency warned of myriad disasters to ostensibly follow CO2 emissions;
[11] a pessimism that has commandeered the modern consensus. [12, 13]
The consensus that human CO2 emissions
are dangerous rests upon three central elements of contemporary climatology:
the climate modeling that imputes physical causality into recent air
temperature trends, proxy reconstructions of paleo air temperatures, and the
instrumental record that provides the surface air temperatures.
Results from all
three have been combined to conclude that the rise in global averaged surface
air temperature (GASAT) since about 1880 is unprecedented, is dangerous, and is
caused by industrial GHG emissions. [12-16]
In this paper, climate models, proxy paleo-temperature
reconstructions, and the surface air temperature record are critically examined
in turn. They are each and all found to neglect physical error, or in the case
of consensus paleo-temperature reconstructions to neglect physics itself. The
normative certainties flourish on this neglect.
2. RESULTS AND DISCUSSION
2.1 General Circulation Models
On 23 June 1988 the US Senate Committee on Energy and Natural Resources hosted testimony on,
“The Greenhouse Effect and Climate Change.” Figure 1a shows a central element
of this testimony: three alternative GASAT projections extending to the year
2020, simulated using the Goddard Institute for Space Studies (GISS) climate
Model II. In scenario “A,” surface air temperature was driven by a future rate
of global CO2 emission that was increased beyond the 1988 rate, in
“B” the 1988 rate continued unabated, and in “C” the 1988 rate was drastically
curtailed. Since then, whether the GISS Model II scenario B correctly predicted
the post-1988 global air temperature trend has been a matter of discussion.
[17-20]
Figure 1a, as it was presented to Congress and as it
appeared in the original peer-reviewed paper, has no uncertainty bars. [21]
However, “even in high school physics, we
learn that an answer without “error bars” is no answer at all.” [22] The
missing part of the GISS Model II answer is described next.
Figure 1: a. (points), the GISS Model II projections of future global averaged surface air temperature anomalies for scenarios A, B, and C as presented in 1988 (see text). [21, 34] The lines were calculated using eq. 1 and the original forcings but without volcanic explosions (fCO2 = 0.42; F0 = 33.946 Wm-2). b. confidence intervals (CIs) obtained using eq. 1 to propagate the ±4 Wm-2 annual average CMIP5 tropospheric cloud forcing error through the same projected scenarios (see 2.1.3). [30]
2.1.1 Cloud error
It is very well known that climate models
only poorly simulate global cloud fraction, [23-27] among other observables.
[28] This simulation error is due to incorrect physical theory. [29, 30] Cloud
error due to theory-bias means the models incorrectly partition the amount and
distribution of energy in the atmosphere. This in turn means the air
temperature is modeled incorrectly. The incorrectly simulated cloud fraction of
state-of-the-art CMIP5 climate models produces an average annual theory-bias
error in tropospheric thermal energy flux of ±4 Wm-2, [27] a magnitude that has not materially diminished between 1999 and 2012. [27, 29-33]
2.1.2 Theory-bias cloud error is a continually refreshed initial-conditions error.
Climate
is projected through time in a step-wise fashion. Each modeled time-step
provides the initial conditions for the subsequent step. Because of theory-bias
error, each calculational step delivers incorrectly calculated climate
magnitudes to the subsequent step, so that every step initializes with
incorrect magnitudes. These incorrect magnitudes are then further extrapolated,
but again incorrectly. In a sequential calculation, calculational error builds
upon initial error in every step, and the uncertainty accumulates with each
step. [29, 30] However, no published model projection of terrestrial air
temperatures has ever discussed or included propagated error. [30]
2.1.3 The propagated uncertainty due to theory-bias cloud error.
The GASAT anomaly projections of general
circulation climate models (GCMs) can be accurately simulated using the linear
equation:
∆T = fCO2 × 33 K × [(F0 + Σi=1..n Fi) / F0] ,    (1)
where ∆T is the GASAT anomaly (K), fCO2 varies
among GCMs and is the fraction of greenhouse warming due to water-vapor
enhanced CO2 forcing, 33 K is
the net unperturbed terrestrial surface greenhouse temperature, F0 is the total GHG forcing
of the zeroth projection year, and Fi
is the annual change in GHG forcing in each of “n” projection years. [30, 35] The success of this equation shows
that climate models project air temperature as a linear extrapolation of GHG
forcing. [30] In a sequential linear calculation, the final uncertainty is the
root-sum-square of the step-wise errors. [36-38] In a linear air temperature
projection, the running total of uncertainty in the simulated GASAT due to
theory-bias error is then σn = ±√(Σi=1..n ei²), where ei is the error in the ith step, across a simulation of “n” steps. It is a standard of
physics that predictive reliability is evaluated by propagating error. Entering
the average ±4 Wm-2 of CMIP5 cloud forcing error into eq. 1 allows
calculation of confidence intervals (CI) for the GASAT projection of any
climate model.
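As an illustration only, the emulation in eq. 1 and the root-sum-square propagation of the ±4 Wm-2 error can be sketched in a few lines of Python. The values fCO2 = 0.42 and F0 = 33.946 Wm-2 are taken from the Figure 1 legend; the constant annual forcing increment of 0.036 Wm-2 is the post-1979 average cited below and is used here as a hypothetical 100-year scenario, not one of the original GISS scenarios.

import math

f_co2 = 0.42       # GCM-specific fraction of greenhouse warming (Figure 1 legend)
F0 = 33.946        # total GHG forcing of the zeroth projection year, W m^-2 (Figure 1 legend)
cloud_err = 4.0    # CMIP5 average annual tropospheric cloud-forcing error, +/- W m^-2 [27]
dF = 0.036         # illustrative constant annual GHG forcing increment, W m^-2 [39]

def anomaly(n_years):
    """Eq. 1 relative to the zeroth year: projected GASAT anomaly (K) after n_years."""
    return f_co2 * 33.0 * ((F0 + n_years * dF) / F0 - 1.0)

def sigma(n_years):
    """Root-sum-square propagation of the per-step cloud-forcing error (K)."""
    e_step = f_co2 * 33.0 * cloud_err / F0   # uncertainty contributed by a single annual step
    return math.sqrt(n_years * e_step ** 2)

for n in (1, 10, 50, 100):
    print(f"year {n:3d}:  anomaly = {anomaly(n):4.2f} K   1-sigma CI = +/-{sigma(n):4.1f} K")

On these illustrative inputs the projected centennial anomaly is of order 1.5 K, while the propagated 1σ uncertainty grows to roughly ±16 K, the behavior shown in Figure 1b.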
Figure 1b shows the CI from ±4 Wm-2 of thermal
flux error propagated through the 1988 GISS Model II global air temperature
projections (fCO2 =
0.42). The CI uncertainty increases much faster than the projected GASAT
because the ±4 Wm-2 of flux error is about 110 times larger than the 0.036 Wm-2 average annual increase in GHG forcing since 1979. [39]
Scenarios A, B, and C are completely submerged within the
strongly overlapping CIs. Therefore, they are not distinguishable. None of them
can be tested by comparison against any conceivable trend in global air
temperatures. Therefore, they are not falsifiable. As the projections are
neither predictive nor falsifiable, they are physically meaningless. Analogous
“error bars” will attend any CMIP5 projection. Applying the standard criterion
of physics, CMIP5 climate models are predictively unreliable.
2.1.4 Tests of climate model simulations.
Advanced GCMs express the physical theory of
climate. All meaning in science derives from a falsifiable theory. The 2007
“Summary for Policymakers,” of the UN Intergovernmental Panel on Climate Change
(the IPCC) says, “Most of the observed
increase in globally averaged temperatures since the mid-20th century is very likely due to the observed
increase in anthropogenic greenhouse gas concentrations.” (original
emphasis), where “very likely” means
more than 90% probable. [40] The IPCC continues, “There is considerable confidence that climate models provide credible
quantitative estimates of future climate change, particularly at continental
scales and above.” [13] As the assignment of causality for present and
future climate change rests entirely on the physical accuracy of climate model
simulations, then clearly the IPCC judges GCMs able to produce accurate and
quantitative estimates of future climate changes.
2.1.4.1 Perfect Model Tests.
In a perfect model
test, a GCM is used to project a reference climate, and then evaluated by its
ability to predict that very same climate. These conditions are most favorable
to the model because a GCM is a perfect model of its own reference climate.
Typically, the predictive test starts with small offsets of the original
initial conditions to mimic imperfectly known input observables.
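The logic of a perfect model test can be illustrated with a toy example. The sketch below is not the HadCM3 procedure of [41]; it uses a grid of chaotic logistic maps as a stand-in “model”, re-predicts the model’s own reference run from slightly perturbed initial states, and tracks the pattern correlation between prediction and reference, which decays toward zero in the same manner as Figure 2.

import math, random

random.seed(0)
n_grid, n_years, steps_per_year = 500, 10, 12
r = [3.7 + 0.2 * random.random() for _ in range(n_grid)]     # per-gridpoint map parameter
x_ref = [random.random() for _ in range(n_grid)]             # reference initial state
x_prd = [min(max(x + random.gauss(0.0, 1e-4), 0.0), 1.0) for x in x_ref]  # perturbed initial state

def step(state):
    """One time-step of the toy 'model' at every grid point."""
    return [ri * xi * (1.0 - xi) for xi, ri in zip(state, r)]

def corr(a, b):
    """Pearson pattern correlation between two grid-point fields."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (sa * sb)

for year in range(1, n_years + 1):
    for _ in range(steps_per_year):
        x_ref, x_prd = step(x_ref), step(x_prd)
    print(f"year {year:2d}: prediction-reference correlation = {corr(x_ref, x_prd):+.2f}")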
The CMIP3-level HadCM3 was
subjected to a perfect model test in 2002. [41] Figure 2 shows one outcome: the
correlation between the predicted and reference air temperatures dropped to
~0.25 after one year. By projection year eight, the correlation was zero.
The dashed line in Figure 2 shows the air temperature predictions of a random persistence model, which compared favorably with the HadCM3 through the nine projection years.
Figure 2: Full line, the HadCM3 perfect model test, re-predicting its own global mean surface air temperature across nine years. Dashed line, the random persistence model. The data are from Figure 6a in [41].
The HadCM3 was also unsuccessful
predicting its own global average precipitation, and its own El Niños. Once
again, the random persistence model did as well. Nevertheless, the HadCM3 was
subsequently employed in the 2007 IPCC 4AR to predict climate futures.
In 2000, a similar perfect model test proved that the
Canadian CCCma climate model was unable to predict its own global air
temperatures. [42] Both studies concluded that even with perfect climate
models, the ability to predict global climate would be non-existent. To this
writing, GCMs have invariably failed perfect model reliability tests.
2.1.4.1.1 The general significance of a failed perfect model test.
This insight into
the impact of initial-value errors is of general significance because in a
step-wise climate projection the magnitudes of each prior climate state provide
the initial conditions of the subsequent state. The theory-bias errors of
climate models mean that prior states are incorrectly represented. Therefore,
subsequent states will initialize with incorrect physical variables. Theory
bias ensures the erroneous variables will be again projected incorrectly. That
is, when theory-bias error is present an initial conditions error of unknown
magnitude is propagated into and through every single simulation time-step. The
theory-bias initial condition error can never be removed using model
equilibration or spin-up, because initial condition errors are sequentially
produced and propagated within the model itself at every single simulation
step.
2.1.4.1.2 Perfect Model Tests in the IPCC 4AR.
Chapters 8 and 9
of the 4AR discuss the physics of climate and evaluate whether climate models
are reliable enough to attribute recent air temperature warming to human GHG
emissions. [13, 43] Chapters 10 and 11 make model-based predictions about the
effects of GHG emissions on future climate.
IPCC 4AR chapters 8 and 9 should
have acknowledged the failed HadCM3 and CCCma perfect model tests. However,
they are nowhere mentioned. The author of the 2002 HadCM3 perfect model study
is cited 15 times in AR4 chapters 8-11, but the perfect model paper itself is
never cited. The reported failure of the CCCma model is also never cited, even
though Boer’s other work is extensively referenced. The very 4AR chapters that purported to evaluate climate models were completely silent about failed perfect model tests.
In 2008, twenty-one CMIP3-level GCMs were subjected to
perfect model tests that included, “8850
years of simulated data from the control runs of 21 coupled climate models.”
[44] These were the very same climate models the IPCC claimed could produce, “credible quantitative estimates of future
climate change.” As perfect models, the CMIP3 GCMs proved able to predict
global air temperatures for five years, but not for twenty-five years.
Precipitation was immediately unpredictable.
2.1.4.2 Real World Tests.
The 2007 4AR presented CMIP3-level hindcasts
of 20th century temperatures and precipitation simulated at the
points of a global grid. In a test comparison with known real-world
observables, the 20th century hindcasts of six GCMs were evaluated
against the known 20th century climate at 58 locations scattered
across the globe. [45, 46]
From the four grid-points surrounding each of the 58
locales, a linear combination of the hindcasted trends in temperature or
precipitation of the 20th century was fitted to the observational
record. For the temperature trends at the four surrounding grid points i, j, k, m, the hindcasted T_i^GCM, T_j^GCM, T_k^GCM, T_m^GCM were fitted to the observed local temperature trend as,
T_l^obs'd = a·T_i^GCM + b·T_j^GCM + c·T_k^GCM + d·T_m^GCM ,    (2)
where a,
b, c, and d are fitted
coefficients and always sum to unity. Eq. 2 describes an iteratively adjusted
fit calculated to make a closest possible match to the observed local
temperature. Each fit-reconstructed local temperature trend is then,
T_l^fitted = a_i·T_i^GCM + b_j·T_j^GCM + c_k·T_k^GCM + d_m·T_m^GCM ,    (3)
where a_i, b_j, c_k, d_m represent the final best-fit coefficients. The 20th century trends of T_l^fitted and T_l^obs'd were then compared. The methodology is valid because measured air temperatures correlate R ≥ 0.5 across 1200 km. [47-49] If the climate models were reliable, then the outcome should be T_l^fitted ≈ T_l^obs'd.
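The fitting protocol of eqs. 2 and 3 can be sketched as follows. The series here are synthetic stand-ins, not the data of [45, 46]; the point is only the mechanics: a constrained least-squares fit of four grid-point hindcasts to one observed local series, with the coefficients summing to unity.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
T_obs = 0.3 * np.sin((years - 1900) / 12.0) + 0.1 * rng.standard_normal(years.size)  # synthetic observed series
T_gcm = np.vstack([0.005 * (years - 1900) + 0.1 * rng.standard_normal(years.size)
                   for _ in range(4)])               # synthetic hindcasts at grid points i, j, k, m

# Impose a + b + c + d = 1 by substituting d = 1 - a - b - c, then solve ordinary least squares.
X = (T_gcm[:3] - T_gcm[3]).T                         # predictors expressed relative to grid point m
y = T_obs - T_gcm[3]
abc, *_ = np.linalg.lstsq(X, y, rcond=None)
coeffs = np.append(abc, 1.0 - abc.sum())             # (a, b, c, d), eq. 2

T_fitted = coeffs @ T_gcm                            # eq. 3: the fit-reconstructed local series
print("coefficients:", np.round(coeffs, 3), " sum:", round(float(coeffs.sum()), 6))
print("correlation of T_fitted with T_obs:", round(float(np.corrcoef(T_fitted, T_obs)[0, 1]), 3))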
Figure 3 shows the results for
Vancouver, British Columbia, which warmed by a full degree in the 20 years
after 1900, then cooled by 2 degrees for 40 years, and finally warmed again to
finish almost where it started. None of the six tested climate models
reproduced this variability. Further, although each model used the same
physical theory to represent the same climate, the projected trends spread
across nearly 2.5 C.
The usual strategy of climate
prediction is to represent temperatures as anomalies. The assumption behind
this strategy is that climate model error is constant. [50-52] Thus,
constructing anomalies should subtract away the errors and uncover a physically
reliable temperature change.
The right panel of Figure 3 shows
the anomaly trends for Vancouver, BC. Once again, the observed variability of
Vancouver’s climate is not reproduced. Low-error anomalies should show similar
trends. However, the simulation anomalies disagree by nearly a full degree. The
inset shows that the HadCM3 anomaly wanders about without any particular
correspondence to the observed temperature, as expected from the failed 2002
perfect model test.
Although Vancouver cooled at an average rate of -0.05 C per
decade during the 20th century, all the climate models predicted
increasing 20th century temperatures (Table 1). These results are
typical of the entire study, which found that the 20th century CMIP3
model hindcasts were inaccurate in every region tested.
Figure 3: Left panel: (—), the 20th century temperature trend observed at Vancouver, British Columbia. Dashed and dotted lines: the Vancouver hindcasts by five CMIP3 climate models (see labels). Right panel: the 20th century observed and predicted anomaly trends for Vancouver (1951-1980 mean). Right panel inset: HadCM3 hindcast for the years 1950-2000. (11-year smoothing throughout.)
However, the IPCC claimed that climate models produce more
quantitatively reliable results “at
continental scales and above.” [13] This claim was also tested by extending
the CMIP3 comparison to the continental USA. [45, 53] The result was that the
hindcasts “[did] not correspond to
reality any better” on the continental scale than they did at the 58 local
scales. [54]
Table 1: Simulated and Observed 20th Century Trends for Vancouver, BC
The CIs of Figure 1b, the failed perfect
model tests (Figure 2), and the failed hindcasts (Figure 3) demonstrate that
advanced GCMs are neither predictive nor falsifiable, and are not reliable.
Their air temperature projections have no obvious physical meaning. [55] Any
attribution of the GASAT increase to human GHG emissions has been and remains
without any scientific warrant.
2.2 Proxy paleo-temperature reconstructions
Paleo-temperature reconstruction —
paleo-thermometry — estimates the temperature of past climates. As measurements
are not available from distant times, the recovery of ancient temperatures
requires proxies. Tree-ring series dominate air temperature proxies but proxy
series typically include other annually layered temperature-sensitive bio- and geo-structures.
2.2.1 The methodological basis of tree-ring paleo-thermometry.
Trees growing in a
sub-optimum thermal climate produce annual growth rings that are narrow and/or
of low density. Climates closer to the optimum growth temperature produce trees
with relatively thicker or denser annual rings. A relationship of temperature
with tree growth is apparent, [56-58] and in principle annual tree rings should
record significant transitions of local growth conditions, including those of
climate.
Candidate trees used to
reconstruct past temperatures are those growing in a cold climate at high
latitudes or high altitudes. The candidate trees are judged to be suffering
from ‘temperature limited’ growth, following a qualitative assessment of the
surrounding environment. [58-61] This qualitative judgment entrains an untested
assumption that temperature stress has continuously dominated the growth of the
chosen trees over their mature lifetime. This assumption is absolutely central
to the entire method and rests upon the standard argument that the “biological bases of tree growth are
essentially immutable.” [57]
However, it is known that the
genome of every tree confers highly mutable responses to stress. [62-64] This
mutability allows individual trees to survive the great variety of
environmental challenges, any of which may affect tree ring metrics. The entire
field of tree-ring paleo-thermometry is based on an insupportable claim of
immutability projected for centuries into the past to support qualitative judgments
taken in the present.
To be physically valid, a judgment
of temperature-limited growth must be based on a falsifiable physical theory of
tree-growth. Such a theory will specify the observables that are dependent upon
seasonal temperatures. In addition, the specific extraction of temperatures
from tree rings requires a physical relation, i.e., an equation that converts
tree ring metrics into degrees centigrade. However, no such theory is in
evidence. Nor is any such equation.
Failing those criteria,
semi-empirical physics might suffice. For example, controlled environment
growth experiments might establish that tree-ring isotope ratios, such as 13C/12C
or 18O/16O, are strongly correlated with contemporaneous
air temperature. Empirically established correlation equations might permit
extracting historical air temperatures from living and dead trees. However, a
specific and reliable empirical correlation between air temperatures and tree
ring isotope ratios does not yet exist. [65-68]
It is clear, therefore, that tree-ring proxy
paleo-temperatures are grounded in judgments that are invariably qualitative.
Thus the “temperature” in tree-ring paleothermometry has no quantitative
physical basis. It reduces to an assigned physical label, “Celsius,” that has
no physical meaning.
2.2.2 Applied proxy paleo-thermometry.
In any proxy
paleo-temperature study, standard statistical methods are applied to time-series
metrics obtained from tree rings, corals, speleothems, ice cores, or other physical
surrogates. [69-71] External temperature can impact the development of each
physical system, and each is therefore termed a temperature proxy.
However, no proxy is grounded in a
reliable physical theory. Even climatological δ18O,
the temperature proxy with the best grounding in physical theory, is confounded
by the unknown variability in the seasonal strengths and tracks of ancient
monsoons. [72, 73] Therefore, proxy series extending from the past into the
present are typically validated by statistical comparisons with their local
temperature record. [74] Local temperature records generally cover little more
than the most recent 130 years. This defines the record length over which any
proxy correlation can be tested. Proxy series that correlate with this 130-year
range are assumed to be reliable temperature indicators. Temperature indication
is then assumed to extend uniformly into the past, using the argument of
constant developmental forces.
Tree ring metrics from old dead
trees (snags) can be “wiggle-matched” in overlap regions with modern tree-ring
series and added in to produce a composite extending back centuries.
Wiggle-matching is also used to produce long-term series from other proxies. In
the absence of physical theory, and sometimes despite physical theory [75],
chosen proxy series are processed statistically, typically normalized to unit
standard deviation, and then combined, scaled into coincidence with the 20th
century temperature record, and finally awarded the label, Celsius.
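A schematic of the statistical processing just described can be sketched under the simplifying assumption of a basic composite-plus-scale procedure (the details vary among reconstructions): each proxy series is normalized to unit standard deviation, the normalized series are averaged, and the composite is rescaled to the mean and variance of the overlapping instrumental record. All series below are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(7)
n_years, n_proxies, n_overlap = 1000, 12, 130            # 130 "years" of instrumental overlap
common = 0.2 * rng.standard_normal(n_years)              # shared variability among the proxies
proxies = common + rng.standard_normal((n_proxies, n_years))
instrumental = 0.4 * rng.standard_normal(n_overlap) + 0.1

# 1. normalize each proxy series to zero mean and unit standard deviation
z = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
# 2. combine the normalized series into a composite
composite = z.mean(axis=0)
# 3. scale the composite into coincidence with the instrumental overlap period
overlap = composite[-n_overlap:]
scale = instrumental.std() / overlap.std()
offset = instrumental.mean() - scale * overlap.mean()
reconstruction_C = scale * composite + offset            # the series now carries the label "Celsius"

print("scale:", round(float(scale), 3), " offset:", round(float(offset), 3))

Nothing in these three steps introduces a physical relation between the proxy metric and temperature; the Celsius label is conferred entirely by the scaling in step 3.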
The assignment of Celsius takes its entire justification
from the prior judgments of temperature-limited development. Various
statistical correlations are demonstrated to encourage a grant of confidence
that a causal connection exists between proxies and recent local temperatures.
Statistics is thus substituted for physics, is used to assign causality, and is
made to pose material time series as temperature. In a classic of scientific
non-sequiturs, the entire field of consensus proxy paleo-thermometry has
decided that correlation equals causation, and also that correlation in the
present proves causation in the past. This is physics by fiat.
2.2.3 A promising extension of consensus paleo-science.
Perhaps an analogy
can demonstrate the scientific void that is a consensus proxy paleo-temperature
reconstruction. To this end, a hypothesis is proposed that is as qualitatively
plausible as an expert judgment of temperature limited formation, and that can
likewise be justified by the full rigor of consensus statistics. The hypothesis
that attains the statistical merit of a consensus proxy paleo-thermometric
reconstruction attains the same level of causal meaning.
Beginning with the constancy
assumption analogous to consensus proxy-thermometry: to first-order, atmospheric
CO2 should be constant in an unperturbed Holocene climate; an
assumption that can be referenced to and supported by published studies. [76,
77]
Following from the foregoing, excursions in an otherwise
constant Holocene atmospheric CO2 imply the intensity of human
inputs from agriculture, industry, and use of fire. [43, 78, 79] This
supposition can be rationalized into the distant past in a manner directly
analogous to the consensus extrapolations of temperature proxies into past
times. For example, paleo-atmospheric CO2 may be impacted by the
stunning growth of fire used in paleo-hunting [80, 81], by the historical trend
in paleo-slash-and-burn agriculture, [82] and by the documented increase in
paleo-ore smelting and lime calcining. [83-87] The spread of farming, [88] also
opened virgin paleo-soils to colonization by aerobic CO2-producing
paleo-bacteria. [89, 90]
It is plausible to deduce,
therefore, that perturbations in paleo-atmospheric CO2 may reflect
the history and intensity of human paleo-industriousness. [91] Current human
industriousness is reflected in Gross Domestic Product (GDP). If a correlation
is found to exist between modern CO2 and modern GDP, then paleo-CO2
can obviously be used to infer a paleo-GDP. This construct expresses the
full analytical rigor of consensus proxy paleo-thermometry, namely that proxy
correlations of today immutably extrapolate to the temperatures of yesterday.
Figure 4 displays the relationship
between the US Gross Domestic Product (US GDP) and the recent trend in
atmospheric CO2. The first and all-important supposition is clearly
verified: US GDP and atmospheric CO2 display a highly significant
correlation (0.995, P < 0.0001) over the years 1929 through 2012. Global GDP
also produced a very good correlation with CO2 (1913-2003; R =
0.997, P < 0.0001), thereby exhibiting its own statistical
paleo-thermometric-like power. However, only one 20th century world
GDP datum is available prior to 1950, [92] severely reducing the methodological
calibration and verification ranges.
The plausibility argument plus the
strong statistical association establishes as much causality between modern GDP
and modern atmospheric CO2 as there is between modern air
temperatures and modern proxy series.
The consensus methodological
authority that extends causality deep into the past is calibration and
verification of the proxy in the present. The standard approach, [69] is to
divide the measurement data into the ‘calibration range’ and the ‘verification
range.’
Under this protocol, 1929-1979 was
chosen as the US GDP calibration range. The proxy (atmospheric CO2)
was then regressed against the target (US GDP) over the calibration years,
producing the calibration line (Figure 4, inset a; US GDP = 0.171×[CO2]ppmv –
51.49). The correlation (R2=0.98, P < 0.0001) is well within the
halleluiah norm of consensus paleo-thermometry.
The calibration line must now
predict the verification half of the target data (US GDP, 1980-2012). With a
successful verification, GDP-CO2 causality will be demonstrated to
the professional rigor of a consensus proxy paleo-temperature reconstruction.
One can then just as confidently elaborate the relationship off into past time.
Figure 4 inset b shows that the verification was successful: predicted US GDP correlated with observed US GDP at the 0.99 level. When carried out in reverse, calibrating on 1980-2012 and verifying on 1929-1979, equally good results were obtained (calibration R = 0.99, verification R = 0.98, P < 0.001). This thus confers consensus paleo-thermometric stature on the correlation-equals-causation relationship between atmospheric CO2 and GDP.
Figure 4: 1929-2012 trend in: (solid line), United States Gross Domestic Product, and; (dotted line), atmospheric CO2; correlation R = 0.995. Inset a: (o), calibration 1929-1979, CO2 vs. US GDP (see text); (solid line), linear least squares fit. Inset b: (o), verification 1980-2012, observed US GDP vs. predicted US GDP; correlation R = 0.99, P < 0.0001; (solid line), linear least squares fit (R² = 0.95).
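The calibration/verification protocol applied above reduces to a split-sample regression, sketched below with synthetic stand-ins for the CO2 and GDP series of Figure 4; the coefficients and correlations it prints are therefore illustrative, not the published values.

import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1929, 2013)
co2 = 300.0 + 0.008 * (years - 1929) ** 2                          # synthetic proxy (ppmv), smooth rise
gdp = 0.17 * co2 - 51.0 + 0.3 * rng.standard_normal(years.size)    # synthetic target with noise

cal = years <= 1979                        # calibration range, 1929-1979
ver = ~cal                                 # verification range, 1980-2012

slope, intercept = np.polyfit(co2[cal], gdp[cal], 1)               # the calibration line
predicted = slope * co2[ver] + intercept                           # "predict" the verification years

r_cal = np.corrcoef(co2[cal], gdp[cal])[0, 1]
r_ver = np.corrcoef(predicted, gdp[ver])[0, 1]
print(f"calibration R = {r_cal:.3f}, verification R = {r_ver:.3f}")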
This study is now in a strong
position to analogize from the widely accepted logic of consensus proxy
paleo-thermometry that, ‘recent proxies
correlate with recent temperatures therefore paleo-proxies measure ancient
temperatures.’
In the bright light of this
science, Figure 4 implies just as strongly that, recent CO2 correlates with recent GDP, therefore paleo-CO2
measures ancient GDP. When the correlation equation is informed with
recovered ancient CO2 anomalies, a paleo-GDP covering past times is
reconstructed. Following Mark Twain,
[93] it is now possible to extrapolate GDP off into nether historical regions
where no measurable GDP can possibly exist.
Announcing the new field of
consensus paleo-economics: the intensity of economic activity of past
continental-scale societies can now be statistically reconstructed during times
when societal GDP went unrecorded. So, for example, the paleo-CO2 from
the ice cores of Alpine glaciers can be used to illuminate and track the
paleo-economic activity of most of ancient Europe. Historians of the Roman
Empire should note their breakthrough opportunities. [94] The snows of
Kilimanjaro record equally well the economic level of the ancient and
mysterious (until now) Kingdom of Meroe, and those of Mt. Ararat may as well
reveal the economic climate experienced by Noah. Such is consensus proxy
paleo-thermometry.
Clearly, the CO2-GDP
paleo-extrapolation is spurious. The global carbon cycle is not known to
anywhere near the required resolution, and most importantly the physical theory
of terrestrial CO2 is incomplete. [95-97] These are the same
failings that disqualify consensus proxy paleo-thermometry as science.
Nevertheless, the methodological rigor of consensus proxy paleo-thermometry is
fully in view. This light-hearted parody thus conveys a serious point:
statistics is no substitute for physics. Statistical validity does not preclude
causal vacuity. Correlation alone never equals causation. However, the
entire purportedly scientific case
for consensus paleo-thermometry rests upon correlation = causation. This
diagnosis follows from the purely statistical methodology.
Even further, it is physically
unwarrantable to assume a dominant and constant temperature-reflective response
operates across deep time. In the case of tree-ring series, the purely
qualitative judgment of temperature sensitivity is quantitatively fatal, and
the further assumption of biological constancy is already refuted in the
professional literature. [64, 98, 99] Absent any reliable physical theory at
any stage of analysis, consensus proxy paleo-thermometry has no physical
meaning.
This criticism is not vitiated by
the use of principal component analysis (PCA) to extract numerically orthogonal
series from proxies that have been qualitatively judged as temperature limited.
[69, 100, 101] It is not controversial that the numerically orthogonal
constructs of PCA have no distinct physical meaning. [102, 103] They are never
known a priori to represent any
physical magnitude. As far back as 1901, Spearman noted that, “an estimate of the correlation between two
things is generally of little scientific value if it does not depend
unequivocally on the nature of the things...” [104] Qualitative judgments
filtered through numerical constructs are no route to physical orthogonality,
and mere correlation with temperature does not establish a unique physical
meaning.
Even the statistical validity of consensus proxy
paleo-temperatures has been questioned, resulting in the following observation:
“Natural climate variability is not well understood and is probably
quite large. It is not clear that the proxies currently used to predict
temperature are even predictive of it at the scale of several decades let alone
over many centuries.” [105, 106]
Thus, the temperature “signal” in a physical
proxy is invisible to statistical analysis. In the sense of climate physics,
the “signal” is thus far unidentifiable. Consensus proxy-reconstructed global
paleo-temperatures are often scaled in tenths of Celsius, e.g., refs.
[107-110]. Both attribution and precision are utterly unjustifiable. As
presently practiced, consensus proxy paleo-thermometry is pseudo-science. [111]
2.3 The global averaged surface air temperature (GASAT) record
The GASAT record
is produced using the monthly temperature records from millions of individual
land and sea surface temperatures (SSTs) distributed around the world. [112,
113] Although a ‘globally averaged temperature’ is physically meaningless,
[114] it is nevertheless the metric widely accepted as proving that global
climate has warmed since 1880. It is also widely agreed that the GASAT has
increased by about 0.8±0.2 Celsius. [49, 115] In light of this, little doubt is
expressed about the rate and magnitude of atmospheric warming, or that they are
statistically significant. Any continued rise in the GASAT is itself taken as a
proof that continued emission of GHGs is thermally dangerous.
However, a close examination of
the global record reveals that the temperatures themselves have enjoyed a
curious and unspoken canonization. That is, the reported monthly magnitudes are
always taken at face value. The only argument bandied about is whether they
pristinely represent climate, or not. It’s as though the measured values
themselves, whether correct or incorrect, are nevertheless known with perfect
accuracy.
Nevertheless, systematic
instrumental error is always present because instruments are inevitably
imperfect. [116] Systematic error is non-random, and a component of the
temperature measurement itself. Solar heating, snow albedo, and variable winds
inject instrumental errors into the modern land-based record because they
impact the stability and response of temperature sensors. The consequent
systematic error has been measured using ideal-condition calibration
experiments. [117-120] The error produced by well-maintained and well-sited
instruments is the minimal error to be expected under typical field conditions.
Likewise, the sea-surface temperature (SST) measured by ships and buoys is also
subject to large-scale systematic errors. [121-124] Despite this ubiquity,
systematic sensor measurement error is completely neglected in the published
record of global averaged surface air temperatures. [125]
Figure 5: Systematic temperature measurement error found during calibration tests of: a. a platinum resistance thermometer inside a Cotton Region Shelter (Stevenson Screen), σ = ±0.53 C, [117], and; b. a US military ship-board engine-room intake thermometer, σ = ±1.1 C, [122]. The dashed vertical line marks zero error.
Figure 5 shows
examples of error profiles in land-surface (5a) and SST (5b) temperature
measurements. Figure 5b typifies the error that infects the ship engine-intake
data sets, [126] which contribute by far the greatest part of the 20th century
SST record.
The minimal uncertainty in an
individual land-surface temperature measurement coming from a standard
temperature sensor, even while operating under ideal field conditions, has been
evaluated as ±0.46 C. [125] Combining this ±0.46 C with the estimated ±0.2 C uncertainty due to site inhomogeneities, [115, 127] the root-mean-square (r.m.s.) minimum uncertainty in the averaged global land-surface air temperature is 1σ = ±√(0.46² + 0.2²) C = ±0.50 C.
The SST measurement error profile
shown in Figure 5b was derived from a study employing twelve US military transport
ships engaged off the US central Pacific coast, that included 6826 pairs of
observations. The obviously skewed distribution of error in Figure 5b was
called, “a typical distribution of the differences” between the measured and
the true SST. The full data set over the entire fleet showed a mean bias error
of 0.7 C, and 1σmean
= ±0.9 C systematic measurement error.
Land and SST systematic measurement errors have never been
factored into the uncertainty reported for the GASAT record. The reason for
this neglect is that measurement error is invariably assumed to be random,
[115, 128-130] and that the Central Limit Theorem (CLT) applies carte blanche.
[126] In brief, the CLT says that the distribution of the mean X̄_N = (1/N)·Σi=1..N xi of points taken from an overall random process will approach normality in the large-N limit, no matter the shapes of the data-point distributions of the individual subsidiary processes, xi. [131] When an overall error process is random, the error variance diminishes as σ_ε²/N, where N is the number of measurements entering an average. When N is very large, as in the compilation of an annual global surface air temperature, the σ_ε² variance of sensor measurement error is reckoned to be negligible.
Variance reduction by appeal to the CLT is justified when
the overall distribution of error is known to be random. However, there is no a priori reason to expect that
systematic errors should be normally distributed at any N. [37, 132, 133] Further, the assumed global relevance of the CLT
to the systematic measurement error of temperature sensors has never been
empirically established. A recent comparison of ship SST measurements with
equivalent SSTs measured using the Advanced Along-Track Scanning Radiometer aboard the European Envisat satellite produced skewed non-random
difference profiles. [126] Thus, neither the error profiles discussed here nor
others in the published literature support the assumption of random measurement
error. Invocation of the CLT to dismiss temperature sensor measurement error is
therefore unjustified on any grounds.
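The point at issue can be made numerically. In the sketch below, assuming an illustrative sensor with the 0.7 C mean bias reported above for the ship-intake fleet and a 0.5 C random scatter, averaging ever more measurements shrinks the random spread as 1/√N but leaves the shared systematic error untouched.

import numpy as np

rng = np.random.default_rng(11)
n_trials, true_T = 2000, 15.0
bias, sigma_rand = 0.7, 0.5              # systematic bias and random scatter, deg C (illustrative)

for N in (1, 25, 2500):                  # number of measurements entering each average
    measurements = true_T + bias + sigma_rand * rng.standard_normal((n_trials, N))
    means = measurements.mean(axis=1)
    print(f"N = {N:5d}:  spread of the mean = {means.std():.3f} C,  "
          f"mean error of the mean = {means.mean() - true_T:+.3f} C")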
For the present discussion let the minimal 1σ = ±0.5 C land surface error
also represent the minimum of uncertainty due to the systematic error inherent
within the SST record. The typicality of the SST profile in Figure 5b ensures
that the average 20th century SST systematic measurement error
almost certainly exceeds ±0.5 C. More detailed analyses of these matters will
be reported elsewhere.
Figure 6: The 20th century GASAT record, HadCRUT3-gl. [112] The 1σ uncertainty bars extend across ±0.5 C. Thus, 2σ = ±1 C = 1.25 × the entire 160-year increase.
A lower limit of uncertainty due to systematic measurement error in the land plus SST GASAT record is then ±1σ_global = ±√[0.3 × (0.5 C)² + 0.7 × (0.5 C)²] = ±0.5 C, where 0.3 and 0.7 weight the land and ocean fractions of the surface. Figure 6 shows the effect
of this uncertainty. The 20th century GASAT record is indistinguishable
from zero at the 95% confidence interval. Thus, it is not knowable whether
either the magnitude or the rate of air temperature warming since 1850 has been
in any way unusual.
3. CONCLUSION
With the recovery of ignored systematic
error in the GASAT record, it is found that scientific negligence has plagued
all of consensus climatology. For 25 years the field has misrepresented its
state of knowledge. Neither the scenarios produced using climate models, nor
the consensus paleo-temperatures purported from proxies, nor the GASAT record
can escape this judgment.
The following
conclusions are entrained by the foregoing:
1. The poor resolution of present state-of-the-art CMIP5 GCMs means the response of the terrestrial climate to increased GHGs is far below any level of detection.
2. The poor resolution of CMIP5 GCMs means all past and present projections of terrestrial air temperature can have revealed nothing of future terrestrial air temperature.
3. The lack of any scientific content in consensus proxy paleo-temperature reconstructions means nothing has been revealed of terrestrial paleo-temperatures.
4. The neglected systematic sensor measurement error in the GASAT record means that neither the rate nor the magnitude of the change in surface air temperatures is knowable.
Therefore, 5: Detection and attribution of an anthropogenic cause to climate change cannot have been, nor presently can be, evidenced in climate observables.
4. ACKNOWLEDGEMENTS
The author is grateful to Dr. Willie Soon,
to a climate physicist who prefers anonymity, and most especially to Dr.
Antonis Christofides, National Technical University of Athens for critically
appraising a prior version of this manuscript. The E&E reviewers are
thanked for their constructive criticisms. Prof. Demetris Koutsoyiannis,
National Technical University of Athens, is thanked for generously supplying
the data used for Figure 3.
For the entire text, graphs and references: http://multi-science.atypon.com/doi/abs/10.1260/0958-305X.26.3.391