Sunday, August 25, 2019

More reason to question the extent of anthropogenic climate change

Here is a link to a paper by Kauppinen and Malmi, "No Experimental Evidence For The Significant Anthropogenic Climate Change".

The failure of Climate Alarmists to contemplate the impact of the sun on climate is a fatal flaw.  This conclusion is shared by some of the World's top scientists working on climate change, e.g., Nir Shaviv and Henrik Svensmark.

Here are some excerpts from the paper.
-----------------------------------------
Abstract.

In this paper we will prove that the GCM models used in the IPCC AR5 report fail to capture the influence of low cloud cover changes on the global temperature. As a result, those models yield a very small natural temperature change, leaving a very large share of the observed temperature change to be attributed to greenhouse gases. This is why the IPCC has to use a very large sensitivity to compensate for a too-small natural component; further, the strong negative feedback due to clouds has to be left out in order to magnify the sensitivity. In addition, this paper proves that changes in the low cloud cover fraction practically control the global temperature.

The climate sensitivity has an extremely large uncertainty in the scientific literature. The smallest estimated values are very close to zero, while the highest reach 9 degrees Celsius for a doubling of CO2. The majority of papers use theoretical general circulation models (GCMs) for the estimation. These models give very large sensitivities with a very wide uncertainty range, typically between 2 and 5 degrees. The IPCC uses these papers to estimate the global temperature anomalies and the climate sensitivity. However, there are many papers in which sensitivities lower than one degree are estimated without using GCMs. The basic problem remains the missing experimental evidence for the climate sensitivity. One of the authors (JK) worked as an expert reviewer of the IPCC AR5 report. One of his comments concerned the missing experimental evidence for the very large sensitivity presented in the report [1]. In response to the comment, the IPCC claimed that observational evidence exists, for example in the Technical Summary of the report. In this paper we study that claim carefully.

2. Low cloud cover controls practically the global temperature

The basic task is to divide the observed global temperature anomaly into two parts: the natural component and the part due to greenhouse gases. In order to study the response we have to reproduce Figure TS.12 from the Technical Summary of the IPCC AR5 report [1]; this figure is our Figure 1. Here we highlight the subfigure “Land and ocean surface” in Figure 1. Only the black curve in that figure is an observed temperature anomaly; the red and blue envelopes are computed using climate models. We do not consider computational results to be experimental evidence. The results obtained by climate models are especially questionable because they conflict with each other.

In Figure 2 we see the observed global temperature anomaly (red) and global low cloud cover changes (blue). These experimental observations indicate that a 1 % increase of the low cloud cover fraction decreases the temperature by 0.11°C. This number is in very good agreement with the theory given in the papers [3, 2, 4]. Using this result we can construct the natural temperature anomaly by multiplying the changes of the low cloud cover by −0.11°C/%. This natural contribution (blue) is shown in Figure 3 superimposed on the observed temperature anomaly (red). As we can see, there is no room for the contribution of greenhouse gases, i.e. anthropogenic forcing, within this experimental accuracy. Even though the monthly temperature anomaly is very noisy, it is easy to notice a couple of decreasing periods within the increasing trend of the temperature. This behavior cannot be explained by the monotonically increasing concentration of CO2, and it seems to be far beyond the accuracy of the climate models.

The red curve in Figures 2 and 3 corresponds to the black curve, between the years 1983 and 2008, in the above-mentioned subfigure “Land and ocean surface”. If the clouds and CO2 were taken into account correctly in the climate models, both the blue and red envelopes should overlap the observed black curve. As we see, the trend of the blue envelope is, if anything, decreasing. We suggest this is due to a wrong or missing treatment of the low cloud cover contribution. The AR5 report even recognizes that the low clouds give the largest uncertainty in the computations. In spite of this, the IPCC still assumes that the difference between the blue and red envelopes in Figure 1 is the contribution of greenhouse gases. Unfortunately, the time interval (1983–2008) in Figure 2 is limited to 25 years because of the lack of low cloud cover data. During this period the CO2 concentration increased from 343 ppm to 386 ppm, and both Figure 1 (IPCC) and Figure 2 show an observed temperature increase of about 0.4°C. The actual global temperature change, when the concentration of CO2 rises from C0 to C, is

∆T = ∆T2CO2 · ln(C/C0)/ln(2) − 11°C·∆c,          (1)

where ∆T2CO2 is the global temperature change when the CO2 concentration is doubled, and ∆c is the change of the low cloud cover fraction. The first and second terms are the contributions of CO2 [5] and of the low clouds, respectively. Using the sensitivity ∆T2CO2 = 0.24°C derived in the papers [3, 2, 4], the contribution of greenhouse gases to the temperature is only about 0.04°C according to the first term of the above equation. This is why we do not see this small increase in Figure 3, where the temperature anomaly is quite noisy at one-month time resolution. It is clearly seen in Figure 2 that the red and blue anomalies are nearly mirror images. This means that the first term is much smaller than the absolute value of the second term (11°C·∆c) in equation (1).

It turns out that the changes in the relative humidity and in the low cloud cover depend on each other [4]. So, instead of the low cloud cover, we can use the changes of the relative humidity to derive the natural temperature anomaly. According to the observations, a 1 % increase of the relative humidity decreases the temperature by 0.15°C, and consequently the last term in the above equation can be approximated by −15°C·∆φ, where ∆φ is the change of the relative humidity at the altitude of the low clouds.
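
To make the arithmetic of equation (1) concrete, here is a minimal Python sketch that plugs in the numbers quoted above: the sensitivity 0.24 °C, CO2 rising from 343 ppm to 386 ppm, and the cloud and humidity coefficients (−0.11 °C and −0.15 °C per percentage point). It is only an illustration of the formula as stated in the excerpt, not the authors' code, and the example cloud-cover change is a hypothetical placeholder.

```python
import math

def delta_T(C, C0, dT_2xCO2=0.24, d_cloud=0.0, d_humidity=None):
    """Equation (1) from the excerpt: a CO2 term plus a low-cloud (or
    relative-humidity) term. d_cloud and d_humidity are changes expressed
    in percentage points (e.g. 0.5 means +0.5 %)."""
    co2_term = dT_2xCO2 * math.log(C / C0) / math.log(2)
    if d_humidity is not None:
        natural_term = -0.15 * d_humidity   # -15 degC per unit fraction = -0.15 degC per %
    else:
        natural_term = -0.11 * d_cloud      # -11 degC per unit fraction = -0.11 degC per %
    return co2_term + natural_term

# CO2-only contribution for 1983-2008 (343 ppm -> 386 ppm): about 0.04 degC
print(round(delta_T(386, 343), 3))

# Hypothetical example: a 1 % decrease in low cloud cover adds about +0.11 degC
print(round(delta_T(386, 343, d_cloud=-1.0), 3))
```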

Figure 4 shows the sum of the temperature changes due to the natural and CO2 contributions, compared with the observed temperature anomaly. The natural component has been calculated using the changes of the relative humidity. Now we see that the natural forcing alone does not fully explain the observed temperature anomaly, so we have to add the contribution of CO2 (green line), because the time interval is now 40 years (1970–2010), over which the concentration of CO2 increased from 326 ppm to 389 ppm. The green line has been calculated using the sensitivity 0.24°C, which seems to be correct. In Figure 4 we see clearly how well a change in the relative humidity can model the strong temperature minimum around the year 1975. This cannot be explained by the CO2 concentration.

The IPCC climate sensitivity is about one order of magnitude too high, because a strong negative feedback of the clouds is missing in the climate models. If we take into account the fact that only a small part of the increased CO2 concentration is anthropogenic, we have to recognize that anthropogenic climate change does not exist in practice. The major part of the extra CO2 is emitted from the oceans [6], according to Henry’s law. The low clouds practically control the global average temperature. During the last hundred years the temperature has increased by about 0.1°C because of CO2, and the human contribution was about 0.01°C.

3. Conclusion
We have proven that the GCM models used in the IPCC AR5 report cannot correctly compute the natural component included in the observed global temperature, because the models fail to capture the influence of the low cloud cover fraction on the global temperature. A too-small natural component results in a too-large share being attributed to greenhouse gases like carbon dioxide. That is why the IPCC reports a climate sensitivity more than one order of magnitude larger than our sensitivity of 0.24°C. Because the anthropogenic portion of the increased CO2 is less than 10 %, anthropogenic climate change is practically nonexistent. The low clouds mainly control the global temperature.

Friday, August 23, 2019

Climate Change: What's the Worst Case?

Here is a blog entry from Judith Curry's blog.  JC is a recognized expert in the field.

JC is on target.

The message: The Alarmists are "Alarmists".  The climate change stories you usually hear about are not credible.

My prediction is that within ten years, the current "scientific consensus" you hear about will be proven to be a combination of bad physics and bad statistics.

JC's comments at the end of the  blog entry about the "peer" reactions she has experienced are telling.  Alarmist bias among climate change "scientists" is rampant.
-------------------------------------------
My new manuscript is now available.

A link to my new paper ‘Climate Change: What’s the Worst Case?’ is provided here [worst case paper final (1)]

A few words on the intended audience and motivation for writing this:

First and foremost, this is written for the clients of Climate Forecast Applications Network who are interested in scenarios of future climate change [link]

Second, this paper is written as a contribution to my series of academic papers on the topic of uncertainty in climate science:
Climate Science and the Uncertainty Monster
Reasoning About Climate Uncertainty
Nullifying the Climate Null Hypothesis
Climate Change: No Consensus on Consensus
Climate Uncertainty and Risk

Third, the paper is written to inform the public debate on climate change and policy makers. I am ever hopeful that some sanity can be interjected into all this.

This paper is particularly relevant in light of the preceding post on consensus, and Gavin’s desire for a better way to treat the extreme tails.

Overview of contents

I’m reproducing the Abstract, Introduction and Conclusions in this blog post; I encourage you to read the entire paper.

Abstract. The objective of this paper is to provide a broader framing for how we assess and reason about possible worst-case outcomes for 21st century climate change. A possibilistic approach is proposed as a framework for summarizing our knowledge about projections of 21st century climate outcomes. Different methods for generating and justifying scenarios of future outcomes are described. Consideration of atmospheric emissions/concentration scenarios, equilibrium climate sensitivity, and sea-level rise projections illustrates different types of constraints and uncertainties in assessing worst-case outcomes. A rationale is provided for distinguishing between the conceivable worst case, the possible worst case and the plausible worst case, each of which plays different roles in scientific research versus risk management.

1. Introduction

The concern over climate change is not so much about the warming that has occurred over the past century. Rather, the concern is about projections of 21st century climate change based on climate model simulations of human-caused global warming, particularly those driven by the RCP8.5 greenhouse gas concentration scenario.

The Intergovernmental Panel on Climate Change (IPCC) Assessment Reports have focused on assessing a likely range (>66% probability) for projections in response to different emissions concentration pathways. Oppenheimer et al. (2007) contends that the emphasis on consensus in IPCC reports has been on expected outcomes, which then become anchored via numerical estimates in the minds of policy makers. Thus, the tails of the distribution of climate impacts, where experts may disagree on likelihood or where understanding is limited, are often understated in the assessment process, and then exaggerated in public discourse on climate change.

In an influential paper, Weitzman (2009) argued that climate policy should be directed at reducing the risks of worst-case outcomes, not at balancing the most likely values of costs and benefits. Ackerman (2017) has argued that policy should be based on the credible worst-case outcome. Worst-case scenarios of 21st century sea level rise are becoming anchored as outcomes that are driving local adaptation plans (e.g. Katsman et al. 2011). Projections of future extreme weather/climate events driven by the worst-case RCP8.5 scenario are highly influential in the public discourse on climate change (e.g. Wallace-Wells, 2019).

The risk management literature has discussed the need for a broad range of scenarios of future climate outcomes (e.g., Trutnevyte et al. 2016). Reporting the full range of plausible and possible outcomes, even if unlikely, controversial or poorly understood, is essential for scientific assessments for policy making. The challenge is to articulate an appropriately broad range of future scenarios, including worst-case scenarios, while rejecting impossible scenarios.

How to rationally make judgments about the plausibility of extreme scenarios and outcomes remains a topic that has received too little attention. Are all of the ‘worst-case’ climate outcomes described in assessment reports, journal publications and the media, actually plausible? Are some of these outcomes impossible? On the other hand, are there unexplored worst-case scenarios that we have missed, that could turn out to be real outcomes? Are there too many unknowns for us to have confidence that we have credibly identified the worst case? What threshold of plausibility or credibility should be used when assessing these extreme scenarios for policy making and risk management?

This paper explores these questions by integrating climate science with perspectives from the philosophy of science and risk management. The objective is to provide a broader framing of the 21st century climate change problem in context of how we assess and reason about worst-case climate outcomes. A possibilistic framework is articulated for organizing our knowledge about 21st century projections, including how we extend partial positions in identifying plausible worst-case scenarios of 21st century climate change. Consideration of atmospheric emissions/concentration scenarios, equilibrium climate sensitivity, and sea-level rise illustrates different types of constraints and uncertainties in assessing worst-case outcomes. This approach provides a rationale for distinguishing between the conceivable worst case, the possible worst case and the plausible worst case, each of which plays different roles in scientific research versus risk management.

2. Possibilistic framework

3. Scenarios of future outcomes

3.1 Scenario justification

3.2 Worst-case classification

3.3 Alternative scenarios

4. Is RCP8.5 plausible?

5. Climate sensitivity

6. Sea level rise

6.1 Worst-case scenarios

6.2 Possibility distribution

6.3 Alternative scenarios

7. Conclusions

The purpose of generating scenarios of future outcomes is that we should not be too surprised when the future eventually arrives. Projections of 21st century climate change and sea level rise are associated with deep uncertainty and rapidly advancing knowledge frontiers. The objective of this paper has been to articulate a strategy for portraying scientific understanding of the full range of possible scenarios of 21st century climate change and sea level rise in context of a rapidly expanding knowledge base, with a focus on worst-case scenarios.

A classification of future scenarios is presented, based on relative immunity to rejection relative to our current background knowledge and assessments of the knowledge frontier. The logic of partial positions allows for clarifying what we actually know with confidence, versus what is more speculative and uncertain or impossible. To avoid the Alice in Wonderland syndrome of scenarios that include too many implausible assumptions, published worst-case scenarios are assessed using the plausibility criterion of including only one borderline implausible assumption (where experts disagree on plausibility).

The possibilistic framework presented here provides a more nuanced way for articulating our foreknowledge than either by attempting, on the one hand, to construct probabilities of future outcomes, or on the other hand simply by labeling some statements about the future as possible. The possibilistic classification also avoids ignoring scenarios or classifying them as extremely unlikely if they are driven by processes that are poorly understood or not easily quantified.

The concepts of the possibility distribution, worst-case scenarios and partial positions are relevant to decision making under deep uncertainty (e.g. Walker et al. 2016), where precautionary and robust approaches are appropriate. Consideration of worst-case scenarios is an essential feature of precaution. A robust policy is defined as yielding outcomes that are deemed to be satisfactory across a wide range of plausible future outcomes. Robust policy making interfaces well with possibilistic approaches that generate a range of possible futures (e.g. Lempert et al. 2012). Partial positions are of relevance to flexible defense measures in the face of deep uncertainty in future projections (e.g. Oppenheimer and Alley, 2017).

Returning to Ackerman’s (2017) argument that policy should be based on the credible worst-case outcome, the issue then becomes how to judge what is ‘credible.’ It has been argued here that a useful criterion for a plausible (credible) worst-case climate outcome is that at most one borderline implausible assumption – defined as an assumption where experts disagree as to whether or not it is plausible – is included in developing the scenario. Using this criterion, the following summarizes my assessment of the plausible (credible) worst-case climate outcomes, based upon our current background knowledge: 
  • The largest rates of warming that are often cited in impact assessment analyses (e.g. 4.5 or 5 °C) rely on climate models being driven by a borderline implausible concentration/emission scenario (RCP8.5). 
  • The IPCC AR5 (2013) likely range of warming at the end of the 21st century has a top-range value of 3.1 °C if the RCP8.5-derived values are eliminated. Even this more moderate warming of 3.1 °C relies on climate models with values of the equilibrium climate sensitivity that are larger than can be defended based on analysis of historical climate change. Further, these rates of warming explicitly assume that the climate of the 21st century will be driven solely by anthropogenic changes to the atmospheric concentration, neglecting 21st century variations in the sun and solar indirect effects, volcanic eruptions, and multi-decadal to millennial scale ocean oscillations. Natural processes have the potential to counteract or amplify the impacts of any manmade warming. 
  • Estimates of 21st century sea level rise exceeding 1 m require at least one borderline implausible or very weakly justified assumption. Allowing for one borderline implausible assumption in the sea level rise projection produces high-end estimates of sea level rise of 1.1 to 1.6 m. Higher estimates are produced using multiple borderline implausible or very weakly justified assumptions. The most extreme of the published worst-case scenarios require a cascade of events, each of which is extremely unlikely to borderline impossible based on our current knowledge base. However, given the substantial uncertainties and unknowns surrounding ice sheet dynamics, these scenarios should not be rejected as impossible.
The approach presented here is very different from the practice of the IPCC assessments and their focus on determining a likely range driven by human-caused warming. In climate science there has been a tension between the drive towards consensus to support policy making versus exploratory speculation and research that pushes forward the knowledge frontier (e.g. Curry and Webster, 2013). The possibility analysis presented here integrates both approaches by providing a useful framework for integrating expert speculation and model simulations with more firmly established theory and observations. This approach demonstrates a way of stratifying the current knowledge base that is consistent with deep uncertainty, disagreement among experts and a rapidly evolving knowledge base. Consideration of a more extensive range of future scenarios of climate outcomes can stimulate climate research as well as provide a better foundation for robust decision making under conditions of deep uncertainty.
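
As a toy illustration of the plausibility criterion used above (a worst-case scenario counts as plausible, or credible, only if it rests on at most one borderline implausible assumption, i.e. one on which experts disagree), here is a short Python sketch; the scenario names and assumption labels are hypothetical, not taken from the paper.

```python
# Toy illustration of the criterion quoted above: a worst-case scenario is
# treated as plausible (credible) only if at most one of its assumptions is
# "borderline implausible", i.e. one on which experts disagree.
# Scenario names and labels below are hypothetical, not from the paper.

def plausible_worst_case(assumptions):
    """Return True if the scenario rests on at most one contested assumption."""
    borderline = sum(1 for a in assumptions if a == "borderline")
    return borderline <= 1

scenarios = {
    "sea level rise of 1.1-1.6 m":   ["plausible", "borderline"],
    "sea level rise well above 2 m": ["borderline", "borderline", "borderline"],
}

for name, assumptions in scenarios.items():
    verdict = "plausible worst case" if plausible_worst_case(assumptions) else "not plausible (possible at best)"
    print(f"{name}: {verdict}")
```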

Publication status

Since I resigned my faculty position, there has been little motivation for me to publish in peer reviewed journals. And I don’t miss the little ‘games’ of the peer review process, not to mention the hostility and nastiness of editors and reviewers who have an agenda.

However, one of my clients wants me to publish more journal articles. This client particularly encouraged me to publish something related to my Special Report on Sea Level and Climate Change. I submitted a shorter version of this paper, in a more academic style, for publication in a climate journal. It was rejected. Here is my ‘favorite’ comment from one of the reviewers:

“Overall, there is the danger that the paper is used by unscrupulous people to create confusion or to discredit climate or sea-level science. Hence, I suggest that the author reconsiders the essence of its contribution to the scientific debate on climate and sea-level science.”

You get the picture. I can certainly get some version of this published somewhere, but this review reminded me why I shouldn’t bother with official ‘peer review.’ Publishing my research on Climate Etc. and as Reports ‘published’ by my company allows me to write my papers in a longer format, including as many references as I want. I can also ‘editorialize’ as I deem appropriate. In summary, I can write what I want, without worrying about the norms and agendas of the ‘establishment.’ Most of my readers want to read MY judgments, rather than something I think I can get past ‘peer reviewers.’

This particular paper is titled as a ‘Working Paper’, in the tradition often used by economists and legal scholars in issuing their reports. It is publicly available for discussion, and I can revise it when appropriate. I hope it will stimulate people to actually think about these issues and discuss them. I look forward to a lively review of this paper.

And finally, it is difficult to see how this paper could be categorized as ‘contrarian.’ It is not even ‘lukewarm.’ It discusses worst-case scenarios, and how to think about their plausibility. In fact, in one of the threads at WUWT discussing one of my previous ‘worst-case’ posts, commenters thought that this was way too ‘alarmist’ to be posted at WUWT.

Bottom line: we need to think harder and differently about climate change. This paper helps provide a framework for stepping beyond the little box that we are currently caught in.

Thursday, August 22, 2019

A further decline in our freedom

Here is Judge Andrew Napolitano "More Spying and Lying"

JAN is on target.

We are no longer a free country and it is getting less free.
----------------------------------------------------
While most of us have been thinking about the end of summer and while the political class frets over the Democratic presidential debates and the aborted visit of two members of Congress to Israel, the Trump administration has quietly moved to extend and make permanent the government's authority to spy on all persons in America.

The president, never at a loss for words, must have been asked by the intelligence community he once reviled not to address these matters in public.

These matters include the very means and the very secret court about which he complained loud and long during the Mueller investigation. Now, he wants to be able to unleash permanently on all of us the evils he claims were visited upon him by the Obama-era FBI and by his own FBI. What's going on?

Here is the backstory.

After the lawlessness of Watergate had been exposed — a president spying on his political adversaries without warrants in the name of national security — Congress enacted in 1978 the Foreign Intelligence Surveillance Act. It prescribed a means for surveillance other than that which the Constitution requires.

The Fourth Amendment to the Constitution — written in the aftermath of British soldiers and agents using general warrants obtained from a secret court in London to spy on whomever in the colonies they wished and to seize whatever they found — was ratified as part of the Bill of Rights to limit the government's ability to intrude upon the privacy of all persons, thereby prohibiting those procedures used by the British.

Thus, we have the constitutional requirements that no searches and seizures can occur without a warrant issued by a judge based on a showing, under oath, of probable cause of crime. The courts have uniformly characterized electronic surveillance as a search.

I am not addressing eyesight surveillance on a public street. I am addressing electronic surveillance wherever one is when one sends or receives digital communications. FISA is an unconstitutional congressional effort to lower the standards required by the Fourth Amendment from probable cause of crime to probable cause of foreign agency.

Can Congress do that? Can it change a provision of the Constitution? Of course not. If it could, we wouldn't have a Constitution.

It gets worse.

The court established by FISA — that's the same court that President Donald Trump asserts authorized spying on him in 2015 and 2016 — has morphed the requirement of probable cause of being a foreign agent to probable cause of communicating with a foreign person as the standard for authorizing surveillance.

What was initially aimed at foreign agents physically present in the United States has secretly become a means to spy on innocent Americans. In Trump's case, the FISA court used the foreign and irrelevant communications of two part-time campaign workers to justify surveillance on the campaign.

Add to all this the 2002 secret order of President George W. Bush directing the National Security Agency to spy on all in America all the time without warrants — this is what Edward Snowden exposed in 2013 — and one can see what has happened.

What happened?

What happened was the creation of a surveillance state in America that came about by secret court rulings and a once-secret presidential order. As a result of this, part of the government goes to the secret FISA court and obtains search warrants on flimsy and unconstitutional grounds and part of the government bypasses FISA altogether and spies on everyone in America and denies it and lies about it.

Bill Binney, the genius mathematician who once worked for the NSA and now is its harshest critic, has stated many times that, as unconstitutional as FISA is, it is a pretext to NSA spying on all persons in America all the time.

How pervasive is this unlawful spying? According to Binney, the NSA's 60,000 domestic spies capture the content and the keystrokes of every communication transmitted on fiber optic cables into or out of or wholly within the United States. And they do so 24/7 — without warrants.

Now, back to that quiet late summer proposal by the Trump administration. Some of the statutes that govern who can go to the FISA court and under what circumstances they can go are about to expire. Inexplicably, the president once victimized by FISA wants to make these statutes permanent. And he wants to do so knowing that they are essentially a facade for spying. That would institutionalize the now decades-long federal assault on privacy and evasion of constitutional norms.

It would also place Trump in the same category as his two immediate predecessors, who regularly ordered government agents to violate the Fourth Amendment and then denied they had done so.

Some of my Fox colleagues joke with me that I am shoveling against the tide when it comes to defending the right to privacy. They claim that there is no more privacy. I disagree with them. As long as we still have a Constitution, it must be taken seriously and must mean what it says. And its intentionally stringent requirements for enabling the government to invade privacy remain the law of the land. The president has sworn to uphold the Constitution, not the NSA.

The late Supreme Court Justice George Sutherland once wrote that we cannot pick and choose which parts of the Constitution to follow and which to ignore. If we could, the Constitution would be meaningless.

Did he foresee our present woes when he wrote, "If the provisions of the Constitution be not upheld when they pinch as well as when they comfort, they may as well be abandoned"?

Is that where we are headed?

Saturday, August 17, 2019

Climate and cosmic rays

Here is a very interesting partly biographical article by Nir Shaviv titled "How Might Climate be Influenced by Cosmic Rays".

This influence has been almost totally ignored by the vast majority of climate scientists - no wonder the climate models quoted so often don't work well.

Nir Shaviv, IBM Einstein Fellow and Member in the School of Natural Sciences, is focusing on cosmic ray diffusion in the dynamic galaxy, the solar cosmic ray–climate link, and the appearance of extremely luminous (super-Eddington) states in stellar evolution during his stay at the Institute for Advanced Study in Princeton. Shaviv is Professor at the Racah Institute of Physics at the Hebrew University of Jerusalem.
---------------------------------------------------
In 1913, Victor Hess measured the background level of atmospheric ionization while ascending with a balloon. By doing so, he discovered that Earth is continuously bathed in ionizing radiation. These cosmic rays primarily consist of protons and heavier nuclei with energies between their rest mass and a trillion times larger. In 1934, Walter Baade and Fritz Zwicky suggested that cosmic rays originate from supernovae, the explosive death of massive stars. However, only in 2013 was it directly proved, using gamma-ray observations with the FERMI satellite, that cosmic rays are indeed accelerated by supernova remnants. Thus, the amount of ionization in the lower atmosphere is almost entirely governed by supernova explosions that took place in the solar system’s galactic neighborhood in the past twenty million years or so.

Besides being messengers from ancient explosions, cosmic rays are extremely interesting because they link together so many different phenomena. They tell us about the galactic geography, about the history of meteorites or of solar activity, they can potentially tell us about the existence of dark matter, and apparently they can even affect climate here on Earth. They can explain many of the past climate variations, which in turn can be used to study the Milky Way.

The idea that cosmic rays may affect climate through modulation of the cosmic ray ionization in the atmosphere goes back to Edward Ney in 1959. It was known that solar wind modulates the flux of cosmic rays reaching Earth—a high solar activity deflects more of the cosmic rays reaching the inner solar system, and with it reduces the atmospheric ionization. Ney raised the idea that this ionization could have some climatic effect. This would immediately link solar activity with climate variations, and explain things like the little ice age during the Maunder minimum, when sunspots were a rare occurrence on the solar surface.

In the 1990s, Henrik Svensmark from Copenhagen brought the first empirical evidence of this link in the form of a correlation between cloud cover and the cosmic ray flux variations over the solar cycle. This link was later supported with further evidence including climate correlations with cosmic ray flux variations that are independent of solar activity, as I describe below, and, more recently, with laboratory experiments showing how ions play a role in the nucleation of small aerosols and their growth to larger ones.

In 2000, I was asked by a German colleague about possible effects that supernovae could have on life on Earth. After researching a bit, I stumbled on Svensmark’s results and realized that the solar system’s galactic environment should be changing on time scales of tens of millions of years. If cosmic rays affect the terrestrial climate, we should see a clear signature of the galactic spiral arm passages in the paleoclimatic data, through which we pass every 150 million years. This is because spiral arms are the regions where most supernovae take place in our galaxy. Little did I know, it would take me on a still ongoing field trip to the Milky Way.

The main evidence linking the galactic environment and climate on Earth is the exposure ages of iron meteorites. Exposure ages of meteorites are the inferred duration between their breakup from their parent bodies and their penetration into Earth’s atmosphere. They are obtained by measuring the radioactive and stable isotopes accumulated through interaction with the cosmic rays perfusing the solar system. It turns out that if one looks at exposure ages a bit differently than previously done, by assuming that meteorites form at a statistically constant rate while the cosmic ray flux can vary, as opposed to the opposite, then the cosmic ray flux history can be reconstructed. It exhibits seven clear cycles, which coincide with the seven periods of ice-age epochs that took place over the past billion years. On longer time scales, it is possible to reconstruct the overall cosmic ray flux variations from a changed star formation rate in the Milky Way, though less reliably. The variable star formation rate can explain why ice-age epochs existed over the past billion years and between one and two billion years ago, but not in other eons.

I later joined forces with Canadian geochemist Ján Veizer who had the best geochemical reconstruction of the temperature over the past half billion years, during which multicellular life left fossils for his group to dig and measure. His original goal was to fingerprint the role of CO2 over geological time scales, but no correlation with the paleotemperature was apparent. On the other hand, his temperature reconstruction fit the cosmic ray reconstruction like a glove. When we published these results, we instantly became personae non gratae in certain communities, not because we offered a data-supported explanation to the long-term climate variations, but because we dared say that CO2 can at most have a modest effect on the global temperature.

Besides the spiral arm passages, our galactic motion should give rise to a faster cosmic ray flux modulation—in addition to the solar system’s orbit around the galaxy, with roughly a 250-million-year period, the solar system also oscillates perpendicular to the galactic plane. Since the cosmic ray density is higher at the plane, it should be colder every time the solar system crosses it, which depending on the exact amount of mass in the disk should be every 30 to 40 million years.

A decade ago, the geochemical climate record showed hints of a 32-million-year periodicity, with peak cooling taking place a few million years ago, as expected from the last plane passage. Together with Veizer and a third colleague, Andreas Prokoph, we then submitted a first version for publication. However, we actually ended up putting the paper aside for almost a decade because of two nagging inconsistencies.

First, analysis of the best database of the kinematics of nearby stars, that of the Hipparcos satellite, pointed to a low density at the galactic plane, which in turn implied a longer period for the plane crossings, around once every 40 million years. Second, it was widely accepted in the cosmic ray community that cosmic rays should be diffusing around the galactic disk in a halo that is much larger than the stellar disk itself. This would imply that the 300 light years that the solar system ventures away from the galactic plane could not explain the 1 to 2°C variations implied for the geochemical record. Without a way to reconcile these, there was not much we could do. Perhaps the 32 million years was just a random artifact.

As time progressed, however, the improved geochemical record only showed that the 32-million-year signal became more prominent. In fact, fifteen cycles could now be clearly seen in the data. But something else also happened. My colleagues and I began to systematically study cosmic ray diffusion in the Milky Way while alleviating the standard assumption that everyone had until then assumed—that the sources are distributed symmetrically around the galaxy. To our surprise, it did much more than just explain the meteoritic measurements of a variable cosmic ray flux. It provided an explanation to the so-called Pamela anomaly, a cosmic ray positron excess that was argued by many to be the telltale signature of dark matter decay. It also explained the behavior of secondary cosmic rays produced along the way. But in order for the results to be consistent with the range of observations, the cosmic ray diffusion model had to include a smaller halo, one that more closely resembles the disk. In such a halo, the vertical oscillation of the solar system should have left an imprint in the geochemical record not unlike the one detected.

Thus, armed with the smaller halo and a more prominent paleoclimate signal, we decided to clear the dust off the old paper. The first surprise came when studying the up-to-date data. It revealed that the 32-million-year signal also has a secondary frequency modulation, that is, the oscillation period is alternately shorter or longer. This modulation has a period and phase corresponding to the radial oscillations that the solar system exhibits while revolving around the galaxy. When it is closer to the galactic center, the higher density at the galactic plane forces it to oscillate faster, while when it is far from the center, the density is lower and the oscillation period is longer.
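
For readers curious how a roughly 32-million-year cycle can be extracted from an irregularly sampled record, here is a minimal, purely illustrative sketch using a Lomb-Scargle periodogram on synthetic data; the sampling times, noise level, and period are invented, and this is not the authors' analysis.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Synthetic, irregularly sampled "temperature proxy" with a 32 Myr cycle
# (purely illustrative; not the actual geochemical record).
t = np.sort(rng.uniform(0, 500, 400))          # sample times in Myr
signal = np.sin(2 * np.pi * t / 32.0)          # 32 Myr periodic component
y = signal + 0.5 * rng.normal(size=t.size)     # add noise

periods = np.linspace(10, 100, 2000)           # trial periods in Myr
freqs = 2 * np.pi / periods                    # angular frequencies for lombscargle
power = lombscargle(t, y - y.mean(), freqs)

best = periods[np.argmax(power)]
print(f"Strongest periodicity near {best:.1f} Myr")  # should land close to 32 Myr
```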

The second surprise came when studying the stellar kinematics from the astrometric data. We found that the previous analysis, which appeared to have been inconsistent, relied on the assumption that the stars are more kinematically relaxed than they are. As a consequence, there was a large unaccounted-for systematic error—without it there was no real inconsistency. It took almost a decade, but things finally fell into place.

The results have two particularly interesting implications. First, they bring yet another link between the galactic environment and the terrestrial climate. Although there is no direct evidence that cosmic rays are the actual link on the 32-million-year time scale, as far as we know, they are the only link that can explain these observations. This in turn strengthens the idea that cosmic ray variations through solar activity affect the climate. In this picture, solar activity increase is responsible for about half of the twentieth-century global warming through a reduction of the cosmic ray flux, leaving less to be explained by anthropogenic activity. Also, in this picture, climate sensitivity is on the low side (perhaps 1 to 1.5°C increase per CO2 doubling, compared with the 1.5 to 4.5°C range advocated by the IPCC), implying that the future is not as dire as often prophesied.

The second interesting implication is the actual value of the 32-million-year oscillation. The relatively short period indicates that there is more mass in the galactic plane than accounted for in stars and interstellar gas, leaving the remainder as dark matter. However, this amount of dark matter is more than would be expected if it were distributed sparsely in a puffed-up halo as is generally expected. In other words, this excess mass requires at least some of the dark matter to condense into the disk. If correct, it will close a circle that started in the 1960s when Edward Hill and Jan Oort suggested, based on kinematic evidence, that there is more matter at the plane than observed. This inconsistency and indirect evidence for dark matter was also advocated by John Bahcall, who for many years was a Faculty member here at the IAS.

It should be noted that the idea that cosmic rays affect the climate is by no means generally accepted. The link is contentious and it has attracted significant opponents over the years because of its ramifications to our understanding of recent and future climate change. For it to be finally accepted, one has to understand all the microphysics and chemistry associated with it. For this reason, we are now carrying out a lab experiment to pinpoint the mechanism responsible for linking atmospheric ions and cloud condensation nuclei. This should solidify a complete theory to explain the empirical evidence.

As for the existence of more dark matter in the galactic plane than naively expected, we will not have to wait long for it to be corroborated (or refuted). The GAIA astrometric satellite mapping the kinematics of stars to unprecedented accuracy will allow for a much better measurement of the density at the plane. The first release of data is expected to be in 2016, just around the corner.

The truth about climate change

For those of you who are interested in the truth about the state of climate change science, here is a link to a 22 minute talk by Nir Shaviv.

This talk is EXCELLENT.

Professor Nir Joseph Shaviv is an Israeli‐American physics professor. He is a professor at the Racah Institute of Physics of the Hebrew University of Jerusalem, of which he is now chairman.

Shaviv started taking courses at the Israel Institute of Technology in Haifa at age 13. He graduated with a BA in physics in 1990, finishing best in his class. During his military service (1990–93) he continued his studies and co-authored his first papers in astrophysics. In 1994 he received a Master of Science in physics, followed by a doctorate during 1994–96. During 1996–99 he was a Lee DuBridge Prize Fellow at Caltech's TAPIR (Theoretical Astrophysics) group. During 1999–2001 he held a postdoctoral position at the Canadian Institute for Theoretical Astrophysics. From 2001 to 2006 he was a senior lecturer at the Racah Institute of Physics at the Hebrew University of Jerusalem, from 2006 to 2012 an associate professor, and since 2012 a full professor. Between 2008 and 2011 he was the head of the faculty union of the Hebrew University, and he served as the chairman of the coordinating council of faculty unions between 2010 and 2014. In 2014 he became a member of the Institute for Advanced Study in Princeton, and chairman of the Racah Institute of Physics in 2015.

I have provided NS's background so that you will be able to differentiate him from the ninnies you usually hear from about climate change.

Thursday, August 15, 2019

Red Flag laws are an unconstitutional knee-jerk reaction that reduce everyone’s freedom.

Don't take my word for it - here is what Judge Andrew Napolitano thinks.
-----------------------------------------------
When tragedy strikes, as it did in two mass killings earlier this month, there is always the urge to pressure the government to do something. Governments are animated by the belief that doing something — any demonstrable overt behavior — will show that they are in control. I understand the natural fears that good folks have that an El Paso or a Dayton episode might happen again, but doing something for the sake of appearance can be dangerous to personal liberty.

When the Constitution was written, the idea of owning arms and keeping them in the home was widespread. The colonists had just defeated the armies of King George III. The colonial weapon of choice was the Kentucky long rifle, while British soldiers used their army-issued version of Brown Bessies. Each rifle had its advantages, but the Kentucky (it was actually a German design, perfected and manufactured in Pennsylvania) was able to strike a British soldier at 200 yards, a startlingly long distance at the time. The Bessies were good for only about 80 yards.

Putting aside the advantages we had in the passionate defense of freedom and homeland, to say nothing of superior leadership, it doesn't take any advanced understanding of mathematics or ballistics to appreciate why we won the Revolution.

It is a matter of historical fact that the colonists won the war largely by superior firepower.

Six years after the war was over, delegates met in Philadelphia in secret and drafted what was to become the Constitution. The document, largely written in James Madison's hand, was then submitted to Congress and to the states, which began the process of ratification.

By then, Americans had already formed two basic political parties. The Federalists wanted a muscular central government and the Anti-Federalists wanted a loose confederation of states. Yet the memory of a Parliament that behaved as if it could write any law, tax any event and impair any liberty, coupled with the fear that the new government here might drift toward tyranny, gave birth to the first 10 amendments to the Constitution — the Bill of Rights.

The debate over the Bill of Rights was not about rights; that debate had been resolved in 1776 when the Declaration of Independence declared our basic human rights to be inalienable. The Bill of Rights debates were about whether the federal government needed restraints imposed upon it in the Constitution itself.

The Federalists thought the Bill of Rights was superfluous because they argued that no American government would knowingly restrict freedom. The Anti-Federalists thought constitutional restraints were vital to the preservation of personal liberty because no government can be trusted to preserve personal liberty.

Second among the personal liberties preserved in the Bill of Rights from impairment by the government was the right to self-defense. Thomas Jefferson called that the right to self-preservation.

Fast-forward to today, and we see the widespread and decidedly un-American reaction to the tragedies of El Paso, Texas, and Dayton, Ohio. Even though both mass murders were animated by hatred and planned by madness, because both were carried out using weapons that look like those issued by the military, Democrats have called for the outright confiscation of these weapons.

Where is the constitutional authority for that? In a word: nowhere.

The government's job is to preserve personal liberty. Does it do its job when it weakens personal liberty instead? Stated differently, how does confiscating weapons from the law-abiding conceivably reduce madmen's access to weapons? When did madmen begin obeying gun laws?

These arguments against confiscation have largely resonated with Republicans. Yet — because they feel they must do something — they have fallen for the concept of limited confiscation, known by the euphemism of "red flag" laws.

The concept of a "red flag" law — which permits the confiscation of lawfully owned weapons from a person because of what the person might do — violates both the presumption of innocence and the due process requirement of proof of criminal behavior before liberty can be infringed.

The presumption of innocence puts the burden for proving a case on the government. Because the case to be proven — might the gun owner be dangerous? — if proven, will result in the loss of a fundamental liberty, the presumption of innocence also mandates that the case be proven beyond a reasonable doubt.

The Republican proposal lowers the standard of proof to a preponderance of the evidence — a "more likely than not" standard. That was done because it is impossible to prove beyond a reasonable doubt that an event might happen. This is exactly why the "might happen" standard is unconstitutional and alien to our jurisprudence.

In 2008, Justice Antonin Scalia wrote for the Supreme Court that the right to keep and bear arms in the home is an individual pre-political right. Due process demands that this level of right — we are not talking about the privilege of driving a car on a government street — can only be taken away after a jury conviction or a guilty plea to a felony.

The "might happen" standard of "red flag" laws violates this basic principle. The same Supreme Court case also reflects the Kentucky long gun lesson. The people are entitled to own and possess the same arms as the government; for the same reason as the colonists did — to fight off tyrants should they seize liberty or property.

If the government can impair Second Amendment-protected liberties on the basis of what a person might do, as opposed to what a person actually did do, to show that it is doing something in response to a public clamor, then no liberty in America is safe.

Which liberty will the government infringe upon next?

China vs. Trump

While the following is somewhat simplified and ignores some relevant consequences, it provides useful perspective.

China has made an unforced error that makes it possible for the US to gain at China’s expense. Trump can win if he plays it right.

First, Trump put tariffs on US imports from China. Downward sloping demand curves and upward sloping supply curves imply that:
  •  Prices to US consumers of Chinese products rise, but not nearly by the amount of the percentage tariff.
  •  US imports of Chinese products decline.
Suppose the US and China do nothing more. Then over time:
  • Other countries with production costs almost as low as China’s would sell to US consumers at prices only slightly above the US pre-tariff prices of Chinese products.
  • Assuming the other countries’ production rates did not increase, Chinese manufacturers would sell their products to the other countries’ consumers to make up the difference.
The result, over time, would be:
  • US imports of Chinese products would decline substantially.
  • US imports of these products from other countries would rise substantially.
  • US consumer prices of the products involved probably would decline and approach their pre-tariff levels.
  • Tariffs collected by the US would decline substantially.
  • The distribution of production of the products involved across the various countries would not change much.
  • Neither the US nor China would lose or gain much.
But China responded by devaluing its currency relative to the US dollar. Assume the devaluation percentage is the same as the US tariff percentage. Then China has given a free lunch to the US. To see this, consider the simplistic case where a US tariff increases the US consumer price by the percentage tariff and imports of Chinese products are unaffected; a numerical sketch of this case follows the lists below.

Before China devalues its currency:
  • The price of Chinese products to US consumers has risen by the tariff percentage.
  • US consumers are paying the tariff to the US government and the pre-tariff price to China.
  • The US government is collecting a substantial tariff.
Next:
  • China devalues its currency with respect to the dollar by the tariff percentage.
  • China’s pre-tariff price to US consumers drops by the tariff percentage.
  • US consumers pay the tariff to the US government and the new lower pre-tariff price to China.
  • US consumers’ net price drops to the original pre-tariff price.
  • Tariffs collected by the US remain substantial.
  • China has subsidized the US government by the amount of the tariffs collected.
  • US consumer prices for Chinese products have not changed.
  • The US is better off and China is worse off.
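
The two cases in the lists above can be checked with a few lines of arithmetic. The sketch below uses a hypothetical $100 pre-tariff price and a 25% tariff (numbers chosen only to make the comparison concrete) and keeps the post's simplifying assumption that the quantity imported does not change; note that the offset between tariff and devaluation is exact only to first order, which is why the case-2 consumer price comes out slightly below the original level.

```python
# Illustrative numbers only (hypothetical): $100 pre-tariff price, 25% tariff,
# and, as in the post, a fixed quantity of imports.
price_cn = 100.0   # China's pre-tariff price to US consumers
tariff = 0.25      # US tariff rate

# Case 1: tariff only, no devaluation.
consumer_price_1 = price_cn * (1 + tariff)   # 125.0: US consumers pay the tariff
tariff_revenue_1 = price_cn * tariff         # 25.0: collected by the US government
china_receives_1 = price_cn                  # 100.0: China still gets its full price

# Case 2: China devalues by the tariff percentage, so its dollar price drops.
price_cn_2 = price_cn * (1 - tariff)          # 75.0: new, lower pre-tariff price
consumer_price_2 = price_cn_2 * (1 + tariff)  # 93.75: roughly back to the original level
tariff_revenue_2 = price_cn_2 * tariff        # 18.75: still substantial
china_receives_2 = price_cn_2                 # 75.0: China absorbs the cut

print(consumer_price_1, tariff_revenue_1, china_receives_1)
print(consumer_price_2, tariff_revenue_2, china_receives_2)
```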

Monday, August 12, 2019

Aviation Magic

Consider an aircraft flying a clockwise circle at 100 knots airspeed at a constant altitude within an air mass that has a wind speed relative to the ground of 25 knots from 270 degrees. The pilot observes, relative to the air, that nothing changes except the aircraft’s direction of travel.

An object’s kinetic energy is the product of ½ its mass and its squared speed, so for a fixed mass it is proportional to the squared speed.

From the pilot’s perspective, the aircraft’s speed is constant at 100 knots, hence its kinetic energy does not change and is proportional to 100*100 = 10,000.

From a ground observer’s perspective, the aircraft’s speed is 125 knots when its heading is 90 degrees and 75 knots when its heading is 270 degrees. He figures the aircraft’s kinetic energy is proportional to 125*125 = 15,625 on a heading of 90 degrees and 75*75 = 5,625 on a heading of 270 degrees. According to him, the aircraft’s kinetic energy drops by (5625-15625)/15625 = 64% during the turn from 90 degrees to 270 degrees. 64% of the aircraft’s kinetic energy has vanished – where did it go? The ground observer also sees the aircraft’s kinetic energy increase by (15625-5625)/5625 = 178% during the turn from 270 degrees to 90 degrees. Where did this kinetic energy come from?

For every full circle of the aircraft from 90 degrees to 270 degrees and then from 270 degrees to 90 degrees, the ground observer sees the aircraft’s percentage kinetic energy change by -64% followed by +178%. This is an average change in the aircraft’s kinetic energy of (-64+178)/2 = +57%. This corresponds to a net gain of 57% in the aircraft’s squared speed, e.g., from 100*100 to 100*100+0.57*100*100 = 1.57*100*100 = (1.25*100)*(1.25*100), or a gain of 25% in its speed.[1] For example, from 100 knots to 125 knots on the first circle. Evidently, all it takes to achieve high ground speeds when there is a wind is a few circles before setting out on your desired heading.

[1] A gain of 57% is an increase by a factor of (1+0.57) = 1.57. The square root of 1.57 is 1.25.
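
The ground observer's bookkeeping above is easy to reproduce. The short Python sketch below recomputes the quoted percentage changes from the 100-knot airspeed and 25-knot wind, and shows that compounding the two changes (rather than averaging them) brings the kinetic energy back exactly to its starting value after a full circle, which is where the apparent "magic" comes from.

```python
# Reproduce the ground-observer bookkeeping from the post.
airspeed = 100.0   # knots
wind = 25.0        # knots

downwind = airspeed + wind    # ground speed on heading 090: 125 knots
upwind = airspeed - wind      # ground speed on heading 270: 75 knots

ke_down = downwind ** 2       # kinetic energy ~ squared speed: 15625
ke_up = upwind ** 2           # 5625

drop = (ke_up - ke_down) / ke_down      # -0.64  (the "vanished" 64%)
gain = (ke_down - ke_up) / ke_up        # +1.78  (the "found" 178%)

print(f"drop: {drop:.0%}, gain: {gain:.0%}")
print(f"average of the two percentages: {(drop + gain) / 2:.0%}")  # +57%

# Compounding the changes (instead of averaging) returns the original value:
print(ke_down, "->", round(ke_down * (1 + drop) * (1 + gain), 3))   # 15625 -> 15625
```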

Tuesday, August 06, 2019

How does China’s stopping imports of US farm products affect US farmers?

If you believe the media and politicians, US farmers lose a dollar of sales for every dollar less of US farm exports to China.

If China reduces its imports of US farm products, the most likely consequence will be that it imports more farm products from other countries to compensate. But that, in turn, will create a new market for US farm products in those other countries, to make up for their increased exports to China.

The most likely impact, over time, of China reducing its imports of US farm products by a dollar is a loss of a dollar of US farm sales to China and an increase of about a dollar of US farm sales to other countries.

If the media and politicians can't get something as simple as this right, imagine how right they are likely to be about their assessment of the implications of their economic proposals.

Sunday, August 04, 2019

Prescription drug prices - not as simple as portrayed by the politicians

Here is a link to a paper by Frank, Hicks, and Berndt titled "The Price to Consumers of Generic Pharmaceuticals: Beyond the Headlines": https://www.nber.org/papers/w26120.pdf

The prices of generics declined substantially from 2008 to 2016.  This is to be expected in a competitive market.

The media and politicians focus on the increasing prices of non-generic pharmaceuticals.  The reasons for the high prices of new drugs include excessive Government regulation, the liability risk created by greedy lawyers, and the incentives for many medical players to push new drugs over old ones - regardless of whether there is a clear cost-benefit advantage.

If you don't like the high cost of a new drug that might benefit you, why not opt for an older drug available as a generic that may be just as good or almost as good?  Do you really need Repatha?  Why not atorvastatin?  Do you really need Lisinopril?  Why not Telmisartan?

Many "solutions" to the "high cost of drugs" proposed by the media and politicians would reduce the rate of innovation.  That will cause substantial excess deaths going forward.  On the other hand, John Cochrane's blog entries provide good ideas about how to reduce drug costs and health care costs generally.

Here are some excerpts from the paper.
--------------------------------------------------
Context: Generic drug prices have received a great deal of attention in the past few years. Congressional committees, executive agencies and private organizations have all conducted investigations into the pricing patterns for generic drugs. Price spikes for several specific generic drugs have also been widely reported in the media.

Methods: We construct two Laspeyres chained price indexes that capture prices of generic prescription drugs paid by consumers and private health plans. The first reflects direct out-of-pocket payments made by the consumer to a pharmacy for dispensing a generic prescription drug (“direct out-of-pocket CPI”), and the second the total price received by the pharmacy (“total CPI”), comprised of the direct out-of-pocket payment from the consumer plus the price paid to the dispensing pharmacy by the insurer on behalf of the consumer. 

Findings: The chained direct out-of-pocket CPI we construct shows a roughly 50% decline for generic prescription drugs between 2007 and 2016. In addition, between 2007 and 2016 the total CPI for generic prescription drugs fell by nearly 80%.

Conclusions: The U.S. generic prescription pharmaceutical market continues to drive overall prices downward, although pharmacy price declines are not fully passed through to consumers. Our evidence suggests that overall affordability is not the main problem in the generic drug market.

The eye-catching increases in certain generic drug products have drawn attention to generic drugs as a potential source of the most recent rapid rise in spending on prescription drugs. Even though the 1,000% increases and more for drugs that have long lost patent protection raise important and legitimate concerns about how various segments of the industry are functioning, as has evidence of increasing consolidation in the prescription pharmaceuticals market, the broader data on the overall behavior of prices in the U.S. generic prescription pharmaceutical industry paints a different picture.

The U.S. generic prescription pharmaceutical market continues to drive overall prices downward. Thus, our evidence suggests that overall affordability is not the main issue in the generic drug market and that this segment of the U.S. prescription drug market is not responsible for reported growth in prices and spending for prescription drugs overall.

Concerns have been raised about whether consumers are benefiting from the price declines because insurers and prescription benefit managers have been offering products that increasingly shift costs from insurers to consumers. Our evidence finds that consumers are experiencing more burdensome cost sharing and that in fact consumers are bearing a greater share of generic drug costs, yet on balance we find that consumers have experienced substantial overall price declines for generic drugs.

One important question raised by the differential patterns of price declines between overall and out-of-pocket consumer prices is how the benefits of price declines are being shared across the larger supply chain. Our CPI analysis shows that the full amount of the declines in generic prices is not being passed through to consumers. While a number of “back of the envelope” estimates have been made about the degree of pass-through of price reductions to consumers, we believe more systematic analysis of this phenomenon is needed. Moreover, our results also suggest a closer look at the workings of the entire generic drug supply chain (manufacturer, wholesaler, pharmaceutical benefit manager, insurer, and retailer) merits attention.
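
For readers unfamiliar with the index the authors describe, here is a minimal Python sketch of a chained Laspeyres price index - my own illustration with made-up numbers, not the authors' code or data. Each period's link uses the previous period's quantities as weights, and the links are multiplied to form the chained index.

# Chained Laspeyres price index with two hypothetical generic drugs.
prices = [                                  # price per prescription in each period
    {"drug_a": 10.0, "drug_b": 4.0},
    {"drug_a": 9.0,  "drug_b": 3.5},
    {"drug_a": 7.0,  "drug_b": 3.0},
]
quantities = [                              # prescriptions dispensed in each period
    {"drug_a": 100, "drug_b": 300},
    {"drug_a": 120, "drug_b": 320},
    {"drug_a": 150, "drug_b": 350},
]

index = [100.0]                             # base period set to 100
for t in range(1, len(prices)):
    q = quantities[t - 1]                   # Laspeyres: weight by prior-period quantities
    link = (sum(prices[t][d] * q[d] for d in q) /
            sum(prices[t - 1][d] * q[d] for d in q))
    index.append(index[-1] * link)

print([round(x, 1) for x in index])         # [100.0, 88.6, 72.5]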

Saturday, August 03, 2019

Climate vs. Climate Alarm

This is the title of a 2011 talk by Richard Lindzen.

RL's conclusion is that the Climate Alarmists are wrong.

Here are some excerpts from the talk.
-----------------------------------------------------------
The public perception of the climate problem is somewhat schizophrenic. On the one hand, the problem is perceived to be so complex that it cannot be approached without massive computer programs. On the other hand, the physics is claimed to be so basic that the dire conclusions commonly presented are considered to be self-evident.

Consistent with this situation, climate has become a field where there is a distinct separation of theory and modeling. Commonly, in fluid mechanics, theory provides useful constraints and tests when applied to modeling results. This has been notably absent in current work on climate.

In this talk, I will try to show how the greenhouse effect actually works using relatively simple basic concepts. We will see that the greenhouse effect, itself, presents little cause for alarm from increasing levels of CO2 since the effect is modest. Concern is associated with the matter of feedbacks that, in models, lead to amplified responses to CO2. Considerations of basic physics (as opposed to simply intercomparing models) suggests that current concerns are likely to be exaggerated. A variety of independent arguments all lead to the same conclusion.

All attempts to estimate how the climate responds to increasing CO2 depend on how the climate greenhouse actually works. Despite the concerns with the greenhouse effect that have dominated environmental thinking for almost a quarter of a century, the understanding of the effect is far from widespread. Part of the reason is that the popular depiction of the effect as resulting from an infrared ‘blanket’ is seriously misleading, and, as a result, much of the opposition that focuses purely on the radiation is similarly incorrect. The following description is, itself, somewhat oversimplified; however, it is probably adequate for understanding the underlying physics.
-----
As we have seen, the simple existence of the greenhouse effect is neither new nor a cause for alarm. The critical issue is one of feedbacks. This is not a technical detail; it is central, and there is ample reason (as we have already seen) to think that current models are substantially exaggerating the feedbacks.
-----
I hope that what has been shown demonstrates that increasing CO2 and greenhouse warming are not at all indicative of alarm, and that there is ample evidence that the system is not particularly sensitive. Moreover, the high sensitivity of some current models would render the stability of the earth over 4.5 billion years dubious. Engineers have long recognized this and generally avoid feedback factors greater than about 0.1.

Friday, August 02, 2019

The Climate Alarmists’ view that CO2 changes cause temperature changes may be wrong

Here is Nir Shaviv's blog article "The inconvenient truth about the Ice core Carbon Dioxide Temperature Correlations".
-----------------------------------------------
One of the "scientific" highlights in Al Gore's movie is the discussion about the clear correlation between CO2 and temperature, as is obtained in ice cores. To quote, he says the following when discussing the ice-core data (about 40 mins after the beginning for the film):

“The relationship is actually very complicated but there is one relationship that is far more powerful than all the others and it is this. When there is more carbon dioxide, the temperature gets warmer, because it traps more heat from the sun inside.”

Any laymen will understand from this statement that the ice-cores demonstrate a causal link, that higher amounts of CO2 give rise to higher temperatures. Of course, this could indeed be the case, and to some extent, it necessarily is. However, can this conclusion really be drawn from this graph? Can one actually say anything at all about how much CO2 affects the global temperature?

To the dismay of Al Gore, the answer is that this graph doesn't prove at all that CO2 has any effect on the global temperature. All it says is that there is some equilibrium between dissolved CO2 and atmospheric CO2, an equilibrium which depends on the temperature. Of course, the temperature itself can depend on a dozen different factors, including CO2, but just the CO2 / temperature correlation by itself doesn't tell you the strength of the CO2→ΔT link. It doesn't even tell you the sign.

Al Gore uses pyrotechnics to lead his audience to the wrong conclusion. If CO2 affects the temperature, as this graph supposedly demonstrates, then the 20th century CO2 rise should cause a temperature rise larger than the rise seen from the last ice-age to today's interglacial. This is of course wrong. All it says is that we offsetted the dissolution balance of CO2 in the oceans. If we were to stop burning fossil fuels (which is a good thing in general, but totally irrelevant here), then the large CO2 increase would turn into a CO2 decrease, returning back to the pre-industrial level over a century or so. Think for example on a closed coke bottle. It has coke with dissolved CO2 and it has air with gaseous CO2. Just like Earth, most of the CO2 is in the dissolved form. If you warm the coke bottle, the coke cannot hold as much CO2, so it releases a little amount and increases the partial pressure of the gaseous CO2, enough to force the rest of the dissolved CO2 to stay dissolved. Since there is much more dissolved CO2 than gaseous CO2, the amount released from the coke is relatively small.

Of course, the comparison can go only so far. The mechanisms governing CO2 in the oceans are much more complicated such that the equilibrium depends on the amount of biological activity, on the complicated chemical reactions in the oceans, and many more interactions I am probably not aware of. For example, a lower temperature can increase the amount of dust reaching the oceans. This will bring more fertilizing iron which will increase the biological activity (since large parts of the ocean's photosynthesis is nutrient limited) and with it affect the CO2 dissolution balance. The bottom line is that the equilibrium is quite complicated to calculate.

Nevertheless, the equilibrium can be empirically determined by simply reading it straight off the ice-core CO2/temperature graph. The global temperature variations between ice-ages and interglacials is about 4°C. The change in the amount of atmospheric CO2 is about 80 ppm. This gives 20 ppm of oceanic out-gassing per °C.

The main evidence proving that CO2 does not control the climate, but at most can play a second fiddle by just amplifying the variations already present, is that of lags. In all cases where there is a good enough resolution, one finds that the CO2 lags behind the temperature by typically several hundred to a thousand years. Namely, the basic climate driver which controls the temperature cannot be that of CO2. That driver, whatever it is, affects the climate equilibrium, and the temperature changes accordingly. Once the oceans adjust (on time scale of decades to centuries), the CO2 equilibrium changes as well. The changed CO2 can further affect the temperature, but the CO2 / temperature correlation cannot be used to say almost anything about the strength of this link. Note that I write "almost anything", because it turns out that the CO2 temperature correlation can be used to say at least one thing about the temperature sensitivity to CO2 variations, as can be seen in the box below.

It is interesting to note that the IPCC scientific report (e.g., the AR4) avoids this question of lag. Instead of pointing it out, they write that in some cases (e.g., when comparing Antarctic CO2 to temperature data) it is hard to say anything definitive since the data sets come from different cores. This is of course chaff to cover the fact that when CO2 and temperature are measured with the same cores, or when carefully comparing different cores, a lag of typically several hundred years is found to be present, if the quality and resolution permit. Such an example is found in the figure below.

[Figure: Analysis of ice core data from Antarctica by Indermühle et al. (GRL, vol. 27, p. 735, 2000), showing CO2 lagging the temperature by 1200±700 years.]

There are many examples of studies finding lags, a few examples include:
  • Indermühle et al. (GRL, vol. 27, p. 735, 2000), who find that CO2 lags behind the temperature by 1200±700 years, using Antarctic ice-cores between 60 and 20 kyr before present (see figure). 
  • Fischer et al. (Science, vol 283, p. 1712, 1999) reported a time lag 600±400 yr during early de-glacial changes in the last 3 glacial–interglacial transitions. 
  • Siegenthaler et al. (Science, vol. 310, p. 1313, 2005) find a best lag of 1900 years in the Antarctic data. 
  • Monnin et al. (Science vol 291, 112, 2001) find that the start of the CO2 increase in the beginning of the last interglacial lagged the start of the temperature increase by 800 years.
Clearly, the correlation and lags unequivocally demonstrate that the temperature drives changes in the atmospheric CO2 content. The same correlations, however cannot be used to say anything about the temperature's sensitivity to variations in the CO2. I am sure there is some effect in that direction, but to empirically demonstrate it, one needs a correlation between the temperature and CO2 variations, which do not originate from temperature variations. 

The only temperature independent CO2 variations I know of are those of anthropogenic sources, i.e., the 20th century increase, and CO2 variations over geological time scales.

Since the increase of CO2 over the 20th is monotonic, and other climate drivers (e.g., the sun) increased as well, a correlation with temperature is mostly meaningless. This leaves the geological variations in CO2 as the only variations which could be used to empirically estimate the effect of the CO2→ΔT link.

The reason that over geological time scales, the variations do not depend on the temperature is because over these long durations, the total CO2 in the ecosystem varies from a net imbalance between volcanic out-gassing and sedimentation/subduction. This "random walk" in the amount of CO2 is the reason why there were periods with 3 or even 10 times as much CO2 than present, over the past billion years.

Unfortunately, there is no clear correlation between CO2 and temperature over geological time scales. This lack of correlation should have translated into an upper limit on the CO2→ΔT link. However, because the geochemical temperature data is actually biased by the amount of CO2, this lack of correlation result translates into a CO2 doubling sensitivity which is about ΔTx2 ~ 1.0±0.5°C. More about it in this paper.

The moral of this story is that when you are shown data such as the graph by Al Gore, ask yourself what does it really mean. You might be surprised from the answer.

A Climate Denier's CV

Here is a link to Nir Shaviv's CV.

Climate Alarmists would consider NS a Climate Denier.

NS's CV provides some indication that those who are fond of calling others Climate Deniers may be the ones in denial.

Al Sharpton and his Worshipers

Here is Seth Mandel in the JewishWorld Review.

SM is on target.
------------------------------------------------
President Donald Trump has perfected the art of the undorsement, the ability to get his opponents to beatify whoever and whatever he denigrates. Whether a first-term congresswoman, a quarterback or the city of Baltimore, #resistance to his targeting is futile.

Unfortunately, "the enemy of my enemy is my infallible hero" is a terrible approach to politics, aptly demonstrated this week when Trump turned his sights on the public figures who stepped in to defend Baltimore's honor.

Al Sharpton - who should still be seen as a notorious hate figure but has somehow escaped that fate - practically tripped over himself trying to get Trump's attention. It worked. "Al is a con man, a troublemaker, always looking for a score," Trump tweeted. "Just doing his thing. Must have intimidated Comcast/NBC. Hates Whites & Cops!"

Then ensued one of the more depressing news cycles of the year, as major Democratic presidential candidates praised Sharpton to the heavens.

Sharpton "has spent his life fighting for what's right and working to improve our nation, even in the face of hate. It's shameful, yet unsurprising that Trump would continue to attack those who have done so much for our country," tweeted Sen. Kamala D. Harris of California.

Sen. Elizabeth Warren of Massachusetts insisted that Sharpton "has dedicated his life to the fight for justice for all. No amount of racist tweets from the man in the White House will erase that - and we must not let them divide us. I stand with my friend Al Sharpton in calling out these ongoing attacks on people of color."

New York Mayor Bill de Blasio boasted of his decades-long relationship with Sharpton, thanks to which he could attest that "Trump's characterization is not only disrespectful, it's untrue."

Then former Vice President Joe Biden, that great moderate hope, added his voice to the chorus, calling Sharpton "a champion in the fight for civil rights."

Say it ain't so, Joe.

Sharpton is unworthy of such praise, so much so that the decision to back him reflexively is a massive moral demerit. Calling Sharpton a lifelong fighter for "justice" ignores his history of race-baiting and deadly anti-Semitic incitement.

In August 1991, after City College professor Leonard Jeffries ranted that "everyone knows rich Jews helped finance the slave trade" and that "Russian Jewry had a particular control over the movies, and their financial partners, the Mafia, put together a financial system of destruction of black people," Sharpton rushed to his defense, threatening, "If the Jews want to get it on, tell them to pin their yarmulkes back and come over to my house." Days later, a Jewish driver accidentally struck and killed a black 7-year-old named Gavin Cato in Crown Heights, Brooklyn. That set off three days of rioting, in the first hours of which a group of African Americans chanting "Kill the Jew" did just that, beating and stabbing an Orthodox Jew named Yankel Rosenbaum, who died of his injuries.

But Sharpton was only warming up. He led crowds in shouting for "justice" - pay attention here, Sen. Warren - as rioters wantonly beat Jews in the streets to chants of "Heil Hitler." At Cato's funeral, Sharpton poured out every last drop of gasoline he had left: "Talk about how Oppenheimer in South Africa sends diamonds straight to Tel Aviv and deals with the diamond merchants right here in Crown Heights. The issue is not anti-Semitism; the issue is apartheid. . . . All we want to say is what Jesus said: If you offend one of these little ones, you got to pay for it. No compromise, no meetings, no kaffeeklatsch, no skinnin' and grinnin'."

Nor was that an isolated incident. In 1995, Sharpton and his National Action Network colleague Morris Powell agitated against Fred Harari, a Jewish shop owner in Harlem. "We are not going to stand idly by and let a Jewish person come in black Harlem and methodically drive black people out of business up and down 125th Street," Powell said. Sharpton added, "We will not stand by and allow them to move this brother so that some white interloper can expand his business on 125th Street." A few months later, a gunman entered the store and set it ablaze, killing seven and then shooting himself. When the shop reopened, Powell was back at it, warning, "Freddy's not dead."

Sharpton, meanwhile, is free of shame or apology. "You only repent when you mean it, and I have done nothing wrong," he insisted years later. In 2011, he wrote a gobsmacking piece of revisionist history for the New York Daily News, claiming his remarks were being manipulated by "extremist Jews," though he conceded that some of the marchers' rhetoric "played to the extremists rather than raising the issue of the value of this young man whom we were so concerned about." Sharpton then pronounced: "It is not enough to be right. We had our marches, and they were all peaceful." That is, Sharpton doesn't think he's getting enough credit for his behavior.

So how did someone with this record become a figure who could be praised unequivocally by leading presidential candidates and no one bats an eye? The answer is, he won a game of chicken. Sharpton's smartest move was to run for the Democratic presidential nomination in 2004. It put his rivals in a bind: Attacking him on his record risked alienating black voters. But ignoring his record would sanitize it by legitimizing his candidacy and rendering future criticism vulnerable to an effective counter: Why didn't you say it to my face?

Sharpton, much like Trump himself, also made use of the opportunities afforded him by pop culture. During the campaign, in December 2003, Sharpton hosted "Saturday Night Live." Over the years he appeared as himself in shows such as "Boston Legal," "Law & Order: Special Victims Unit" and "Girlfriends," as well as the 2002 Adam Sandler comedy "Mr. Deeds." In 2011, MSNBC gave him his own show, which he hosts to this day, in addition to his participation in live campaign coverage. During the 2008 campaign, Barack Obama leaned on Sharpton to help fend off criticism from other black leaders, and Sharpton visited Obama's White House more than 100 times. Sharpton helped de Blasio's mayoral run in 2013 and was rewarded with unprecedented access.

And that's the most galling part of the mainstreaming of Al Sharpton. He never sought absolution. He simply got away with it.

So at Wednesday night's Democratic presidential debate, no one asked Warren about Sharpton's record or the message she might be sending with such fulsome praise. Nor was South Bend, Indiana, Mayor Pete Buttigieg - who has struck up a very public alliance with Sharpton in an attempt to burnish his standing with black voters - prodded about the hypocrisy on display. Republicans, Buttigieg lectured, "are supporting naked racism in the White House, or at best silent about it. And if you are watching this at home and you are a Republican member of Congress, consider the fact that, when the sun sets on your career and they are writing your story, of all the good and bad things you did in your life, the thing you will be remembered for is whether, in this moment, with this president, you found the courage to stand up to him or you continued to put party over country."

What would Buttigieg say about his own support of a public figure with a long history of bigotry? We don't know, because no one thought to ask him at the debate. (I have repeatedly asked his campaign for comment, to no avail.)

We are routinely told that harsh criticism of minority members of Congress amounts to incitement to violence. What of Sharpton, who initially made his career out of explicit incitement to violence? This is no idle concern. "The increase in the number of physical assaults against Orthodox Jews in New York City is a matter of empirical fact," reports Armin Rosen at Tablet. "Anti-Semitic hate crimes against persons, which describes nearly everything involving physical contact, jumped from 17 in 2017 to 33 in 2018, with the number for the first half of 2019 standing at 19, according to the NYPD's hate crime unit. . . . And yet, many believe the attacks are even more widespread than has been reported." De Blasio claims anti-Semitism is a right-wing phenomenon, but in New York, Rosen writes, "the perpetrators who have been recorded on CCTV cameras are overwhelmingly black and Hispanic."

You can believe that Jewish lives matter, or you can pepper your public career with slavish fan fiction about Al Sharpton. When the sun sets on the careers of this crop of Democrats and their stories are written, what will the record show about the choice they made?