Sunday, August 25, 2019

More reason to question the extent of anthropogenic climate change

Here is a link to a paper by Kauppinen and Malmi, "No Experimental Evidence For The Significant Anthropogenic Climate Change".

The failure of Climate Alarmists to contemplate the impact of the sun on climate is a fatal flaw.  This conclusion is shared by some of the world's top scientists working on climate change, e.g., Nir Shaviv and Henrik Svensmark.

Here are some excerpts from the paper.
-----------------------------------------
Abstract.

In this paper we will prove that the GCM models used in the IPCC AR5 report fail to calculate the influence of low cloud cover changes on the global temperature. That is why those models attribute a very small temperature change to natural variation, leaving a very large change to the contribution of greenhouse gases in the observed temperature. This is the reason why the IPCC has to use a very large sensitivity to compensate for a too-small natural component. Further, they have to leave out the strong negative feedback due to clouds in order to magnify the sensitivity. In addition, this paper proves that changes in the low cloud cover fraction practically control the global temperature.

The climate sensitivity has an extremely large uncertainty in the scientific literature. The smallest estimated values are very close to zero, while the highest ones reach 9 degrees Celsius for a doubling of CO2. The majority of the papers use theoretical general circulation models (GCMs) for the estimation. These models give very large sensitivities with a very large uncertainty range; typical sensitivity values are between 2 and 5 degrees. The IPCC uses these papers to estimate the global temperature anomalies and the climate sensitivity. However, there are many papers in which sensitivities lower than one degree are estimated without using GCMs. The basic problem is still the missing experimental evidence for the climate sensitivity. One of the authors (JK) worked as an expert reviewer of the IPCC AR5 report. One of his comments concerned the missing experimental evidence for the very large sensitivity presented in the report [1]. In response to the comment, the IPCC claims that observational evidence exists, for example, in the Technical Summary of the report. In this paper we will study the case carefully.

2. Low cloud cover practically controls the global temperature

The basic task is to divide the observed global temperature anomaly into two parts: the natural component and the part due to greenhouse gases. In order to study the response, we have to re-present Figure TS.12 from the Technical Summary of the IPCC AR5 report [1]. This figure is our Figure 1. Here we highlight the subfigure “Land and ocean surface” in Figure 1. Only the black curve in that figure is an observed temperature anomaly. The red and blue envelopes are computed using climate models. We do not consider computational results to be experimental evidence. The results obtained by climate models are especially questionable because they conflict with each other.

In Figure 2 we see the observed global temperature anomaly (red) and global low cloud cover changes (blue). These experimental observations indicate that a 1 % increase of the low cloud cover fraction decreases the temperature by 0.11°C. This number is in very good agreement with the theory given in the papers [3, 2, 4]. Using this result, we are able to present the natural temperature anomaly by multiplying the changes of the low cloud cover by −0.11°C/%. This natural contribution (blue) is shown in Figure 3 superimposed on the observed temperature anomaly (red). As we can see, there is no room for the contribution of greenhouse gases, i.e., anthropogenic forcing, within this experimental accuracy. Even though the monthly temperature anomaly is very noisy, it is easy to notice a couple of decreasing periods within the increasing trend of the temperature. This behavior cannot be explained by the monotonically increasing concentration of CO2, and it seems to be far beyond the accuracy of the climate models.
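
As a rough illustration of the conversion the authors describe, here is a minimal Python sketch that turns a low-cloud-cover departure into a "natural" temperature anomaly. The −0.11°C per 1 % coefficient is taken from the excerpt above; the baseline and the sample cloud-cover values are hypothetical placeholders, not observational data.

```python
# Sketch of the conversion described above: natural temperature anomaly
# estimated as (change in low cloud cover, in percentage points) * (-0.11 C per %).
# The coefficient is from the paper excerpt; the cloud-cover values are made up.

COEFF_C_PER_PERCENT = -0.11  # degrees C per 1 % of low cloud cover

def natural_anomaly(cloud_cover_percent, baseline_percent):
    """Temperature anomaly implied by a departure of low cloud cover
    from a baseline value, using the linear relation quoted above."""
    delta_c = cloud_cover_percent - baseline_percent  # percentage points
    return COEFF_C_PER_PERCENT * delta_c

if __name__ == "__main__":
    baseline = 26.0  # hypothetical mean low cloud cover, %
    for cover in (25.0, 26.0, 27.0, 28.0):
        print(f"cloud cover {cover:.1f} % -> anomaly {natural_anomaly(cover, baseline):+.2f} C")
```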

The red curve in Figures 2 and 3 corresponds to the black curve, between the years 1983 and 2008, in the above-mentioned subfigure “Land and ocean surface”. If the clouds and CO2 were taken into account correctly in the climate models, both the blue and red envelopes should overlap the observed black curve. As we see, the trend of the blue envelope is, if anything, decreasing. We suggest this is due to incorrect or missing treatment of the low cloud cover contribution. In the AR5 report it is even recognized that the low clouds give the largest uncertainty in the computations. In spite of this, the IPCC still assumes that the difference between the blue and red envelopes in Figure 1 is the contribution of greenhouse gases. Unfortunately, the time interval (1983–2008) in Figure 2 is limited to 25 years because of the lack of low cloud cover data. During this time period the CO2 concentration increased from 343 ppm to 386 ppm, and both Figure 1 (IPCC) and Figure 2 show an observed temperature increase of about 0.4°C. The actual global temperature change, when the concentration of CO2 rises from C0 to C, is

∆T = ∆T2CO2 · ln(C/C0)/ln(2) − 11°C·∆c,          (1)

where ∆T2CO2 is the global temperature change when the CO2 concentration is doubled and ∆c is the change of the low cloud cover fraction. The first and second terms are the contributions of CO2 [5] and the low clouds, respectively. Using the sensitivity ∆T2CO2 = 0.24°C derived in the papers [3, 2, 4], the contribution of greenhouse gases to the temperature is only about 0.04°C according to the first term of the above equation. This is why we do not see this small increase in temperature in Figure 3, where the temperature anomaly is quite noisy with a one-month time resolution. It is clearly seen in Figure 2 that the red and blue anomalies are nearly mirror images. This means that the first term is much smaller than the absolute value of the second term (11°C·∆c) in equation (1).
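
To make the size of the first term concrete, here is a small Python check of the arithmetic quoted above (sensitivity 0.24°C, CO2 rising from 343 ppm to 386 ppm, both from the excerpt). The cloud-cover change used for the second term is a hypothetical value chosen only to show the relative magnitudes.

```python
import math

# Equation (1) from the excerpt: dT = dT_2xCO2 * ln(C/C0)/ln(2) - 11 * dc,
# with dT_2xCO2 = 0.24 C (the paper's sensitivity), C0 = 343 ppm, C = 386 ppm.
# dc (change in low cloud cover fraction) is a made-up value for illustration.

dT_2xCO2 = 0.24          # C per CO2 doubling, as quoted in the excerpt
C0, C = 343.0, 386.0     # ppm, 1983 -> 2008 per the excerpt
dc = -0.03               # hypothetical -3 percentage-point change in low cloud fraction

co2_term = dT_2xCO2 * math.log(C / C0) / math.log(2.0)
cloud_term = -11.0 * dc  # 11 C per unit change in cloud fraction, i.e. 0.11 C per %

print(f"CO2 term:   {co2_term:.3f} C (about 0.04 C, as stated)")
print(f"Cloud term: {cloud_term:.3f} C (dominates for the assumed dc)")
print(f"Total dT:   {co2_term + cloud_term:.3f} C")
```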

It turns out that the changes in the relative humidity and in the low cloud cover depend on each other [4]. So, instead of the low cloud cover, we can use the changes of the relative humidity to derive the natural temperature anomaly. According to the observations, a 1 % increase of the relative humidity decreases the temperature by 0.15°C, and consequently the last term in the above equation can be approximated by −15°C·∆φ, where ∆φ is the change of the relative humidity at the altitude of the low clouds.

Figure 4 shows the sum of the temperature changes due to the natural and CO2 contributions compared with the observed temperature anomaly. The natural component has been calculated using the changes of the relative humidity. Now we see that the natural forcing does not fully explain the observed temperature anomaly, so we have to add the contribution of CO2 (green line), because the time interval is now 40 years (1970–2010). Over this interval the concentration of CO2 increased from 326 ppm to 389 ppm. The green line has been calculated using the sensitivity 0.24°C, which seems to be correct. In Figure 4 we see clearly how well a change in the relative humidity can model the strong temperature minimum around the year 1975. This is impossible to interpret in terms of the CO2 concentration.

The IPCC climate sensitivity is about one order of magnitude too high, because a strong negative feedback of the clouds is missing in the climate models. If we pay attention to the fact that only a small part of the increased CO2 concentration is anthropogenic, we have to recognize that anthropogenic climate change does not exist in practice. The major part of the extra CO2 is emitted from the oceans [6], according to Henry's law. The low clouds practically control the global average temperature. During the last hundred years the temperature has increased by about 0.1°C because of CO2. The human contribution was about 0.01°C.

3. Conclusion
We have proven that the GCM models used in the IPCC AR5 report cannot correctly compute the natural component included in the observed global temperature. The reason is that the models fail to derive the influence of the low cloud cover fraction on the global temperature. A too-small natural component results in a too-large portion being attributed to greenhouse gases like carbon dioxide. That is why the IPCC presents a climate sensitivity more than one order of magnitude larger than our sensitivity of 0.24°C. Because the anthropogenic portion of the increased CO2 is less than 10 %, we have practically no anthropogenic climate change. The low clouds mainly control the global temperature.

Friday, August 23, 2019

Climate Change: What's the Worst Case?

Here is an entry from Judith Curry's blog.  JC is a recognized expert in the field.

JC is on target.

The message: The Alarmists are "Alarmists".  The climate change stories you usually hear about are not credible.

My prediction is that within ten years, the current "scientific consensus" you hear about will be proven to be a combination of bad physics and bad statistics.

JC's comments at the end of the  blog entry about the "peer" reactions she has experienced are telling.  Alarmist bias among climate change "scientists" is rampant.
-------------------------------------------
My new manuscript is now available.

A link to my new paper ‘Climate Change: What’s the Worst Case?’ is provided here [worst case paper final (1)]

A few words on the intended audience and motivation for writing this:

First and foremost, this is written for the clients of Climate Forecast Applications Network who are interested in scenarios of future climate change [link]

Second, this paper is written as a contribution to my series of academic papers on the topic of uncertainty in climate science:
Climate Science and the Uncertainty Monster
Reasoning About Climate Uncertainty
Nullifying the Climate Null Hypothesis
Climate Change: No Consensus on Consensus
Climate Uncertainty and Risk

Third, the paper is written to inform the public debate on climate change and policy makers. I am ever hopeful that some sanity can be interjected into all this.

This paper is particularly relevant in light of the preceding post on consensus, and Gavin’s desire for a better way to treat the extreme tails.

Overview of contents

I’m reproducing the Abstract, Introduction, and Conclusions in this blog post; I encourage you to read the entire paper.

Abstract. The objective of this paper is to provide a broader framing for how we assess and reason about possible worst-case outcomes for 21st century climate change. A possibilistic approach is proposed as a framework for summarizing our knowledge about projections of 21st century climate outcomes. Different methods for generating and justifying scenarios of future outcomes are described. Consideration of atmospheric emissions/concentration scenarios, equilibrium climate sensitivity, and sea-level rise projections illustrate different types of constraints and uncertainties in assessing worst-case outcomes. A rationale is provided for distinguishing between the conceivable worst case, the possible worst case and the plausible worst case, each of which plays different roles in scientific research versus risk management.

1. Introduction

The concern over climate change is not so much about the warming that has occurred over the past century. Rather, the concern is about projections of 21st century climate change based on climate model simulations of human-caused global warming, particularly those driven by the RCP8.5 greenhouse gas concentration scenario.

The Intergovernmental Panel on Climate Change (IPCC) Assessment Reports have focused on assessing a likely range (>66% probability) for projections in response to different emissions concentration pathways. Oppenheimer et al. (2007) contends that the emphasis on consensus in IPCC reports has been on expected outcomes, which then become anchored via numerical estimates in the minds of policy makers. Thus, the tails of the distribution of climate impacts, where experts may disagree on likelihood or where understanding is limited, are often understated in the assessment process, and then exaggerated in public discourse on climate change.

In an influential paper, Weitzman (2009) argued that climate policy should be directed at reducing the risks of worst-case outcomes, not at balancing the most likely values of costs and benefits. Ackerman (2017) has argued that policy should be based on the credible worst-case outcome. Worst-case scenarios of 21st century sea level rise are becoming anchored as outcomes that are driving local adaptation plans (e.g. Katsman et al. 2011). Projections of future extreme weather/climate events driven by the worst-case RCP8.5 scenario are highly influential in the public discourse on climate change (e.g. Wallace-Wells, 2019).

The risk management literature has discussed the need for a broad range of scenarios of future climate outcomes (e.g., Trutnevyte et al. 2016). Reporting the full range of plausible and possible outcomes, even if unlikely, controversial or poorly understood, is essential for scientific assessments for policy making. The challenge is to articulate an appropriately broad range of future scenarios, including worst-case scenarios, while rejecting impossible scenarios.

How to rationally make judgments about the plausibility of extreme scenarios and outcomes remains a topic that has received too little attention. Are all of the ‘worst-case’ climate outcomes described in assessment reports, journal publications and the media, actually plausible? Are some of these outcomes impossible? On the other hand, are there unexplored worst-case scenarios that we have missed, that could turn out to be real outcomes? Are there too many unknowns for us to have confidence that we have credibly identified the worst case? What threshold of plausibility or credibility should be used when assessing these extreme scenarios for policy making and risk management?

This paper explores these questions by integrating climate science with perspectives from the philosophy of science and risk management. The objective is to provide a broader framing of the 21st century climate change problem in the context of how we assess and reason about worst-case climate outcomes. A possibilistic framework is articulated for organizing our knowledge about 21st century projections, including how we extend partial positions in identifying plausible worst-case scenarios of 21st century climate change. Consideration of atmospheric emissions/concentration scenarios, equilibrium climate sensitivity, and sea-level rise illustrates different types of constraints and uncertainties in assessing worst-case outcomes. This approach provides a rationale for distinguishing between the conceivable worst case, the possible worst case and the plausible worst case, each of which plays different roles in scientific research versus risk management.

2. Possibilistic framework

3. Scenarios of future outcomes

3.1 Scenario justification

3.2 Worst-case classification

3.3 Alternative scenarios

4. Is RCP8.5 plausible?

5. Climate sensitivity

6. Sea level rise

6.1 Worst-case scenarios

6.2 Possibility distribution

6.3 Alternative scenarios

7. Conclusions

The purpose of generating scenarios of future outcomes is that we should not be too surprised when the future eventually arrives. Projections of 21st century climate change and sea level rise are associated with deep uncertainty and rapidly advancing knowledge frontiers. The objective of this paper has been to articulate a strategy for portraying scientific understanding of the full range of possible scenarios of 21st century climate change and sea level rise in context of a rapidly expanding knowledge base, with a focus on worst-case scenarios.

A classification of future scenarios is presented, based on their relative immunity to rejection given our current background knowledge and assessments of the knowledge frontier. The logic of partial positions allows for clarifying what we actually know with confidence, versus what is more speculative and uncertain or impossible. To avoid the Alice in Wonderland syndrome of scenarios that include too many implausible assumptions, published worst-case scenarios are assessed using the plausibility criterion of including only one borderline implausible assumption (where experts disagree on plausibility).

The possibilistic framework presented here provides a more nuanced way for articulating our foreknowledge than either by attempting, on the one hand, to construct probabilities of future outcomes, or on the other hand simply by labeling some statements about the future as possible. The possibilistic classification also avoids ignoring scenarios or classifying them as extremely unlikely if they are driven by processes that are poorly understood or not easily quantified.

The concepts of the possibility distribution, worst-case scenarios and partial positions are relevant to decision making under deep uncertainty (e.g. Walker et al. 2016), where precautionary and robust approaches are appropriate. Consideration of worst-case scenarios is an essential feature of precaution. A robust policy is defined as yielding outcomes that are deemed to be satisfactory across a wide range of plausible future outcomes. Robust policy making interfaces well with possibilistic approaches that generate a range of possible futures (e.g. Lempert et al. 2012). Partial positions are of relevance to flexible defense measures in the face of deep uncertainty in future projections (e.g. Oppenheimer and Alley, 2017).

Returning to Ackerman’s (2017) argument that policy should be based on the credible worst-case outcome, the issue then becomes how to judge what is ‘credible.’ It has been argued here that a useful criterion for a plausible (credible) worst-case climate outcome is that at most one borderline implausible assumption – defined as an assumption where experts disagree as to whether or not it is plausible – is included in developing the scenario. Using this criterion, the following summarizes my assessment of the plausible (credible) worst-case climate outcomes, based upon our current background knowledge: 
  • The largest rates of warming that are often cited in impact assessment analyses (e.g., 4.5 or 5°C) rely on climate models being driven by a borderline implausible concentration/emission scenario (RCP8.5).
  • The IPCC AR5 (2013) likely range of warming at the end of the 21st century has a top-range value of 3.1°C, if the RCP8.5-derived values are eliminated. Even the more moderate amount of warming of 3.1°C relies on climate models with values of the equilibrium climate sensitivity that are larger than can be defended based on analysis of historical climate change. Further, these rates of warming explicitly assume that the climate of the 21st century will be driven solely by anthropogenic changes to the atmospheric concentration, neglecting 21st century variations in the sun and solar indirect effects, volcanic eruptions, and multi-decadal to millennial scale ocean oscillations. Natural processes have the potential to counteract or amplify the impacts of any manmade warming.
  • Estimates of 21st century sea level rise exceeding 1 m require at least one borderline implausible or very weakly justified assumption. Allowing for one borderline implausible assumption in the sea level rise projection produces high-end estimates of sea level rise of 1.1 to 1.6 m. Higher estimates are produced using multiple borderline implausible or very weakly justified assumptions. The most extreme of the published worst-case scenarios require a cascade of events, each of which is extremely unlikely to borderline impossible based on our current knowledge base. However, given the substantial uncertainties and unknowns surrounding ice sheet dynamics, these scenarios should not be rejected as impossible.
The approach presented here is very different from the practice of the IPCC assessments and their focus on determining a likely range driven by human-caused warming. In climate science there has been a tension between the drive towards consensus to support policy making versus exploratory speculation and research that pushes forward the knowledge frontier (e.g. Curry and Webster, 2013). The possibility analysis presented here integrates both approaches by providing a useful framework for integrating expert speculation and model simulations with more firmly established theory and observations. This approach demonstrates a way of stratifying the current knowledge base that is consistent with deep uncertainty, disagreement among experts and a rapidly evolving knowledge base. Consideration of a more extensive range of future scenarios of climate outcomes can stimulate climate research as well as provide a better foundation for robust decision making under conditions of deep uncertainty.

Publication status

Since I resigned my faculty position, there has been little motivation for me to publish in peer reviewed journals. And I don’t miss the little ‘games’ of the peer review process, not to mention the hostility and nastiness of editors and reviewers who have an agenda.

However, one of my clients wants me to publish more journal articles. This client particularly encouraged me to publish something related to my Special Report on Sea Level and Climate Change. I submitted a shorter version of this paper, in a more academic style, for publication in a climate journal. It was rejected. Here is my ‘favorite’ comment from one of the reviewers:

“Overall, there is the danger that the paper is used by unscrupulous people to create confusion or to discredit climate or sea-level science. Hence, I suggest that the author reconsiders the essence of its contribution to the scientific debate on climate and sea-level science.”

You get the picture. I can certainly get some version of this published somewhere, but this review reminded me why I shouldn’t bother with official ‘peer review.’ Publishing my research on Climate Etc. and as Reports ‘published’ by my company allows me to write my papers in a longer format, including as many references as I want. I can also ‘editorialize’ as I deem appropriate. In summary, I can write what I want, without worrying about the norms and agendas of the ‘establishment.’ Most of my readers want to read MY judgments, rather than something I think I can get past ‘peer reviewers.’

This particular paper is titled as a ‘Working Paper’, in the tradition often used by economists and legal scholars in issuing their reports. It is publicly available for discussion, and I can revise it when appropriate. I hope it will stimulate people to actually think about these issues and discuss them. I look forward to a lively review of this paper.

And finally, it is difficult to see how this paper could be categorized as ‘contrarian.’ It is not even ‘lukewarm.’ It discusses worst-case scenarios, and how to think about their plausibility. In fact, in one of the threads at WUWT discussing one of my previous ‘worst-case’ posts, commenters thought that this was way too ‘alarmist’ to be posted at WUWT.

Bottom line: we need to think harder and differently about climate change. This paper helps provide a framework for stepping beyond the little box that we are currently caught in.

Thursday, August 22, 2019

A further decline in our freedom

Here is Judge Andrew Napolitano's "More Spying and Lying".

JAN is on target.

We are no longer a free country, and we are getting less free.
----------------------------------------------------
While most of us have been thinking about the end of summer and while the political class frets over the Democratic presidential debates and the aborted visit of two members of Congress to Israel, the Trump administration has quietly moved to extend and make permanent the government's authority to spy on all persons in America.

The president, never at a loss for words, must have been asked by the intelligence community he once reviled not to address these matters in public.

These matters include the very means and the very secret court about which he complained loud and long during the Mueller investigation. Now, he wants to be able to unleash permanently on all of us the evils he claims were visited upon him by the Obama-era FBI and by his own FBI. What's going on?

Here is the backstory.

After the lawlessness of Watergate had been exposed — a president spying on his political adversaries without warrants in the name of national security — Congress enacted in 1978 the Foreign Intelligence Surveillance Act. It prescribed a means for surveillance other than that which the Constitution requires.

The Fourth Amendment to the Constitution — written in the aftermath of British soldiers and agents using general warrants obtained from a secret court in London to spy on whomever in the colonies they wished and to seize whatever they found — was ratified as part of the Bill of Rights to limit the government's ability to intrude upon the privacy of all persons, thereby prohibiting those procedures used by the British.

Thus, we have the constitutional requirements that no searches and seizures can occur without a warrant issued by a judge based on a showing, under oath, of probable cause of crime. The courts have uniformly characterized electronic surveillance as a search.

I am not addressing eyesight surveillance on a public street. I am addressing electronic surveillance wherever one is when one sends or receives digital communications. FISA is an unconstitutional congressional effort to lower the standards required by the Fourth Amendment from probable cause of crime to probable cause of foreign agency.

Can Congress do that? Can it change a provision of the Constitution? Of course not. If it could, we wouldn't have a Constitution.

It gets worse.

The court established by FISA — that's the same court that President Donald Trump asserts authorized spying on him in 2015 and 2016 — has morphed the requirement of probable cause of being a foreign agent to probable cause of communicating with a foreign person as the standard for authorizing surveillance.

What was initially aimed at foreign agents physically present in the United States has secretly become a means to spy on innocent Americans. In Trump's case, the FISA court used the foreign and irrelevant communications of two part-time campaign workers to justify surveillance on the campaign.

Add to all this the 2002 secret order of President George W. Bush directing the National Security Agency to spy on all in America all the time without warrants — this is what Edward Snowden exposed in 2013 — and one can see what has happened.

What happened?

What happened was the creation of a surveillance state in America that came about by secret court rulings and a once-secret presidential order. As a result of this, part of the government goes to the secret FISA court and obtains search warrants on flimsy and unconstitutional grounds and part of the government bypasses FISA altogether and spies on everyone in America and denies it and lies about it.

Bill Binney, the genius mathematician who once worked for the NSA and now is its harshest critic, has stated many times that, as unconstitutional as FISA is, it is a pretext for NSA spying on all persons in America all the time.

How pervasive is this unlawful spying? According to Binney, the NSA's 60,000 domestic spies capture the content and the keystrokes of every communication transmitted on fiber optic cables into or out of or wholly within the United States. And they do so 24/7 — without warrants.

Now, back to that quiet late summer proposal by the Trump administration. Some of the statutes that govern who can go to the FISA court and under what circumstances they can go are about to expire. Inexplicably, the president once victimized by FISA wants to make these statutes permanent. And he wants to do so knowing that they are essentially a facade for spying. That would institutionalize the now decades-long federal assault on privacy and evasion of constitutional norms.

It would also place Trump in the same category as his two immediate predecessors, who regularly ordered government agents to violate the Fourth Amendment and then denied they had done so.

Some of my Fox colleagues joke with me that I am shoveling against the tide when it comes to defending the right to privacy. They claim that there is no more privacy. I disagree with them. As long as we still have a Constitution, it must be taken seriously and must mean what it says. And its intentionally stringent requirements for enabling the government to invade privacy remain the law of the land. The president has sworn to uphold the Constitution, not the NSA.

The late Supreme Court Justice George Sutherland once wrote that we cannot pick and choose which parts of the Constitution to follow and which to ignore. If we could, the Constitution would be meaningless.

Did he foresee our present woes when he wrote, "If the provisions of the Constitution be not upheld when they pinch as well as when they comfort, they may as well be abandoned"?

Is that where we are headed?

Saturday, August 17, 2019

Climate and cosmic rays

Here is a very interesting partly biographical article by Nir Shaviv titled "How Might Climate be Influenced by Cosmic Rays".

This influence has been almost totally ignored by the vast majority of climate scientists - no wonder the climate models quoted so often don't work well.

Nir Shaviv, IBM Einstein Fellow and Member in the School of Natural Sciences, is focusing on cosmic ray diffusion in the dynamic galaxy, the solar cosmic ray–climate link, and the appearance of extremely luminous (super-Eddington) states in stellar evolution during his stay at the Institute for Advanced Study in Princeton. Shaviv is Professor at the Racah Institute of Physics at the Hebrew University of Jerusalem.
---------------------------------------------------
In 1913, Victor Hess measured the background level of atmospheric ionization while ascending with a balloon. By doing so, he discovered that Earth is continuously bathed in ionizing radiation. These cosmic rays primarily consist of protons and heavier nuclei with energies between their rest mass and a trillion times larger. In 1934, Walter Baade and Fritz Zwicky suggested that cosmic rays originate from supernovae, the explosive death of massive stars. However, only in 2013 was it directly proved, using gamma-ray observations with the FERMI satellite, that cosmic rays are indeed accelerated by supernova remnants. Thus, the amount of ionization in the lower atmosphere is almost entirely governed by supernova explosions that took place in the solar system’s galactic neighborhood in the past twenty million years or so.

Besides being messengers from ancient explosions, cosmic rays are extremely interesting because they link together so many different phenomena. They tell us about the galactic geography, about the history of meteorites or of solar activity, they can potentially tell us about the existence of dark matter, and apparently they can even affect climate here on Earth. They can explain many of the past climate variations, which in turn can be used to study the Milky Way.

The idea that cosmic rays may affect climate through modulation of the cosmic ray ionization in the atmosphere goes back to Edward Ney in 1959. It was known that solar wind modulates the flux of cosmic rays reaching Earth—a high solar activity deflects more of the cosmic rays reaching the inner solar system, and with it reduces the atmospheric ionization. Ney raised the idea that this ionization could have some climatic effect. This would immediately link solar activity with climate variations, and explain things like the little ice age during the Maunder minimum, when sunspots were a rare occurrence on the solar surface.

In the 1990s, Henrik Svensmark from Copenhagen brought the first empirical evidence of this link in the form of a correlation between cloud cover and the cosmic ray flux variations over the solar cycle. This link was later supported with further evidence including climate correlations with cosmic ray flux variations that are independent of solar activity, as I describe below, and, more recently, with laboratory experiments showing how ions play a role in the nucleation of small aerosols and their growth to larger ones.

In 2000, I was asked by a German colleague about possible effects that supernovae could have on life on Earth. After researching a bit, I stumbled on Svensmark’s results and realized that the solar system’s galactic environment should be changing on time scales of tens of millions of years. If cosmic rays affect the terrestrial climate, we should see a clear signature of the galactic spiral arm passages in the paleoclimatic data, through which we pass every 150 million years. This is because spiral arms are the regions where most supernovae take place in our galaxy. Little did I know, it would take me on a still ongoing field trip to the Milky Way.

The main evidence linking the galactic environment and climate on Earth is the exposure ages of iron meteorites. Exposure ages of meteorites are the inferred duration between their breakup from their parent bodies and their penetration into Earth’s atmosphere. They are obtained by measuring the radioactive and stable isotopes accumulated through interaction with the cosmic rays perfusing the solar system. It turns out that if one looks at exposure ages a bit differently than previously done, by assuming that meteorites form at a statistically constant rate while the cosmic ray flux can vary, as opposed to the opposite, then the cosmic ray flux history can be reconstructed. It exhibits seven clear cycles, which coincide with the seven periods of ice-age epochs that took place over the past billion years. On longer time scales, it is possible to reconstruct the overall cosmic ray flux variations from a changed star formation rate in the Milky Way, though less reliably. The variable star formation rate can explain why ice-age epochs existed over the past billion years and between one and two billion years ago, but not in other eons.

I later joined forces with Canadian geochemist Ján Veizer who had the best geochemical reconstruction of the temperature over the past half billion years, during which multicellular life left fossils for his group to dig and measure. His original goal was to fingerprint the role of CO2 over geological time scales, but no correlation with the paleotemperature was apparent. On the other hand, his temperature reconstruction fit the cosmic ray reconstruction like a glove. When we published these results, we instantly became personae non gratae in certain communities, not because we offered a data-supported explanation to the long-term climate variations, but because we dared say that CO2 can at most have a modest effect on the global temperature.

Besides the spiral arm passages, our galactic motion should give rise to a faster cosmic ray flux modulation—in addition to the solar system’s orbit around the galaxy, with roughly a 250-million-year period, the solar system also oscillates perpendicular to the galactic plane. Since the cosmic ray density is higher at the plane, it should be colder every time the solar system crosses it, which depending on the exact amount of mass in the disk should be every 30 to 40 million years.

A decade ago, the geochemical climate record showed hints of a 32-million-year periodicity, with peak cooling taking place a few million years ago, as expected from the last plane passage. Together with Veizer and a third colleague, Andreas Prokoph, we then submitted a first version for publication. However, we actually ended up putting the paper aside for almost a decade because of two nagging inconsistencies.

First, analysis of the best database of the kinematics of nearby stars, that of the Hipparcos satellite, pointed to a low density at the galactic plane, which in turn implied a longer period for the plane crossings, around once every 40 million years. Second, it was widely accepted in the cosmic ray community that cosmic rays should be diffusing around the galactic disk in a halo that is much larger than the stellar disk itself. This would imply that the 300 light years that the solar system ventures away from the galactic plane could not explain the 1 to 2°C variations implied for the geochemical record. Without a way to reconcile these, there was not much we could do. Perhaps the 32 million years was just a random artifact.

As time progressed, however, the improved geochemical record only showed that the 32-million-year signal became more prominent. In fact, fifteen cycles could now be clearly seen in the data. But something else also happened. My colleagues and I began to systematically study cosmic ray diffusion in the Milky Way while relaxing the standard assumption that everyone had made until then—that the sources are distributed symmetrically around the galaxy. To our surprise, it did much more than just explain the meteoritic measurements of a variable cosmic ray flux. It provided an explanation to the so-called Pamela anomaly, a cosmic ray positron excess that was argued by many to be the telltale signature of dark matter decay. It also explained the behavior of secondary cosmic rays produced along the way. But in order for the results to be consistent with the range of observations, the cosmic ray diffusion model had to include a smaller halo, one that more closely resembles the disk. In such a halo, the vertical oscillation of the solar system should have left an imprint in the geochemical record not unlike the one detected.

Thus, armed with the smaller halo and a more prominent paleoclimate signal, we decided to clear the dust off the old paper. The first surprise came when studying the up-to-date data. It revealed that the 32-million-year signal also has a secondary frequency modulation; that is, the oscillation period is alternately shorter or longer. This modulation has a period and phase corresponding to the radial oscillations that the solar system exhibits while revolving around the galaxy. When it is closer to the galactic center, the higher density at the galactic plane forces it to oscillate faster, while when it is far from the center, the density is lower and the oscillation period is longer.

The second surprise came when studying the stellar kinematics from the astrometric data. We found that the previous analysis, which appeared to have been inconsistent, relied on the assumption that the stars are more kinematically relaxed than they are. As a consequence, there was a large unaccounted-for systematic error—without it there was no real inconsistency. It took almost a decade, but things finally fell into place.

The results have two particularly interesting implications. First, they bring yet another link between the galactic environment and the terrestrial climate. Although there is no direct evidence that cosmic rays are the actual link on the 32-million-year time scale, as far as we know, they are the only link that can explain these observations. This in turn strengthens the idea that cosmic ray variations through solar activity affect the climate. In this picture, solar activity increase is responsible for about half of the twentieth-century global warming through a reduction of the cosmic ray flux, leaving less to be explained by anthropogenic activity. Also, in this picture, climate sensitivity is on the low side (perhaps 1 to 1.5°C increase per CO2 doubling, compared with the 1.5 to 4.5°C range advocated by the IPCC), implying that the future is not as dire as often prophesied.

The second interesting implication is the actual value of the 32-million-year oscillation. The relatively short period indicates that there is more mass in the galactic plane than accounted for in stars and interstellar gas, leaving the remainder as dark matter. However, this amount of dark matter is more than would be expected if it were distributed sparsely in a puffed-up halo as is generally expected. In other words, this excess mass requires at least some of the dark matter to condense into the disk. If correct, it will close a circle that started in the 1960s when Edward Hill and Jan Oort suggested, based on kinematic evidence, that there is more matter at the plane than observed. This inconsistency and indirect evidence for dark matter was also advocated by John Bahcall, who for many years was a Faculty member here at the IAS.

It should be noted that the idea that cosmic rays affect the climate is by no means generally accepted. The link is contentious and it has attracted significant opponents over the years because of its ramifications to our understanding of recent and future climate change. For it to be finally accepted, one has to understand all the microphysics and chemistry associated with it. For this reason, we are now carrying out a lab experiment to pinpoint the mechanism responsible for linking atmospheric ions and cloud condensation nuclei. This should solidify a complete theory to explain the empirical evidence.

As for the existence of more dark matter in the galactic plane than naively expected, we will not have to wait long for it to be corroborated (or refuted). The GAIA astrometric satellite mapping the kinematics of stars to unprecedented accuracy will allow for a much better measurement of the density at the plane. The first release of data is expected to be in 2016, just around the corner.

The truth about climate change

For those of you who are interested in the truth about the state of climate change science, here is a link to a 22 minute talk by Nir Shaviv.

This talk is EXCELLENT.

Professor Nir Joseph Shaviv is an Israeli-American physics professor. He is a professor at the Racah Institute of Physics of the Hebrew University of Jerusalem, of which he is now the chairman.

Shaviv started taking courses at the Israel Institute of Technology in Haifa at age 13. He graduated with a BA in physics in 1990, finishing best in his class. During his military service (1990–93) he continued his studies and co-authored his first papers in astrophysics. He received a Master of Science in physics in 1994 and a doctorate during 1994–96. During 1996–99 he was a Lee DuBridge Prize Fellow at Caltech's TAPIR (Theoretical Astrophysics) group. During 1999–2001 he held a postdoctoral position at the Canadian Institute for Theoretical Astrophysics. In 2001–06 he was a senior lecturer at the Racah Institute of Physics at the Hebrew University of Jerusalem; in 2006–2012 he was an associate professor, and he has been a full professor since 2012. Between 2008 and 2011 he was the head of the faculty union of the Hebrew University, and he served as the chairman of the coordinating council of faculty unions between 2010 and 2014. In 2014 he became a member of the Institute for Advanced Study in Princeton, and chairman of the Racah Institute of Physics in 2015.

I have provided NS's background so that you will be able to differentiate him from the ninnies you usually hear from about climate change.

Thursday, August 15, 2019

Red Flag laws are an unconstitutional knee-jerk reaction that reduce everyone’s freedom.

Don't take my word for it - here is what Judge Andrew Napolitano thinks.
-----------------------------------------------
When tragedy strikes, as it did in two mass killings earlier this month, there is always the urge to pressure the government to do something. Governments are animated by the belief that doing something — any demonstrable overt behavior — will show that they are in control. I understand the natural fears that good folks have that an El Paso or a Dayton episode might happen again, but doing something for the sake of appearance can be dangerous to personal liberty.

When the Constitution was written, the idea of owning arms and keeping them in the home was widespread. The colonists had just defeated the armies of King George III. The colonial weapon of choice was the Kentucky long rifle, while British soldiers used their army-issued Brown Bessies. Each rifle had its advantages, but the Kentucky (it was actually a German design, perfected and manufactured in Pennsylvania) was able to strike a British soldier at 200 yards, a startlingly long distance at the time. The Bessies were good for only about 80 yards.

Putting aside the advantages we had in the passionate defense of freedom and homeland, to say nothing of superior leadership, it doesn't take any advanced understanding of mathematics or ballistics to appreciate why we won the Revolution.

It is a matter of historical fact that the colonists won the war largely by superior firepower.

Six years after the war was over, delegates met in Philadelphia in secret and drafted what was to become the Constitution. The document, largely written in James Madison's hand, was then submitted to Congress and to the states, which began the process of ratification.

By then, Americans had already formed two basic political parties. The Federalists wanted a muscular central government and the Anti-Federalists wanted a loose confederation of states. Yet the memory of a Parliament that behaved as if it could write any law, tax any event and impair any liberty, coupled with the fear that the new government here might drift toward tyranny, gave birth to the first 10 amendments to the Constitution — the Bill of Rights.

The debate over the Bill of Rights was not about rights; that debate had been resolved in 1776 when the Declaration of Independence declared our basic human rights to be inalienable. The Bill of Rights debates were about whether the federal government needed restraints imposed upon it in the Constitution itself.

The Federalists thought the Bill of Rights was superfluous because they argued that no American government would knowingly restrict freedom. The Anti-Federalists thought constitutional restraints were vital to the preservation of personal liberty because no government can be trusted to preserve personal liberty.

Second among the personal liberties preserved in the Bill of Rights from impairment by the government was the right to self-defense. Thomas Jefferson called that the right to self-preservation.

Fast-forward to today, and we see the widespread and decidedly un-American reaction to the tragedies of El Paso, Texas, and Dayton, Ohio. Even though both mass murders were animated by hatred and planned by madness, because both were carried out using weapons that look like those issued by the military, Democrats have called for the outright confiscation of these weapons.

Where is the constitutional authority for that? In a word: nowhere.

The government's job is to preserve personal liberty. Does it do its job when it weakens personal liberty instead? Stated differently, how does confiscating weapons from the law-abiding conceivably reduce madmen's access to weapons? When did madmen begin obeying gun laws?

These arguments against confiscation have largely resonated with Republicans. Yet — because they feel they must do something — they have fallen for the concept of limited confiscation, known by the euphemism of "red flag" laws.

The concept of a "red flag" law — which permits the confiscation of lawfully owned weapons from a person because of what the person might do — violates both the presumption of innocence and the due process requirement of proof of criminal behavior before liberty can be infringed.

The presumption of innocence puts the burden for proving a case on the government. Because the case to be proven — might the gun owner be dangerous? — if proven, will result in the loss of a fundamental liberty, the presumption of innocence also mandates that the case be proven beyond a reasonable doubt.

The Republican proposal lowers the standard of proof to a preponderance of the evidence — a "more likely than not" standard. That was done because it is impossible to prove beyond a reasonable doubt that an event might happen. This is exactly why the "might happen" standard is unconstitutional and alien to our jurisprudence.

In 2008, Justice Antonin Scalia wrote for the Supreme Court that the right to keep and bear arms in the home is an individual pre-political right. Due process demands that this level of right — we are not talking about the privilege of driving a car on a government street — can only be taken away after a jury conviction or a guilty plea to a felony.

The "might happen" standard of "red flag" laws violates this basic principle. The same Supreme Court case also reflects the Kentucky long gun lesson. The people are entitled to own and possess the same arms as the government; for the same reason as the colonists did — to fight off tyrants should they seize liberty or property.

If the government can impair Second Amendment-protected liberties on the basis of what a person might do, as opposed to what a person actually did do, to show that it is doing something in response to a public clamor, then no liberty in America is safe.

Which liberty will the government infringe upon next?

China vs. Trump

While the following is somewhat simplified and ignores some relevant consequences, it provides useful perspective.

China has made an unforced error that makes it possible for the US to gain at China’s expense. Trump can win if he plays it right.

First, Trump put tariffs on US imports from China. Downward sloping demand curves and upward sloping supply curves imply the following (a toy supply-and-demand sketch follows this list):
  •  Prices to US consumers of Chinese products rise, but not nearly by the amount of the percentage tariff.
  •  US imports of Chinese products decline.
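
Below is a minimal Python sketch of why, with a downward-sloping demand curve and an upward-sloping supply curve, the consumer price rises by less than the tariff percentage and the import volume falls. The linear curves and the 25% tariff rate are illustrative assumptions, not estimates of actual elasticities.

```python
# Toy linear supply/demand illustration of incomplete tariff pass-through.
# All numbers are illustrative.

def equilibrium(tariff):
    """Return (consumer_price, quantity) for linear demand and supply with an
    ad valorem tariff. Demand: Q = 100 - 1.0*Pc; Supply: Q = 0.5*Pr,
    where Pc is the consumer price and Pr = Pc/(1+tariff) is what the
    exporter receives."""
    # Solve 100 - Pc = 0.5 * Pc / (1 + tariff) for Pc.
    pc = 100.0 / (1.0 + 0.5 / (1.0 + tariff))
    q = 100.0 - pc
    return pc, q

pc0, q0 = equilibrium(0.0)
pc1, q1 = equilibrium(0.25)
print(f"no tariff:  consumer price {pc0:.1f}, quantity {q0:.1f}")
print(f"25% tariff: consumer price {pc1:.1f}, quantity {q1:.1f}")
print(f"price rise {100*(pc1/pc0 - 1):.1f}% vs. a 25% tariff; imports fall {100*(1 - q1/q0):.1f}%")
```
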
Suppose the US and China do nothing more. Then over time:
  • Other countries with production costs almost as low as China’s would sell to US consumers at prices only slightly above the US pre-tariff prices of Chinese products.
  • Assuming the other countries’ production rates did not increase, Chinese manufacturers would sell their products to the other countries’ consumers to make up the difference.
The result, over time, would be:
  • US imports of Chinese products would decline substantially.
  • US imports of these products from other countries would rise substantially.
  • US consumer prices of the products involved probably would decline and approach their pre-tariff levels.
  • Tariffs collected by the US would decline substantially.
  • The distribution of production of the products involved across the various countries would not change much.
  • Neither the US nor China would lose or gain much.
But China responded by devaluing its currency relative to the US dollar. Assume the devaluation percentage is the same as the US tariff percentage. Then China has given a free lunch to the US. To see this, consider the simplistic case where a US tariff increases the US consumer price by the full percentage tariff and imports of Chinese products are unaffected. A numerical sketch of this case follows the two lists below.

Before China devalues its currency:
  • The price of Chinese products to US consumers has risen by the tariff percentage.
  • US consumers are paying the tariff to the US government and the pre-tariff price to China.
  • The US government is collecting a substantial tariff.
Next:
  • China devalues its currency with respect to the dollar by the tariff percentage.
  • China’s pre-tariff price to US consumers drops by the tariff percentage.
  • US consumers pay the tariff to the US government and the new lower pre-tariff price to China.
  • US consumers' net price drops to the original pre-tariff price.
  • Tariffs collected by the US remain substantial.
  • China has subsidized the US government by the amount of the tariffs collected.
  • US consumer prices for Chinese products have not changed.
  • The US is better off and China is worse off.
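
To make the bullet points concrete, here is a toy Python calculation under the post's simplistic assumptions: a $100 pre-tariff price, a 25% tariff fully passed on to the US consumer, an unchanged import volume, and a devaluation sized so that China's dollar price falls enough to return the tariff-inclusive consumer price to its original level (approximately the tariff percentage). The specific dollar figures are illustrative, not data.

```python
# Toy illustration of the tariff-plus-devaluation case described above.
# Assumptions (all illustrative): pre-tariff price $100, 25% tariff fully
# passed on to the US consumer, import volume unchanged, and a devaluation
# sized so China's dollar price falls by roughly the tariff percentage.

price0 = 100.0   # pre-tariff dollar price of the Chinese product
tariff = 0.25    # tariff rate (25%)

# Before devaluation: the consumer pays the price plus the tariff.
consumer_before = price0 * (1 + tariff)
tariff_rev_before = price0 * tariff
china_receives_before = price0

# After devaluation: China's dollar price drops so that the tariff-inclusive
# consumer price returns to the original pre-tariff level.
price_devalued = price0 / (1 + tariff)
consumer_after = price_devalued * (1 + tariff)
tariff_rev_after = price_devalued * tariff
china_receives_after = price_devalued

print(f"Before devaluation: consumer pays ${consumer_before:.2f}, "
      f"US tariff revenue ${tariff_rev_before:.2f}, China receives ${china_receives_before:.2f}")
print(f"After devaluation:  consumer pays ${consumer_after:.2f}, "
      f"US tariff revenue ${tariff_rev_after:.2f}, China receives ${china_receives_after:.2f}")
# Result: the consumer price is back to about $100, the US still collects a
# substantial tariff, and that tariff is effectively paid out of China's
# reduced dollar receipts.
```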

Monday, August 12, 2019

Aviation Magic

Consider an aircraft flying a clockwise circle at 100 knots airspeed at a constant altitude within an air mass that has a wind speed relative to the ground of 25 knots from 270 degrees. The pilot observes, relative to the air, that nothing changes except the aircraft’s direction of travel.

An object’s kinetic energy is the product of ½ its mass and its squared speed. Kinetic energy is proportional to squared speed.

From the pilot’s perspective, the aircraft’s speed is constant at 100 knots, hence its kinetic energy does not change and is proportional to 100*100 = 10,000.

From a ground observer’s perspective, the aircraft’s speed is 125 knots when its heading is 90 degrees and 75 knots when its heading is 270 degrees. He figures the aircraft’s kinetic energy is proportional to 125*125 = 15,625 on a heading of 90 degrees and 75*75 = 5,625 on a heading of 270 degrees. According to him, the aircraft’s kinetic energy drops by (5625-15625)/15625 = 64% during the turn from 90 degrees to 270 degrees. 64% of the aircraft’s kinetic energy has vanished – where did it go? The ground observer also sees the aircraft’s kinetic energy increase by (15625-5625)/5625 = 178% during the turn from 270 degrees to 90 degrees. Where did this kinetic energy come from?

For every full circle of the aircraft from 90 degrees to 270 degrees and then from 270 degrees to 90 degrees, the ground observer sees the aircraft’s percentage kinetic energy change by -64% followed by +178%. This is an average change in the aircraft’s kinetic energy of (-64+178)/2 = +57%. This corresponds to a net gain of 57% in the aircraft’s squared speed, e.g., from 100*100 to 100*100+0.57*100*100 = 1.57*100*100 = (1.25*100)*(1.25*100), or a gain of 25% in its speed.[1] For example, from 100 knots to 125 knots on the first circle. Evidently, all it takes to achieve high ground speeds when there is a wind is a few circles before setting out on your desired heading.

[1] A gain of 57% is an increase by a factor of (1+0.57) = 1.57. The square root of 1.57 is 1.25.
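
Here is a short Python sketch that reproduces the ground-observer arithmetic above: the ground speed at headings 090 and 270 for a 100-knot airspeed with a 25-knot wind from 270, the corresponding squared-speed (kinetic-energy) figures, and the percentage changes quoted in the post. It simply restates the post's numbers; nothing beyond the stated scenario is assumed.

```python
import math

# Reproduce the ground-observer arithmetic from the post.
# Airspeed 100 kt; wind 25 kt FROM 270 degrees (i.e., blowing toward 090).

airspeed = 100.0
wind_speed = 25.0
wind_from_deg = 270.0
wind_toward_deg = (wind_from_deg + 180.0) % 360.0

def ground_speed(heading_deg):
    """Magnitude of (air velocity along the heading) + (wind velocity)."""
    hx = math.sin(math.radians(heading_deg)) * airspeed
    hy = math.cos(math.radians(heading_deg)) * airspeed
    wx = math.sin(math.radians(wind_toward_deg)) * wind_speed
    wy = math.cos(math.radians(wind_toward_deg)) * wind_speed
    return math.hypot(hx + wx, hy + wy)

v_east = ground_speed(90.0)    # downwind leg: 125 kt
v_west = ground_speed(270.0)   # upwind leg: 75 kt

# Kinetic energy is proportional to squared speed.
ke_east, ke_west = v_east**2, v_west**2
drop = (ke_west - ke_east) / ke_east * 100.0   # 090 -> 270: about -64%
gain = (ke_east - ke_west) / ke_west * 100.0   # 270 -> 090: about +178%

print(f"ground speed east {v_east:.0f} kt, west {v_west:.0f} kt")
print(f"squared speeds: {ke_east:.0f} and {ke_west:.0f}")
print(f"KE change 090->270: {drop:.0f}%   KE change 270->090: {gain:.0f}%")
print(f"average of the two percentage changes: {(drop + gain) / 2:.0f}%")
```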

Tuesday, August 06, 2019

How does China’s stopping imports of US farm products affect US farmers?

If you believe the media and politicians, US farmers lose a dollar of sales for every dollar less of US farm exports to China.

If China reduces its imports of US farm products, the most likely consequence is that it will import more farm products from other countries to compensate. But those countries' increased exports to China will leave a gap in their other markets, creating a new market there for US farm products.

The most likely impact, over time, of China reducing its imports of US farm products by a dollar is a loss of a dollar of US farm sales to China and an increase of about a dollar of US farm sales to other countries.
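
Here is a toy Python sketch of this rerouting argument, under the post's implicit assumptions that total world demand and each exporter's total supply are unchanged, so trade flows simply shift around. The country names and dollar amounts are illustrative, not data.

```python
# Toy illustration of trade rerouting under fixed total supply and demand.
# All names and amounts are illustrative, not data.

# Initial annual purchases of farm products (billions of dollars).
purchases = {
    "China":  {"US": 20.0, "Brazil": 30.0},
    "Others": {"US": 50.0, "Brazil": 10.0},
}

shift = 10.0  # China buys $10B less from the US ...

purchases["China"]["US"] -= shift
purchases["China"]["Brazil"] += shift          # ... and $10B more from Brazil.
# Brazil's total exports are unchanged, so its other customers lose $10B of
# Brazilian supply and buy from the US instead.
purchases["Others"]["Brazil"] -= shift
purchases["Others"]["US"] += shift

us_sales = sum(buyer["US"] for buyer in purchases.values())
print(f"US farm exports after rerouting: ${us_sales:.1f}B (unchanged in total)")
```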

If the media and politicians can't get something as simple as this right, imagine how right they are likely to be about their assessment of the implications of their economic proposals.