Friday, March 29, 2019

Judith Curry puts a key climate forecast in perspective

Judith Curry has been Professor and Chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology.

Those who think climate change as espoused by the alarmists is "settled science" are wrong.
Here is JC's blog entry concerning the oft-quoted RCP8.5 scenario and the worst-case climate outcomes associated with it.

Note the ending paragraph:

"Based on this evidence, Ritchie and Dowlatabadi (2017) conclude that RCP8.5 should not be used as a benchmark for future scientific research or policy studies. Nevertheless, the RCP8.5 family of scenarios continues to be widely used, and features prominently in climate change assessments (e.g. CSSR, 2017)."
------------------------------------------------
Most worst-case climate outcomes are associated with climate model simulations that are driven by the RCP8.5 representative concentration pathway (or equivalent scenarios in terms of radiative forcing). No attempt has been made to assign probabilities or likelihoods to the various emissions/concentration pathways (e.g. van Vuuren et al. 2011), based on the argument that the pathways are related to future policy decisions and technological possibilities that are considered to be currently unknown.

The RCP8.5 scenario was designed to be a baseline scenario that assumes no greenhouse gas mitigation and no impacts of climate change on society. This scenario family targets a radiative forcing of 8.5 W m-2 from anthropogenic drivers by 2100, which is nominally associated with an atmospheric CO2 concentration of 936 ppm (Riahi et al. 2007). Since the scenario outcome is already specified (8.5 W m-2), the salient issue is whether plausible storylines can be formulated to produce the specified outcome associated with RCP8.5.

A number of different pathways can be formulated to reach RCP8.5, using different combinations of economic, technological, demographic, policy, and institutional futures. These scenarios generally include very high population growth, very high energy intensity of the economy, low technology development, and a very high level of coal in the energy mix. Van Vuuren et al. (2011) report that RCP8.5 leads to a forcing level near the 90th percentile for the baseline scenarios, but a literature review at that time was still able to identify around 40 storylines with a similar forcing level.

Storylines for the RCP8.5 scenario and its equivalents have been revised with time as our background knowledge changes. To account for lower estimates of future world population growth and much lower outlooks for emissions of non-CO2 gases, more CO2 must be released to the atmosphere to reach 8.5 W m-2 by 2100 (Riahi et al., 2017). For the forthcoming IPCC AR6, the comparable SSP5-8.5 scenario is associated with an atmospheric CO2 concentration of almost 1100 ppm by 2100 (O’Neill et al. 2016), which is a substantial increase relative to the 936 ppm reported by Riahi et al. (2007).
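As a rough consistency check on these concentrations, here is a minimal sketch using the standard simplified expression for CO2 radiative forcing, dF = 5.35 ln(C/C0) (Myhre et al. 1998). The formula, the pre-industrial baseline of 278 ppm, and the Python framing are assumptions added for illustration; they are not part of Curry's text.

import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    # Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2.
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (936.0, 1100.0):
    print(f"{c:.0f} ppm -> {co2_forcing(c):.1f} W/m^2 from CO2 alone")
# ~6.5 W/m^2 at 936 ppm and ~7.4 W/m^2 at ~1100 ppm; the remainder of the
# 8.5 W/m^2 target comes from non-CO2 gases and other anthropogenic drivers.

CO2 alone thus supplies only part of the 8.5 W m-2 target, which is why lower outlooks for the non-CO2 gases push the required CO2 concentration upward, from 936 ppm toward roughly 1100 ppm.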

As summarized by O’Neill et al. (2016) and Kriegler et al. (2017), the SSP5-8.5 baseline scenarios exhibit rapid re-carbonization, with very high levels of fossil fuel use (particularly coal). The plausibility of the RCP8.5-SSP5 family of scenarios is increasingly being questioned. Ritchie and Dowlatabadi (2018) challenge the bullish expectations for coal in the SSP5-8.5 scenarios, which are counter to recent global energy outlooks. They argue that the ‘return to coal’ scenarios exceed today’s knowledge of conventional reserves. Wang et al. (2017) have also argued against the plausibility of the existence of extensive reserves of coal and other easily-recoverable fossil fuels to support such a scenario.

Most importantly, Riahi et al. (2017) found that only one single baseline scenario of the full set (SSP5) reaches radiative forcing levels as high as the one from RCP8.5 (compared with 40 cited by van Vuuren et al. 2011). This finding suggests that 8.5 W/m2 can only emerge under a very narrow range of circumstances. Ritchie and Dowlatabadi (2018) note that further research is needed to determine if plausible high emission reference cases consistent with RCP8.5 could be developed with storylines that do not lead to re-carbonization.

Given the socio-economic nature of most of the assumptions entering into the SSP-RCP storylines, it is difficult to argue that the SSP5-RCP8.5 scenarios are impossible. However, numerous issues have been raised about the plausibility of this scenario family. Given the implausibility of re-carbonization scenarios, current fertility (e.g. Samir and Lutz, 2014) and technology trends, as well as constraints on conventional coal reserves, a categorization of RCP8.5 as ‘borderline impossible’ is justified based on our current background knowledge.

Based on this evidence, Ritchie and Dowlatabadi (2017) conclude that RCP8.5 should not be used as a benchmark for future scientific research or policy studies. Nevertheless, the RCP8.5 family of scenarios continues to be widely used, and features prominently in climate change assessments (e.g. CSSR, 2017).

Tuesday, March 26, 2019

Jonathan Turley provides perspective on Obstruction

Here is Jonathan Turley's blog article on Obstruction and Mueller's assessment of it.

According to JT, Mueller abdicated his responsibility to come to a conclusion on obstruction.  No doubt Mueller's wishy-washy "opinion" on obstruction will lead to gross misrepresentation by some Democrats and the Media and by some Republicans.

I have added some comments in italics and have underlined some key points.
-------------------------------------------------
After millions of dollars and two years of investigation, the summary of the findings of special counsel Robert Mueller is out. First and foremost, Mueller found there was no established conspiracy or collusion between anyone in the Trump campaign and the Russians. Second, Mueller made a curious type of prosecutor “declination” — not declining to prosecute, as he did on collusion, but declining an opinion either way.

That’s right: After more than 2,800 subpoenas, 500 search warrants, interviews with hundreds of witnesses and almost 300 judicial orders for records and surveillance, Mueller simply offered a collective shrug from his team regarding the question of obstruction of justice.

That was understandably not sufficient for Attorney General Bill Barr and Deputy Attorney General Rod Rosenstein, who hold the quaint notion that prosecutors are supposed to reach conclusions. That is what they did in a matter of two days, finding that the evidence was clearly insufficient to establish the crime of obstruction.

Rod Rosenstein is leaving the DOJ and gives the impression that he is not a Trumper.  If he concurs, that is significant.

Special counsels are not supposed to end investigations like “The Sopranos” TV series, leaving it to the viewer to guess whether Tony Soprano survived his final scene. In reality, the answer is abundantly clear. While the evidence of obstruction was always stronger than collusion, it would be a laughable case to bring in a court of law.

The Progressive howling about obstruction is likely to continue.  You can pretty much use such howling as disqualifying behavior for being a Senator, Congressman, or journalist.

Collusion

For two years, legal analysts and politicians convinced voters there was a strong case of collusion-related crimes. The drumbeat of “bombshell” disclosures and “smoking gun” stories was disconcerting and, frankly, disgraceful. At the time of Mueller’s appointment, I wrote that there is no such crime as collusion and little chance that a collusion-related crime like conspiracy could be established. With each new “speaking indictment,” that view was reinforced.

Yet, in this age of rage, people were eager to believe experts explaining that a strong case for criminal charges was already established on collusion and they just had to “wait for Mueller” to set things right. Those of us questioning such analysis were painted as “Trumpers” and apologists, even though we criticized Trump for his conduct and comments.

The plain fact is that fostering the collusion delusion meant windfall ratings and benefits for pundits and politicians alike. The Democrats used the investigation — and the threat of impeachment — as a rallying cry to retake the House. CNN, MSNBC and others made massive profits off the echo-chamber coverage of the deepening collusion case supposedly to be made by Mueller. Indeed, when I detailed the glaring legal flaws in the collusion theory in February 2018, CNN legal analyst and former White House ethics attorney Norm Eisen assured viewers the criminal case for collusion was “devastating” and Trump was “colluding in plain sight.” Eisen recently was hired by the House Judiciary Committee to help direct its investigation of the president.

Anticipate more partisan behavior - with every attempt to distort the facts - by the Democrats.

He was not alone. Cornell Law School Vice Dean Jens David Ohlin declared Donald Trump Jr.’s emails on the infamous Trump Tower meeting were sufficient for criminal charges as “a shocking admission of a criminal conspiracy.” MSNBC legal analyst Paul Butler identified the crime as “conspiring with the U.S.’s sworn enemy to take over and subvert our democracy” and declared that “what Donald Trump Jr. is alleged to have done is a federal crime.”

Even after some of us noted that the absence of additional indictments clearly showed Mueller had not found criminal collusion, many rushed forward to keep the delusion alive. On Sunday, House Intelligence Committee chairman Adam Schiff (D-Calif.) went on CBS’s “Face the Nation” to insist there is ample evidence of collusion — even though not a single person was charged with it. He also insisted Trump might be guilty of collusion but Mueller simply could not indict him under existing Justice Department policies.

That, too, turned out to be untrue. Mueller found no evidence of either Trump or his campaign colluding with the Russians.

Schiff is an example of the kind of person that should not be in Government.

Obstruction

While Mueller can be commended for reaching a conclusion on collusion, his position on obstruction was incomplete and, frankly, irresponsible. Mueller simply says that “While this report does not conclude that the president committed a crime, it also does not exonerate him.”

What does that mean, exactly? Law is not supposed to be an impressionistic art form in which some see a criminal and others see a crank.

Yet, Barr reported that “The Special Counsel… did not draw a conclusion — one way or the other — as to whether the examined conduct constituted obstruction.”

Of course, we all knew there was evidence on both sides of this issue. As Barr notes, most of that evidence already has been publicly discussed. Yet, Mueller seems to have left the public with a lingering Sopranos-like finale of “What do you think?”

If Mueller believed Trump’s conduct constituted obstruction, he had a duty to say so. What makes his position all the more maddening is that the question does not appear a close one. As I have previously discussed in columns, it is possible to charge someone for obstructing an investigation into a nonexistent crime. While it is hard to do, Trump made his best effort with a long series of inappropriate comments and actions viewed as hostile to the investigation.

However, the question of a criminal charge comes down to motivation and state of mind. Trump’s comments on the investigation are not isolated departures from the new normal of his administration. He has ignored widespread calls for restraint in his public statements on subjects ranging from immigration to North Korea to NATO. In these areas, he has incautiously attacked cabinet members and fired a slew of officials, from Secretary of State Rex Tillerson to multiple chiefs of staff.

Mueller concluded that Trump was not hiding collusion but was taking steps that could be viewed as obstructive. That is obvious.

However, to bring a criminal case on the actual crime of obstruction would be absurd. Trump could simply argue that he viewed the collusion investigation to be a deep-state conspiracy and refused to stay quiet about it — but he never fired Mueller, destroyed evidence, or forced an early conclusion to the investigation. Indeed, even fired FBI director James Comey said Trump agreed that the investigation should be allowed to continue, to reach its own conclusions. Trump objected that Comey would not tell the public what he was telling Congress: that Trump was not a target of that investigation.

None of this moves Trump out of legal jeopardy. He just pulled free of the gravitational pull of Jupiter but is now entering the asteroid belt of multiple smaller investigations across a broader expanse. From the Southern District of New York to various congressional committees, these investigations likely will continue for much if not all of his remaining term.

It is not clear if Trump has learned from this experience. He almost “counterpunched” his way into an obstruction prosecution. It also is not clear if many voters have learned the countervailing lesson about rage and reality in analyzing alleged crimes.

For two years, “Wait for Mueller” has become an increasingly desperate mantra. Many viewed Mueller as the anti-Trump who would finally rid them of this meddlesome president.

Well, Mueller has come and Trump is likely to remain. That is the reality, and all the rage in the world is unlikely to change it.

Sunday, March 24, 2019

Climate change: Insect pseudo science from the Alarmists

Here is an article by Komonen, Halme, and Kotiaho titled "Alarmist by bad design: Strongly popularized unsubstantiated claims undermine credibility of conservation science".

The amazing thing about the Alarmist Insect paper is how wildly inadequate the authors' methodology and data were, and how large the disconnect is between what that methodology and data actually imply and the conclusions the authors drew.
--------------------------------------------
“Unless we change our ways of producing food, insects as a whole will go down the path of extinction in a few decades”. This is a verbatim conclusion of the recent paper by Sánchez-Bayo and Wyckhuys (2019): Worldwide decline of the entomofauna: A review of its drivers. There is also another slightly less sweeping but still bold conclusion: “Our work reveals dramatic rates of decline that may lead to the extinction of 40% of the world’s insect species over the next few decades”. In an interview by Damian Carrington of The Guardian, the authors explained that they are not alarmist, but that they really wanted to wake people up. If measured by the global media attention, they succeeded. A version of their conclusions hit the headlines across the planet in mainstream media such as BBC News, Al-Jazeera, ABC News and USA Today. Unfortunately, even if not intentional, the conclusions of Sánchez-Bayo and Wyckhuys (2019) became alarmist by bad design: due to methodological flaws, their conclusions are unsubstantiated.

Sánchez-Bayo and Wyckhuys (2019) set out to review and systematically assess “the changes in species richness (biodiversity) and population abundance through time” and “the likely drivers of the losses” of insects across the globe. The authors searched the online Web of Science database using the keywords [insect*] AND [declin*] AND [survey]. By including the word [declin*], there is a bias towards literature that reports declines, and the bias is not resolved by the procedure in which “additional papers were obtained from the literature references”. If you search for declines, you find declines. Searching for declines would have been appropriate had the authors only aimed to evaluate the drivers of the declines. In the same vein, the statement “almost half of the species are rapidly declining” is unsubstantiated, as there are no data about the speed of the decline. Furthermore, the data are not extensive geographically (as the authors acknowledge) or taxonomically, so the conclusions that the current proportion of insect species in decline is 41%, or that insects as a whole are going extinct, are also unsubstantiated.

Our second criticism concerns the mismatch between the study objectives and the actual studies included. The authors state “Reports that focused on individual species...were excluded” and “We selected surveys that… were surveyed intensively over periods longer than 10 years”. Why, then, did they include a single-species study on Formica aquilonia which was conducted over four years only (see Sorvari and Hakkarainen 2007)? We did not scrutinize all the reviewed studies but just happened to be familiar with this one. Because Sánchez-Bayo and Wyckhuys (2019) lumped together single-species studies and continent-wide data sets, as well as primary field studies, various reports and expert opinions like the national IUCN Red Lists, analyses and interpretations were challenging. In fact, many of the “extinctions” in the reviewed papers apparently represent losses of species from individual sites or regions, and it is not straightforward to extrapolate to the extinction of species at larger spatial scales (see also Thomas et al. 2019). The extrapolation is also challenging because the study included only cases with detected declines.

Our third criticism concerns the misuse of the IUCN Red List categories (the citation for IUCN 2009 is actually missing from the references) to assess extinction risk. At least in one case (McGuinness 2007), Sánchez-Bayo and Wyckhuys (2019) lumped together species in the categories ‘Data Deficient’ and ‘Vulnerable’. Because, by definition, there are no data for Data Deficient species with which to assess the decline, the range size or the population abundance, this means that the authors themselves designated a 30% decline (Vulnerable indicates > 30% decline) for Data Deficient species. This is not trivial, since 24% of the Vulnerable species were actually Data Deficient in McGuinness (2007). The use of the IUCN criteria is also poorly described. Did the authors solely use the number of threatened species as presented in the original articles, or did they also themselves designate declining species to different IUCN categories (not all countries follow the IUCN system)? And if the latter, did they consider the fact that the IUCN criteria assume the decline has happened over ten years or three generations, whichever is longer?
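To make the ‘Vulnerable’ threshold concrete, here is a minimal sketch: the 30% figure and the ten-year window come from the IUCN criterion cited above, but expressing it as a constant annual rate is my own illustration, not something done in either paper.

# The 'Vulnerable' criterion cited above corresponds to a >= 30% population
# reduction over 10 years (or three generations, whichever is longer).
# Expressed as a constant annual rate of decline:
threshold = 0.30   # fractional reduction over the assessment window
years = 10
annual_rate = 1 - (1 - threshold) ** (1 / years)
print(f"~{annual_rate:.1%} decline per year")   # roughly 3.5% per year

Designating a Data Deficient species as Vulnerable therefore amounts to asserting a decline of roughly 3.5% per year for a species with no population data at all.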

Putting the unsubstantiated claims about the extent of insect declines aside, there may also be a methodological complication regarding the drivers, because of the chosen indicator. The authors base their inference about the importance of a driver on the number or share of the papers in which that driver is reported to have caused the declines. The number of reports is not a reliable indicator of the importance of a driver, as it can simply reflect the interests of scientists or the ease of studying certain drivers. More reliable conclusions about the importance of different drivers would have required also reviewing the drivers in studies without declines. Vote counting, as conducted here, provides only limited, if any, information about the strength of a driver, which is what would be of interest to conservation managers. Ideally, a formal meta-analysis with effect sizes for the different drivers, and an unbiased sample of population-trend studies including positive, negative and no-effect results, would have provided a more complete picture of the declines and their relative strengths.
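To make the contrast between vote counting and an effect-size meta-analysis concrete, here is a minimal, purely hypothetical sketch; the per-study effects and variances are invented for illustration and are not taken from Sánchez-Bayo and Wyckhuys (2019) or from this critique.

import math

# Hypothetical per-study effect sizes (e.g. log response ratios) and their
# variances for one putative driver. Invented numbers, for illustration only.
effects   = [-0.40, -0.05, -0.02, 0.03, 0.10]   # negative = decline
variances = [0.160, 0.010, 0.010, 0.010, 0.010]

# Vote counting: count studies reporting a decline, ignoring magnitude and precision.
votes_decline = sum(1 for e in effects if e < 0)
print(f"vote count: {votes_decline}/{len(effects)} studies report a decline")

# Fixed-effect meta-analysis: weight each study by its precision (1/variance).
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled effect: {pooled:+.3f} (SE {se:.3f})")

In this invented example a majority of studies report a decline, yet the precision-weighted pooled effect is essentially zero; that is the kind of discrepancy an effect-size meta-analysis would expose and vote counting would hide.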

The final problematic issue with the paper is its strong language. As noted by The Guardian, the conclusions of the paper were set out in unusually forceful terms for a peer-reviewed scientific paper. The text is rich in non-scientific intensifiers such as dramatic, compelling, extensive, shocking, drastic, dreadful, devastating, and others. This language is clearly reflected in the media's direct quotes and, as the media often do, amplified further on top of the already intensifier-rich text. Exaggerated news stories generated by the media itself are bad enough, but similar exaggerations in the original scientific papers should not be acceptable. The current case has already seen corrections and withdrawals in the print media as well as on social media, and the first academic responses have been published (e.g. Thomas et al. 2019). As conservation scientists who actively popularize our field, we are concerned that such a development erodes the perceived importance of the biodiversity crisis, makes the work of conservationists harder, and undermines the credibility of conservation science.

Friday, March 22, 2019

A lesson in how to subvert the Constitution from the New York State Attorney General and the Manhattan District Attorney

Here is a column by Jonathan Turley.

Another example of the lack of ethical behavior by your top elected officials.

Justice? Humbug.
--------------------------------------------
This month, the greatest off-Broadway production should be titled “The Prosecutors,” starring Manhattan District Attorney Cyrus Vance Jr. and New York Attorney General Letitia James. As in another dramatic comedy, “The Producers,” the state case against former Donald Trump campaign manager Paul Manafort seems designed to fail, leaving its prosecutors with the convenient windfall of public support and none of the burden.

The New York state charges that Vance filed against Manafort appear to run afoul of state and federal protections against double jeopardy, or being prosecuted twice for the same underlying conduct. The timing of the charges alone seemed right out of the playbook of “The Producers” character Max Bialystock, the corrupt Broadway figure who insisted that in New York the rule is “if you got it, flaunt it, flaunt it.” Accordingly, Vance waited just minutes after the Manafort sentencing to hit him with state charges, guaranteeing the maximum exposure and credit for his effort.

The problem is that the case appears not only constitutionally flawed but ethically challenged, coming right out of the Max Bialystock School of Prosecution. I have long been one of the loudest critics of Manafort. He is a corrupt and despicable person who deserves the two sentences that could keep him in jail for the rest of his life. However, it is not his crimes but his association with President Trump that has driven the manic effort to charge him in New York. In this current age of rage over Trump, Manafort is a readily available surrogate for selective prosecution.

For more than a year, leading New York state prosecutors have openly pledged to get Manafort on some undefined crime to prevent Trump from releasing him from jail on a presidential pardon. They promised to find crimes that could be alleged in the state system, which would not at all be impacted by a presidential pardon. To do that, they only had to strip all citizens of certain rights. Former New York Attorney General Eric Schneiderman pushed the legislature to rescind a core protection against double jeopardy to allow him to charge Manafort on the same criminal conduct that he would later be sentenced for in federal court.

New York is one of those states with its own protections against such abusive and duplicative charges. When Schneiderman was forced out of office for alleged sexual assault, his cause was picked up by his successor, Barbara Underwood, who has deemed the constitutional protection a loophole that would “thwart the cause of justice rather than advance it.” You heard that right, a constitutional protection would “thwart justice” because it could be used by an unpopular individual such as Manafort.

These calls were then picked up by Vance and James, who promised to get Manafort at any cost. James actually campaigned for and was elected to the attorney general post in part on her effort to reduce constitutional protections for everyone in order to get one man. Now Vance has fulfilled his pledge and charged Manafort in New York. It is a striking contrast here that Schneiderman was allowed to walk on sexual assault charges because prosecutors determined that some punching and slapping without any consent is allowed for “sexual gratification.” Yet, Manafort was charged on essentially the same alleged fraudulent conduct as in his federal case.

When I read the complaint against Manafort, I was struck not only by the overlap but the overkill. I had never seen the Manhattan district attorney bring such a case, but I could be mistaken. After all, Vance proclaimed he had a sacred duty to protect the “integrity of our residential mortgage market.” That was news in itself. The core allegation was that Manafort lied about a condo being used as a home by his family, as opposed to a rental property. If that type of misrepresentation were truly prosecuted with vigor, New York would be a ghost town. In the land of rent controlled apartments, fraudulent practices are the norm. Indeed, Aaron Carr of the Housing Rights Initiative, a nonprofit housing watchdog group, declared recently that in New York “rent fraud is like finding rain in a rain storm.”

Given the absence of past serious criminal prosecutions, I reached out to the office of Vance with a simple request: Could he show me other cases like this where he prosecuted people like Manafort to uphold the integrity of the residential mortgage market? After repeated attempts, the office declined to respond. I then searched New York cases on Lexis Nexis, which contains all published opinions, and found only a handful of opinions on mortgage fraud in New York and nothing on point from Vance. While these are just the published opinions, there is no evidence of his focus on misrepresentation of rental properties before the Trump era.

What emerges is a picture that should trouble everyone who values blind and fair justice. Vance took the same underlying conduct from the federal cases to recharge Manafort then used the same conduct over and over to pile up 16 counts for mortgage fraud and falsifying business records. Most of the counts are built around defrauding Citizens Bank, leaving just four or five more charges pertaining to his involvement with a second bank. It is unclear whether “Lender #1” in many of the New York state charges is Citizens Bank, but it sure looks like those state charges are related to the Citizens Bank loan with “Lender #1” featured in the federal prosecution.

Manafort was convicted of defrauding the bank by securing a $3.4 million loan for his New York condo by saying that his family was living in it rather than renting it. However, James and others never succeeded in stripping the New York state constitution of the core protections related to double jeopardy. Vance charged him anyway, apparently following the Bialystock script by making a splashy premiere and satisfying voters who want to see this one individual prosecuted in the most selective and grandiose way.

Equally glaring is the absence of any prosecution of Michael Cohen. Vance does not seem concerned over the “integrity” of the markets for the long list of criminal acts committed by Cohen in New York. Indeed, the Cohen case was transferred to New York because his crimes primarily concerned that city and its markets. Yet, Cohen now represents a direct threat to Trump, so he is untouchable to prosecutors. Although Cohen received a ridiculously low sentence, neither Vance nor James has expressed any interest in bringing state charges for his fraud related to banks and taxi medallions. That would not be nearly as popular a production. Ironically, Cohen faces criminal allegations that would not be barred under double jeopardy.

The greatest danger is not that Vance will fail on constitutional grounds but that he may succeed. If Vance can single out an unpopular individual and convict him again on the same crimes, he would eviscerate the core protections in New York. Worse, Vance and James fed a public appetite for selective state prosecution that will only become insatiable if successful.

The best that can be said here is that Vance may view this as just a stunt unlikely to get to trial. However, it is a dangerous game, as shown by the Bialystock character, who was ruined when his production of “Springtime for Hitler” became a hit. It left Bialystock with a lament many in New York soon could be voicing, “How could this happen? I was so careful. I picked the wrong play, the wrong director, the wrong cast. Where did I go right?”


Monday, March 18, 2019

Pseudo Science - the Tool of the Alarmists

Here is a portion of a column by Matt Ridley.

MR is on target.  You cannot trust Alarmists.
--------------------------------------
‘The whole aim of practical politics,’ wrote H.L. Mencken, ‘is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.’ Newspapers, politicians and pressure groups have been moving smoothly for decades from one forecast apocalypse to another (nuclear power, acid rain, the ozone layer, mad cow disease, nanotechnology, genetically modified crops, the millennium bug…) without waiting to be proved right or wrong.

Increasingly, in a crowded market for alarm, it becomes necessary to make the scares up. More and more headlines about medical or environmental panics are based on published scientific papers, but ones that are little more than lies laundered into respectability with a little statistical legerdemain. Sometimes, even the exposure of the laundered lies fails to stop the scare. Dr Andrew Wakefield was struck off in 2010 after the General Medical Council found his 1998 study in the Lancet claiming a link between the MMR vaccine and autism to be fraudulent. Yet Wakefield is now a celebrity anti-vaccine activist in the United States and has left his long-suffering wife for the supermodel Elle Macpherson. Anti-vax campaigning is a lucrative business.

Meanwhile, the notion that chemicals such as bisphenol A, found in plastics, are acting as ‘endocrine disruptors’, interfering with human hormones even at very low doses, started with an outright fraudulent study that has since been retracted. Many low-quality studies on BPA have pushed this theory, but they have been torpedoed by high-quality analyses including a recent US government study called Clarity. Yet this is of course being largely ignored by the media and the activists.

So the habit of laundering lies is catching on. Three times in the past month, pseudo-science flew around the world before the scientific truth had got its boots on (as Mark Twain did not say, but Jonathan Swift almost did): in stories about insect extinction, weedkiller causing cancer, and increased flooding. The shamelessness of the apocaholics is increasingly blatant. They know that even if a story of impending doom is thoroughly debunked, the correction comes too late. The gullible media will have relayed the headline without checking, so the activists have made their fake-news hit, perhaps even raised funds on the back of it, and won.

Take the story on 10 February that ‘insects could vanish within a century’, as the Guardian’s Damian Carrington put it, echoed by the BBC. The claim is, as even several science journalists and conservationists have now reported, bunk.

The authors of the study, Francisco Sánchez-Bayo and Kris Wyckhuys, claimed to have reviewed 73 different studies to reach their conclusion that precisely 41 per cent of insect species are declining and ‘unless we change our way of producing food, insects as a whole will go down the path of extinction in a few decades’. In fact the pair had started by putting the words ‘insect’ and ‘decline’ into a database, thereby ignoring any papers finding increases in insects, or no change in numbers.

They did not check that their findings were representative enough to draw numerical conclusions from. They even misinterpreted source papers to blame declines on pesticides, when the original paper was non-committal or found contradictory results. ‘Several multivariate and correlative statistical analyses confirm that the impact of pesticides on biodiversity is larger than that of other intensive agriculture practices,’ they wrote, specifically citing a paper that actually found the opposite: that insect abundance was lower on farms where pesticide use was less.


They also relied heavily on two now famous recent papers claiming to have found fewer insects today than in the past, one in Germany and one in Puerto Rico. The first did not even compare the same locations in different years, so its conclusions are hardly reliable. The second compared samples taken in the same place in 1976 and 2012, finding fewer insects on the second occasion and blaming this on rapid warming in the region, rather than any other possible explanation, such as timing of rainfall in the two seasons. Yet it turned out that there had been no warming: the jump in temperature recorded by the local weather station was entirely caused by the thermometer having been moved to a different location in 1992. Whoops.

Of course, human activities do affect insects, but ecologists I have consulted say local populations of some species are often undergoing huge changes, and that some species regularly die out in one location and are then regenerated by migrants. This is not to be confused with species extinction. The real evidence suggests that insect species are dying out at a similar rate to mammals and birds — which means about 1 to 5 per cent per century. A problem, but not Armageddon.

Curiously, 41 per cent cropped up in another misleading story the same day, 10 February. This is the claim that exposure to glyphosate, the active ingredient in Roundup weedkiller, increases the incidence of a particular, very rare cancer, non-Hodgkin lymphoma (NHL). ‘Exposure to weed-killing products increases risk of cancer by 41 per cent,’ said the Guardian’s headline.

Once again, this paper is not a new study, but a desktop survey of other studies and its claim collapses under proper scrutiny. According to the epidemiologist Geoffrey Kabat, the paper combined one high-quality study with five poor-quality studies and chose the highest of five risk estimates reported in one of the latter to ensure it would reach statistical significance. The authors highlighted the dubious 41 per cent result, ‘which they almost certainly realised would grab headlines and inspire fear’.

The background is important here. Vast sums of money are at stake. ‘Predatort’ lawyers have been chasing glyphosate in the hope of tobacco-style payouts. Unluckily for them, however, study after study keeps finding that glyphosate does not cause cancer. The US Environmental Protection Agency, the European Food Safety Authority, the UN’s Food and Agriculture Organisation working with the World Health Organisation, the European Chemicals Agency, Health Canada and the German Federal Institute for Risk Assessment have all tried and failed to find any cancer risk in glyphosate.

The only exception is the International Agency for Research on Cancer (IARC), a rogue United Nations agency that has been taken over by environmental activists, which claimed that neat glyphosate was capable of causing cancer in animals if ingested. By the same criteria, IARC admits, coffee, tea and wine (which are indeed ingested) and working as a hairdresser are also carcinogenic; in fact, out of 1,000 substances and other risks tested, IARC has found only one to be non-carcinogenic. The IARC study also did the usual pseudo-science thing of citing some results while, as Reuters reported, discounting contradictory results from the same studies.

This is what Reuters reported:

The edits identified by Reuters occurred in the chapter of IARC’s review focusing on animal studies. This chapter was important in IARC’s assessment of glyphosate, since it was in animal studies that IARC decided there was “sufficient” evidence of carcinogenicity. 

One effect of the changes to the draft, reviewed by Reuters in a comparison with the published report, was the removal of multiple scientists' conclusions that their studies had found no link between glyphosate and cancer in laboratory animals.

Following that claim, a study of 45,000 people actually exposed to glyphosate, by the Agricultural Health Study, again found no association between glyphosate and any cancer, including NHL. Nobody outside the predatort industry takes the IARC finding seriously.

Nonetheless, the study had a beneficial effect for lawyers. Last year, citing the IARC study but not its debunking, a jury in California awarded a $289 million jackpot to the family of a school groundskeeper who died of NHL. Meanwhile, an investigation by Reuters found that the conclusion of the IARC study had been altered shortly before the report’s release and that the specialist consulted, Christopher Portier, started working with law firms suing Monsanto soon afterwards. Another case is due to start shortly, this time in federal court. More than 9,300 people with various cancers have filed similar cases.

Reuters again:

Documents seen by Reuters show how a draft of a key section of the International Agency for Research on Cancer's (IARC) assessment of glyphosate - a report that has prompted international disputes and multi-million-dollar lawsuits - underwent significant changes and deletions before the report was finalised and made public.
One effect of the changes to the draft, reviewed by Reuters in a comparison with the published report, was the removal of multiple scientists' conclusions that their studies had found no link between glyphosate and cancer in laboratory animals.

In one instance, a fresh statistical analysis was inserted - effectively reversing the original finding of a study being reviewed by IARC.

In another, a sentence in the draft referenced a pathology report ordered by experts at the U.S. Environmental Protection Agency. It noted the report “firmly” and “unanimously” agreed that the “compound” – glyphosate – had not caused abnormal growths in the mice being studied. In the final published IARC monograph, this sentence had been deleted.

See also David Zaruk:

During the same week that IARC had published its opinion on glyphosate’s carcinogenicity, Christopher Portier signed a lucrative contract to be a litigation consultant for two law firms preparing to sue Monsanto on behalf of glyphosate cancer victims.

This contract has remunerated Portier for at least 160,000 USD (until June, 2017) for initial preparatory work as a litigation consultant (plus travel).

This contract contained a confidentiality clause restricting Portier from transparently declaring this employment to others he comes in contact with. Further to that, Portier has even stated that he has not been paid a cent for work he’s done on glyphosate.

It became clear, in emails provided in the deposition, that Portier’s role in the ban-glyphosate movement was crucial. He promised in an email to IARC that he would protect their reputation, the monograph conclusion and handle the BfR and EFSA rejections of IARC’s findings.

Portier admitted in the deposition that prior to the IARC glyphosate meetings, where he served as the only external expert adviser, he had never worked and had no experience with glyphosate.


Talking of payouts, the third inexactitude to fly around the world two days later was the claim by the left-leaning political thinktank the Institute for Public Policy Research (IPPR) that, ‘Since 2005, the number of floods across the world has increased by 15 times’, which was directly quoted by the BBC’s Roger Harrabin, in the usual headline-grabbing story about how we are all doomed.

This was (to borrow a phrase from Sir Nicholas Soames) ocean-going, weapons-grade, château-bottled nonsense. There has been no increase in floods since 2005, let alone a 15-fold one. When challenged, IPPR said it was a ‘typo’, and that it meant since 1950. Well, that is nonsense, too. The Intergovernmental Panel on Climate Change regularly reviews data on floods and says it can find no trend: ‘In summary, there continues to be a lack of evidence and thus low confidence regarding the sign of trend in the magnitude and/or frequency of floods on a global scale.’

Fortunately, the IPPR gave a source for its absurd claim. This was ‘GMO Analysis of EM-DAT 2018’. Paul Homewood, a private citizen who regularly catches climate alarmists out, explained in a blog what this meant. EM-DAT is a database of disasters that is wholly worthless as a source for such a claim, as it admits, because it includes very small disasters, such as traffic accidents, but only for recent years. There is no evidence here of a trend at all.

GMO is a big Boston asset management firm, whose founder and owner, Jeremy Grantham, just happens — you guessed it — to fund the Institute for Public Policy Research.

In the old days, investigative journalists would be all over this: a billionaire funding a pressure group that issues a press release that quotes the billionaire making a Horlicks of science but that nonetheless gets amplified, helping the pressure group attract more funds. But journalists’ budgets have been cut, and it’s easier to rewrite press releases.

Some people are willing to forgive exaggeration and error if it is in a good cause, like increasing concern about plastics or climate change. This is a risky strategy because it encourages a Trump-like refusal to believe evidence even when that evidence is good. If we use up our energies panicking about phantom hobgoblins, we might have none left for the real scares: the over-fishing of the oceans, the effect of invasive alien species on island wildlife and the fact that polychlorinated biphenyls (PCBs), once used in the electrical industry but long since banned, still exist in high enough concentrations in British waters to prevent killer whales from breeding.

The Sun's Role in Climate Change

Here is a link to Henrik Svensmark's recent paper "Force Majeure - The Sun's Role in Climate Change".

Those who are fond of using the phrase "climate change denier" have attacked HS in the past and probably will now.  I think HS's work likely has much to do with the truth.

It seems likely to me that the "official" view of climate change does not reflect the limitations of both the popular models and data.  Do you think the models are "complete"?  Are they statistical or mathematical (e.g., differential equation based)?  If based on differential equations, are they linear or non-linear?  If non-linear, which is likely to be closer to the truth, how come you don't hear about "strange attractors"?

Here are some excerpts from HS's paper.
-------------------------------------------
EXECUTIVE SUMMARY
Over the last twenty years there has been good progress in understanding the solar influence on climate. In particular, many scientific studies have shown that changes in solar activity have impacted climate over the whole Holocene period (approximately the last 10,000 years). A well-known example is the existence of high solar activity during the Medieval Warm Period, around the year 1000 AD, and the subsequent low levels of solar activity during the cold period, now called The Little Ice Age (1300–1850 AD). An important scientific task has been to quantify the solar impact on climate, and it has been found that over the eleven-year solar cycle the energy that enters the Earth’s system is of the order of 1.0–1.5 W/m2. This is nearly an order of magnitude larger than what would be expected from solar irradiance alone, and suggests that solar activity is getting amplified by some atmospheric process.

Three main theories have been put forward to explain the solar–climate link:

• solar ultraviolet changes
• the atmospheric-electric-field effect on cloud cover
• cloud changes produced by solar-modulated galactic cosmic rays (energetic particles originating from interstellar space and ending in our atmosphere).

Significant effort has gone into understanding possible mechanisms, and at the moment cosmic ray modulation of Earth’s cloud cover seems rather promising in explaining the size of solar impact. This theory suggests that solar activity has had a significant impact on climate during the Holocene period. This understanding is in contrast to the official consensus from the Intergovernmental Panel on Climate Change, where it is estimated that the change in solar radiative forcing between 1750 and 2011 was around 0.05 W/m2, a value which is entirely negligible relative to the effect of greenhouse gases, estimated at around 2.3 W/m2. However, the existence of an atmospheric solar-amplification mechanism would have implications for the estimated climate sensitivity to carbon dioxide, suggesting that it is much lower than currently thought. In summary, the impact of solar activity on climate is much larger than the official consensus suggests. This is therefore an important scientific question that needs to be addressed by the scientific community.

INTRODUCTION
The Sun provides nearly all the energy responsible for the dynamics of the atmosphere and oceans, and ultimately for life on Earth. However, when it comes to the observed changes in our terrestrial climate, the role of the Sun is not uniformly agreed upon. Nonetheless, in climate science an official consensus has formed suggesting that the effect of solar activity is limited to small variations in total solar irradiance (TSI), with insignificant consequences for climate. This is exemplified in the reports of Working Group I of the Intergovernmental Panel on Climate Change (IPCC), who estimate the radiative forcing on climate from solar activity between 1750 and 2011 at around 0.05 W/m2. This value is entirely negligible compared to changes in anthropogenic greenhouse gases, whose forcing is estimated at around 2.3 W/m2.

The aim of this report is to give a review of research related to the impact of solar activity on climate. Contrary to the consensus described above, there is abundant empirical evidence that the Sun has had a large influence on climate over the Holocene period, with temperature changes between periods of low and high solar activity of the order of 1–2 K. Such large temperature variations are inconsistent with the consensus and herald a real and solid connection between solar activity and Earth’s climate. The question is: what is the mechanism that is responsible for the solar–climate link? A telling result is given by the energy that enters the oceans over the 11-year solar cycle, which is almost an order of magnitude larger (∼1–1.5 W/m2) than the corresponding TSI variation (∼0.2 W/m2). Solar activity is somehow being amplified relative to the TSI variations by a mechanism other than TSI.

There are other possible drivers of these changes: solar activity also manifests itself in components other than TSI. These include large relative changes in its magnetic field, the strength of the solar wind (the stream of charged particles that carries the magnetic field), modulation of cosmic ray ionisation in the Earth’s atmosphere, and the amount of ultraviolet (UV) radiation, to name a few. All of these are part of what is referred to as ‘solar activity’, and all have been suggested to influence climate as well. In particular, it will be shown that a mechanism has been identified that can explain the observed changes in climate, and which is supported by theory, experiment and observation.

This report is not meant to be an exhaustive representation of all the published papers related to a solar influence on Earth’s climate, but aims to give a clear presentation of the current knowledge on the link between solar activity and climate. A comprehensive review of the Sun’s impact on climate was published previously, but is now eight years old; important progress on the mechanism linking solar activity and climate has been made since. Technical material will not be included in the report, but rather reference will be made to the literature in the field so that the interested reader can find further information.
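As a rough check on the numbers quoted above, here is a minimal sketch that converts the ~0.1% solar-cycle TSI variation into a globally averaged radiative forcing. The mean TSI of about 1361 W/m2 and planetary albedo of about 0.3 are standard ballpark values that I am assuming; they are not taken from the report.

TSI = 1361.0            # mean total solar irradiance, W/m^2 (assumed)
albedo = 0.3            # planetary albedo (assumed)
cycle_fraction = 0.001  # ~0.1% TSI variation over the 11-year cycle

delta_tsi = TSI * cycle_fraction             # ~1.4 W/m^2 at the top of the atmosphere
forcing = delta_tsi * (1 - albedo) / 4.0     # spread over the sphere, minus reflected fraction
print(f"~{forcing:.2f} W/m^2 global-mean forcing")   # roughly 0.24 W/m^2

That ~0.2 W/m2 is the figure the report compares with the ~1–1.5 W/m2 inferred from ocean heat content, which is the basis for its argument that some amplification mechanism must be at work.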

COSMIC RAY CLOUDS MECHANISM


Another possible mechanism is changes to Earth’s cloud cover due to solar modulation of cosmic rays. In 1996, satellite observations showed that Earth’s cloud cover changed by around 2%, in phase with changes in cosmic rays, over a solar cycle. Such a variation corresponds to a change in radiative forcing of around 1 W/m2, which would be in agreement with the observed changes in energy entering the oceans (see Figure 9). The fundamental idea is that cosmic ray ionisation in the atmosphere is important for the formation and growth of small aerosols into cloud condensation nuclei (CCN), which are necessary for the formation of cloud droplets and thereby clouds. Changing the number density of CCN changes the cloud microphysics, which in turn changes both the radiative properties and the lifetime of clouds (see Figure 12).

There is now theoretical, experimental and observational evidence to support the cosmic ray–cloud link, although it should be mentioned that satellite observations of cloud changes on 11-year timescales are by no means entirely reliable due to inherent calibration problems. However, in support of the theory, the whole link from solar activity, to cosmic ray ionisation, to aerosols, to clouds has been observed in connection with Forbush decreases on timescales of a week. The cosmic ray variations in response to the stronger Forbush decreases are of similar size to the variations seen over the 11-year solar cycle and result in a change in cloud cover of approximately 2%. Cloud variations are one of the most difficult and uncertain features of the climate system, and therefore cosmic rays and their effect on clouds will add important new understanding of this area. There have been attempts to include the effect of ionisation on the nucleation of small aerosols in large numerical models, but important physical processes are missing.

Although there are uncertainties in all of the above observations, they collectively give a consistent picture, indicating an effect of ionisation on Earth’s cloud cover, which in turn can strongly influence climate and Earth’s temperature. Nonetheless, the idea of a cosmic-ray link to climate has been questioned, and can still give rise to debate. But as more data from observations and experiments are obtained, the case for the link has only become stronger. For example, if the cosmic ray–climate link is real, then any variation of the cosmic ray flux, including those which have nothing to do with solar activity, will translate into changes in the climate as well. Over geological timescales, large variations in the cosmic ray flux arise from the changing galactic environment around the solar system. A comparison between reconstructions of the cosmic ray flux and climate over these long timescales demonstrates that, over the past 500 million years, ice ages have arisen in periods when the cosmic ray flux was high, as the theory predicts. Even the solar system’s movement in and out of the galactic plane can be observed in the climate record.
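For orientation only, here is a back-of-the-envelope sketch of why a ~2% change in cloud cover is of the right order to give the ~1 W/m2 forcing quoted above. The global-mean shortwave cloud radiative effect of roughly -50 W/m2 is a commonly quoted ballpark that I am assuming, not a value from the report, and the scaling ignores cloud type, height and longwave effects.

shortwave_cloud_effect = -50.0   # W/m^2, assumed global-mean ballpark
relative_cloud_change = 0.02     # ~2% change in cloud cover over a solar cycle
forcing_change = abs(shortwave_cloud_effect) * relative_cloud_change
print(f"~{forcing_change:.1f} W/m^2")   # ~1 W/m^2, the same order as quoted in the report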

CONCLUSION
Over the last 20 years, much progress has been made in understanding the role of the Sun in the Earth’s climate. In particular, the frequent changes between states of low and high solar activity over the last 10,000 years are clearly seen in empirical climate records. Of these climate changes, the best known are the Medieval Warm Period (950–1250 AD) and the Little Ice Age (1300–1850 AD), which are associated with a high and low state of solar activity, respectively. The temperature change between the two periods is of the order of 1.0–1.5 K. This shows that solar activity has had a large impact on climate. The above statement is in direct contrast to the IPCC, which estimates the solar forcing over the 20th century as only 0.05 W/m2, which is too small to have a climatic effect. One is therefore left with the conundrum of not having an explanation for the difference in climate between the Medieval Warm Period and Little Ice Age. But this result is obtained by restricting solar activity to only minute changes in total solar irradiance.

There are other mechanisms by which solar activity can influence climate. One mechanism is based on changes in solar UV radiation. However, the conclusion seems to be that the effect of UV changes is too weak to explain the energy that enters the oceans over the solar cycle. In contrast, the amplification of solar activity by cosmic ray ionisation affecting cloud cover has the potential to explain the observed changes. This mechanism is now supported by theory, experiment, and observations. Sudden changes in cosmic ray flux in connection with Forbush decreases allow us to see the changes in each stage along the chain of the theory: from solar activity, to ionisation changes, to aerosols, and then to cloud changes. In addition, the impact of cosmic rays on the radiative budget is found to be an order of magnitude larger than the TSI changes. Additional support for a cosmic ray–climate connection is the remarkable agreement that is seen on timescales of millions and even billions of years, during which the cosmic ray flux is governed by changes in the stellar environment of the solar system; in other words, it is independent of solar activity. This leads to the conclusion that a microphysical mechanism involving cosmic rays and clouds is operating in the Earth’s atmosphere, and that this mechanism has the potential to explain a significant part of the observed climate variability in relation to solar activity.

An open question is how large secular changes in total solar irradiance can be. Current estimates range from 0.1% to outlier estimates of 0.5%; the latter would be important for climate variation. A small TSI variation, on the other hand, would mean that TSI is not responsible for climate variability. Perhaps future observations will be able to constrain TSI variability better.

Climate science in general is, at present, highly politicised, with many special interests involved. It should therefore be no surprise that the above conclusion on the role of the Sun in climate is strongly disputed. The core problem is that if the Sun has had a large influence over the Holocene period, then it should also have had a significant influence in the 20th century warming, with the consequence that the climate sensitivity to carbon dioxide would be on the low side. The observed decline in solar activity would then also be responsible for the observed slowing of warming in recent years.
Needless to say, more research into the physical mechanisms linking solar activity to climate is needed. It is useless to pretend that the problem of solar influence has been solved. The single largest uncertainty in determining the climate sensitivity to either natural or anthropogenic changes is the effect of clouds, and research into the solar effect on climate will add significantly to understanding in this area. Such efforts are only possible by acknowledging that this is a genuine and important scientific problem and by allocating sufficient research funds to its investigation.

Friday, March 15, 2019

Don Boudreaux gets it right on minimum wage laws

Here is Don's blog entry.

DB is on target.

The Do Gooders advocating minimum wage laws are not doing good. Not surprising - most of the time they either do not know the consequences of what they want or have ulterior motives that benefit themselves, not those they claim will be benefited.
-----------------------------------------
Driving earlier today to meet Russ Roberts for lunch, I heard on WTOP news radio a report on an effort underway in Maryland’s legislature to deny to low-skilled workers the ability to compete for jobs by offering to work at hourly wages below $15 – that is, as this effort is more commonly (if misleadingly) described, to raise that state’s minimum wage to $15 per hour.

The reporter of course noted that proponents of this minimum-wage diktat insist that it will help low-skilled workers. When reporting on opposition to this minimum-wage diktat – including from Maryland’s governor, Larry Hogan – the report said that “opponents say it will be too costly for businesses.”

Gov. Hogan and other opponents of this minimum-wage diktat probably did indeed say that the reason to oppose this legislation is that it will be too costly for businesses. But however real, however costly, and however unjust are such legislated higher costs for businesses, the effects of minimum wages on businesses are not the principal reason for opposing such legislation. The principal reason to oppose minimum-wage legislation is the fact that it reduces the employment opportunities open to low-skilled workers. It’s minimum-wages’ negative effect on workers, not on businesses, that should be mentioned first and foremost when discussing reasons why such legislation should not only not be enacted, but repealed where it exists.

Nearly all popular-media reporting on minimum-wage legislation reports as this legislation’s only downside the fact that minimum wages raise costs incurred by businesses. Such reporting – by ignoring the main and most sympathetic victims of minimum-wage legislation, namely low-skilled workers – falsely portrays the issue as one in which workers are pitted against business owners. If the reporting were more accurate and complete – if the reporting consistently noted that the main damage inflicted by minimum-wage legislation is inflicted on low-skilled workers – the general public would better understand that minimum-wage legislation really pits workers against workers: workers whose incomes rise as a result, against other workers rendered unemployable or whose hours of employment fall.

In short, minimum-wage legislation isn’t so much anti-business as it is anti-lowest-skilled workers.

Thursday, March 07, 2019

Hurricanes & climate change: recent U.S. landfalling hurricanes

Here is a link to Judith Curry's blog article "Hurricanes & climate change: recent U.S. landfalling hurricanes".

As Curry points out, the basis for attribution to anthropogenic warming includes models and theory that are sufficiently "incomplete" that statistical conclusions are a joint test of the models, theory, and data. Since we know that the models and theory are incomplete, there is little basis for attribution to anthropogenic warming with high confidence.

The message is that the Alarmists are Alarmists - having little reason for their claims.

Some excerpts:
----------------------------------
6.1 Detection and attribution of extreme weather events

Given the challenges to actually detecting a change in extreme weather events owing to the large impact of natural variability, the detection step is often skipped and attribution arguments are made, independent of detection. There are two general types of extreme event attribution methods that do not rely on detection: physical reasoning and fraction of attributable risk (NCA4, 2017).

The fraction of attributable risk approach examines whether the odds of occurrence of a type of extreme event have changed. A conditional approach employs a climate model to estimate the probability of occurrence of a weather or climate event within two climate states: one state with anthropogenic influence and the other state without anthropogenic influence (pre-industrial conditions). The “Fraction of Attributable Risk” framework examines whether the odds of some threshold event occurring have been increased due to manmade climate change.
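For readers who want the arithmetic behind that framework: the usual definition (not spelled out in the excerpt) is FAR = 1 − P0/P1, where P0 is the modelled probability of exceeding the event threshold in the natural-only climate state and P1 the probability in the state with anthropogenic influence. A minimal sketch, with purely illustrative numbers:

```python
def fraction_of_attributable_risk(p_natural, p_anthro):
    """FAR = 1 - P0/P1, the standard fraction-of-attributable-risk measure.

    p_natural: modelled probability of exceeding the event threshold in the
               counterfactual (pre-industrial, natural-only) climate state.
    p_anthro:  modelled probability in the climate state with anthropogenic influence.
    """
    if p_anthro <= 0:
        raise ValueError("p_anthro must be positive")
    return 1.0 - p_natural / p_anthro

# Illustrative numbers only (not from any study): if a heat threshold is exceeded
# with probability 0.01 without human influence and 0.04 with it, FAR = 0.75,
# i.e. three quarters of the risk of that event class is attributed to the forcing.
print(fraction_of_attributable_risk(0.01, 0.04))  # 0.75
```

Note that both probabilities come out of a climate model, so the resulting risk fraction is only as credible as the model that produced them, which is the point the workshop critics quoted below are making.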

Participants at the 2012 Workshop on Attribution of Climate-related Events at Oxford University questioned whether extreme event attribution was possible at all, given the inadequacies of the current generation of climate models (Nature, 2012):

“One critic argued that, given the insufficient observational data and the coarse and mathematically far-from-perfect climate models used to generate attribution claims, they are unjustifiably speculative, basically unverifiable and better not made at all.”
-----
6.2 Hurricane Sandy

Hurricane Sandy made landfall on 10/29/12 near Atlantic City, NJ. Hurricane Sandy’s most substantial impact was a storm surge. The highest measured storm surge from Sandy was 9.4 feet (at The Battery)[2]. The argument is that human-caused global warming worsened the storm surge because of sea level rise.

Curry (2018a) summarized sea level rise at The Battery. Sea level has risen 11 inches over the past century (Figure 6.1), with almost half of this sea level rise caused by subsidence (sinking of the land). Kemp et al. (2017) found that relative sea level in New York City rose by ~1.70 meters [5.5 feet] since ~575 A.D. A recent acceleration in sea level rise between 2000 and 2014 has been attributed to an increase in the Atlantic Multidecadal Oscillation and southward migration of the Gulf Stream North Wall Index. The extent to which manmade warming is accelerating sea level rise remains disputed (as summarized by Curry, 2018a).

The 2017 U.S. Climate Change Special Report (NCA4, 2017) evaluated published analyses seeking to attribute aspects related to Hurricane Sandy to human-caused global warming: e.g. sea surface temperatures, atmospheric temperatures, atmospheric moisture, and hurricane size. The analysis concluded:

“In summary, while there is agreement that sea level rise alone has caused greater storm surge risk in the New York City area, there is low confidence on whether a number of other important determinants of storm surge climate risk, such as the frequency, size, or intensity of Sandy-like storms in the New York region, have increased or decreased due to anthropogenic warming to date.”
-----
6.3 Hurricane Harvey

Several publications based on model simulations have concluded that as much as 40% of the rainfall from Hurricane Harvey was caused by human-caused global warming (Emanuel 2017; Risser and Wehner 2017).

The rationale for these assessments was that prior to the beginning of northern summer of 2017, sea surface temperatures in the western Gulf of Mexico exceeded 30 °C [86 °F] and ocean heat content was the highest on record in the Gulf of Mexico (Trenberth et al. 2017). However, El Niño–Southern Oscillation (ENSO) and Atlantic circulation patterns contributed to this heat content, and hence it is very difficult to separate out any contribution from human-caused global warming.

Landsea (2017) summarizes the arguments for more rainfall from tropical cyclones traveling over a warmer ocean. Intuitively, rainfall from hurricanes might be expected to increase with a warmer ocean, as a warmer atmosphere can hold more moisture. Simple thermodynamic calculations suggest that the amount of rainfall in the tropical latitudes would go up about 4% per °F [7% per °C] of sea surface temperature increase. Examining a 300-mile-radius circle capturing nearly all of the rain implies about 10% more total hurricane rainfall for a warming of 2–2.5 °F [1–1.5 °C]. The Gulf of Mexico has warmed about 0.7 °F [0.4 °C] in the last few decades. Assuming that all of this warming is due to manmade global warming suggests that roughly 3% of hurricane rainfall today can be reasonably attributed to manmade global warming. Hence, only about 2 inches of Hurricane Harvey’s peak amount of 60 inches can be linked to manmade global warming.
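Landsea’s chain of arithmetic is easy to check. The sketch below simply reproduces it; the 7% per °C figure is the thermodynamic scaling quoted above, and the other inputs are the numbers in the paragraph.

```python
# Back-of-the-envelope check of Landsea's (2017) rainfall attribution arithmetic.
CC_SCALING = 0.07           # ~7% more rainfall per deg C of SST warming (quoted above)
gulf_warming_C = 0.4        # recent Gulf of Mexico SST warming, deg C (quoted above)
harvey_peak_rain_in = 60.0  # Harvey's peak rainfall total, inches (quoted above)

attributable_fraction = CC_SCALING * gulf_warming_C           # ~0.028, i.e. roughly 3%
attributable_rain_in = attributable_fraction * harvey_peak_rain_in

print(f"Attributable fraction of rainfall: ~{attributable_fraction:.1%}")   # ~2.8%
print(f"Attributable rainfall: ~{attributable_rain_in:.1f} of 60 inches")   # ~1.7 inches
```

The result matches the “roughly 3%” and “about 2 inches” figures in the text, and it already assumes the most generous case, namely that all of the recent Gulf warming is manmade.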
-----
6.4 Hurricane Irma

Hurricane Irma made landfall on September 10, 2017 as a Category 4 hurricane. Hurricane Irma set several records. Irma was the 5th strongest Atlantic hurricane on record. Irma was the 2nd strongest Atlantic storm in recorded history in terms of its accumulated cyclone energy – a function both of intensity (wind speed) and duration of the storm. Irma is tied with the 1932 Cuba Hurricane for the longest time spent as a Category 5 hurricane. Hurricane Irma maintained 185-mph winds for 37 hours — longer than any storm on record globally.[3]

Irma formed and rapidly intensified to a major hurricane in the eastern Atlantic, where sea surface temperatures were 26.5 °C (80 °F). The rule of thumb for a major hurricane is 28.5 °C. Clearly, simple thermodynamics associated with SST were not driving this intensification, but rather favorable atmospheric dynamics. In particular, wind shear was very weak. Further, the atmospheric circulation field (e.g. stretching deformation) was very favorable for spinning up this hurricane (Curry, 2017).

While the media made much ado about a global warming link to Irma’s intensity, there have been no published journal articles to date that have examined this issue. This is presumably because the sea surface temperatures during Irma’s development and intensification were relatively cool.
-----
6.6 Conclusions

Convincing detection and attribution of individual extreme weather events such as hurricanes requires:
a very long time series of high-quality observations of the extreme event
an understanding of the variability of extreme weather events associated with multi-decadal ocean oscillations, which requires at least a century of observations
climate models that accurately simulate both natural internal variability on timescales of years to centuries and the extreme weather events

Of the four hurricanes considered here, only the rainfall in Hurricane Harvey passes the detection test, given that it is an event unprecedented in the historical record for a continental U.S. landfalling hurricane. Arguments attributing the high levels of rainfall to near record ocean heat content in the western Gulf of Mexico are physically plausible. The extent to which the high value of ocean heat content in the western Gulf of Mexico can be attributed to manmade global warming is debated. Owing to the large interannual and decadal variability in the Gulf of Mexico (e.g. ENSO), it is not clear that a dominant contribution from manmade warming can be identified against the background internal climate variability (Chapter 4).

Saturday, March 02, 2019

Yet another example of climate science that isn’t

Ross McKitrick destroys a new paper by Ben Santer et al. in Nature Climate Change.

Here is the link.

After reading the article, you will appreciate that those who are quick to call others "climate deniers" are, perhaps, those most likely to lack the understanding of what might be wrong with the "alarmist" view.

Here are some excerpts.
-----------------------------------------------------
Ben Santer et al. have a new paper out in Nature Climate Change arguing that with 40 years of satellite data available they can detect the anthropogenic influence in the mid-troposphere at a 5-sigma level of confidence. This, they point out, is the “gold standard” of proof in particle physics, even invoking for comparison the Higgs boson discovery in their Supplementary information.

----------


Their results are shown in Figure 1 of their paper (not reproduced here). It is not a graph of temperature, but of an estimated “signal-to-noise” ratio. The horizontal lines represent sigma units which, if the underlying statistical model is correct, can be interpreted as points where the tail of the distribution gets very small. So when the lines cross a sigma level, the “signal” of anthropogenic warming has emerged from the “noise” of natural variability by a suitable threshold. They report that the 3-sigma boundary has a p value of 1/741 while the 5-sigma boundary has a p value of 1/3.5 million. Since all signal lines cross the 5-sigma level by 2015, they conclude that the anthropogenic effect on the climate is definitively detected.
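The sigma-to-probability conversion can be checked directly from the standard Normal upper tail; the quoted 1/741 and 1/3.5 million correspond to one-sided tail probabilities at 3 and 5 sigma (my reading of their numbers, easily verified):

```python
from scipy.stats import norm

# One-sided upper-tail probabilities of the standard Normal at 3 and 5 sigma.
for sigma in (3, 5):
    p = norm.sf(sigma)  # survival function, P(Z > sigma)
    print(f"{sigma}-sigma: p = {p:.3g} (about 1 in {1/p:,.0f})")

# 3-sigma: p ~ 1.35e-3, about 1 in 741
# 5-sigma: p ~ 2.87e-7, about 1 in 3.5 million
```

Whether those Gaussian tail probabilities are the right ones to attach to the sigma lines is exactly what McKitrick challenges below.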

I will discuss four aspects of this study which I think weaken the conclusions considerably: (a) the difference between the existence of a signal and the magnitude of the effect; (b) the confounded nature of their experimental design; (c) the invalid design of the natural-only comparator; and (d) problems relating “sigma” boundaries to probabilities.
---------
Confounded signal design
So they haven’t identified a distinct anthropogenic fingerprint. What they have detected is that observations exhibit a better fit to models that have the Figure 2 warming pattern in them, regardless of cause, than those that do not.
--------
Invalid natural-only comparator
The above argument would matter less if the “nature-only” comparator controlled for all known warming from natural forcings. But it doesn’t, by construction.

Everything depends on how valid the natural variability comparator is. We are given no explanation of why the authors believe it is a credible analogue to the natural temperature patterns associated with post-1979 non-anthropogenic forcings. It almost certainly isn’t.
----------
t-statistics and p values
The probabilities associated with the sigma lines in Figure 1 are based on the standard Normal tables. People are so accustomed to the Gaussian (Normal) critical values that they sometimes forget that they are only valid for t-type statistics under certain assumptions that need to be tested. I could find no information in the Santer et al. paper that such tests were undertaken.

I will present a simple example of a signal detection model to illustrate how t-statistics and Gaussian critical values can be very misleading when misused.

A simple way of investigating causal patterns in time series data is using an autoregression. Simply regress the variable you are interested in on itself lagged once, plus lagged values of the possible explanatory variables. Inclusion of the lagged dependent variable controls for momentum effects, while the use of lagged explanatory variables constrains the correlations to a single direction: today’s changes in the dependent variable cannot cause changes in yesterday’s values of the explanatory variables. This is useful for identifying what econometricians call Granger causality: when knowing today’s value of one variable significantly reduces the mean forecast error of another variable.
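A minimal sketch of that kind of lagged regression in Python is given below; the data file and column names are hypothetical stand-ins, and the original analysis was presumably run in a standard econometrics package.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: a temperature anomaly 'temp' plus candidate forcings
# 'anthro' and 'natural'.  File name and column names are assumptions.
df = pd.read_csv("forcings_and_temp.csv")

# Autoregression: today's temperature on its own lag and the lagged forcings.
X = pd.DataFrame({
    "l_temp":    df["temp"].shift(1),
    "l_anthro":  df["anthro"].shift(1),
    "l_natural": df["natural"].shift(1),
})
X = sm.add_constant(X)
y = df["temp"]

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())  # t-statistics are only trustworthy if the series are stationary
```

The lags do the causal work: yesterday’s forcing is allowed to explain today’s temperature, but not the other way around.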

My temperature measure (“Temp”) is the average MT temperature anomaly in the weather balloon records. I add up the forcings into “anthro” (ghg + o3 + aero + land) and “natural” (tsi + volc + ESOI).

I ran the regression Temp = a1 + a2*l.Temp + a3*l.anthro + a4*l.natural, where a lagged value is denoted by an “l.” prefix. The results over the whole sample length (the full regression output appears in the original post) are as follows:

The coefficient on “anthro” is more than twice as large as that on “natural” and has a larger t-statistic. Its p-value indicates that, if there were no true effect, the probability of a detection this strong would be about 1 in 2.4 billion. So I could conclude based on this regression that anthropogenic forcing is the dominant effect on temperatures in the observed record.

The t-statistic on anthro provides a measure much like what the Santer et al. paper shows. It represents the marginal improvement in model fit based on adding anthropogenic forcing to the time series model, relative to a null hypothesis in which temperatures are affected only by natural forcings and internal dynamics. Running the model iteratively while allowing the end date to increase from 1988 to 2017 yields the results shown in blue (Line #1) in Figure 4 of the original post.


It looks remarkably like Figure 1 from Santer et al., with the blue line crossing the 3-sigma level in the late 90s and hitting about 8 sigma at the peak.

But there is a problem. This would not be publishable in an econometrics journal because, among many other things, I haven’t tested for unit roots. I won’t go into detail about what they are; I’ll just point out that if time series data have unit roots they are nonstationary and you can’t use them in an autoregression, because the t-statistics follow a nonstandard distribution and Gaussian (or even Student’s t) tables will give seriously biased probability values.

I ran Phillips-Perron unit root tests and found that anthro is nonstationary, while Temp and natural are stationary. This problem has already been discussed and grappled with in some econometrics papers (see for instance here and the discussions accompanying it, including here).

A possible remedy is to construct the model in first differences. If you write out the regression equation at time t and also at time (t-1) and subtract the two, you get d.Temp = a2*l.d.Temp + a3*l.d.anthro + a4*l.d.natural (the constant drops out in the subtraction), where the “d.” prefix means first difference and “l.d.” means lagged first difference. First differencing removes the unit root in anthro (almost – probably close enough for this example) so the regression model is now properly specified and the t-statistics can be checked against conventional t-tables.
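Continuing the hypothetical sketch from above, the unit-root check and the first-differenced respecification might look like this; the ADF test from statsmodels is used here only as a readily available stand-in for the Phillips–Perron test McKitrick actually ran (both test the same null of a unit root, with different corrections for serial correlation).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("forcings_and_temp.csv")  # same hypothetical file as above

# Unit-root check on each series (ADF as a stand-in for Phillips-Perron).
for name in ("temp", "anthro", "natural"):
    stat, pvalue, *_ = adfuller(df[name].dropna())
    print(f"{name}: ADF stat = {stat:.2f}, p = {pvalue:.3f}")  # large p => cannot reject a unit root

# First-difference respecification: d.Temp on lagged d.Temp, d.anthro, d.natural.
# No constant, since it drops out when the equation at t-1 is subtracted from t.
d = df[["temp", "anthro", "natural"]].diff()
Xd = pd.DataFrame({
    "l_d_temp":    d["temp"].shift(1),
    "l_d_anthro":  d["anthro"].shift(1),
    "l_d_natural": d["natural"].shift(1),
})
model_d = sm.OLS(d["temp"], Xd, missing="drop").fit()
print(model_d.summary())  # with stationary regressors, ordinary t-tables now apply
```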

The coefficient magnitudes remain comparable but—oh dear—the t-statistic on anthro has collapsed from 8.56 to 1.32, while those on natural and lagged temperature are now larger. The problem is that the t-ratio on anthro in the first regression was not a true t-statistic; instead it followed a nonstandard distribution with much larger critical values. When compared against t tables it gave the wrong significance score for the anthropogenic influence. The t-ratio in the revised model is more likely to be properly specified, so using t tables is appropriate.

The corresponding graph of t-statistics on anthro from the second model over varying sample lengths is shown in Figure 4 as the green line (Line #2) at the bottom of the graph. Signal detection clearly fails.

What this illustrates is that we don’t actually know what are the correct probability values to attach to the sigma values in Figure 1. If Santer et al. want to use Gaussian probabilities they need to test that their regression models are specified correctly for doing so. But none of the usual specification tests were provided in the paper, and since it’s easy to generate a vivid counterexample we can’t assume the Gaussian assumption is valid.