Thursday, November 28, 2019

How to Increase Taxes on the Rich

Here is Greg Mankiw's article on taxes.

GM makes some excellent points, not the least of which is how deceptively the media and politicians talk about various tax plans.
-----------------------------------
I would like to begin with what I hope is a noncontroversial proposition: Rich people are 
not all the same. 

I bring up this fact because we live in a time when inequality is high, when demonizing 
the rich is popular in some political circles, and when various policies are being proposed to
increase the redistribution of economic resources. In this brief essay, I won’t comment on 
whether we should redistribute more. That question is hard, and it involves less economics than political philosophy, which is not my comparative advantage. Rather, I will assume we are going
to increase redistribution and discuss alternative ways to do so. As we evaluate the many
proposals, it is worth keeping in mind some of the ways rich people differ from one another. 

Consider two hypothetical CEOs of major corporations. Each of them earns, say, $10 
million a year, putting them in the top one-hundredth of one percent of the income distribution. 
But other than in their incomes, the two executives are very different. 

One executive, whom I will call Sam Spendthrift, uses all his money living the high life. 
He drinks expensive wine, drives Ferraris, and flies his private jet to lavish vacations. He gives 
large amounts to political parties and candidates, hoping these contributions will get him an 
ambassadorship someday. When that does not work, he spends large sums financing his own 
quixotic run for the presidency. 

The other executive, whom I will call Frank Frugal, makes just as much money as Sam, 
but he takes a different approach to his good fortune. He lives modestly, saving most of his earnings 
and accumulating a sizable nest egg. He forgoes the opportunity to influence the political 
process. Instead, he invests his money in successful start-ups, which he is quite good at 
identifying. He plans to leave some of his wealth to his children, grandchildren, nephews, and 
nieces. Most of his wealth, however, he plans to bequeath to the endowment of his alma mater, 
where it will support financial aid for generations to come. 

Ask yourself: Who should pay higher taxes? Sam Spendthrift or Frank Frugal? 

I can see the case for taxing them the same. After all, they have the same earnings. One 
might argue that how they choose to spend their money is not an issue for the government to 
judge or influence. 

I am more inclined, however, to think Mr. Frugal should be taxed less than Mr. 
Spendthrift. The argument is Pigovian. Mr. Frugal’s behavior confers positive externalities, both 
on members of his extended family and on the beneficiaries of his charitable bequest. Moreover, 
by increasing the economy’s capital stock, his saving reduces the return to capital and increases 
labor productivity and real wages. If one is concerned about the income distribution, this 
pecuniary externality can also be viewed as desirable. 

What I find hard to believe is that Mr. Frugal should face higher taxes than Mr. 
Spendthrift. But that is what occurs under some of the policy proposals now being widely 
discussed. I am referring in particular to the wealth taxes advocated by Senators Elizabeth 
Warren and Bernie Sanders, both of whom are now running for the Democratic nomination for 
president. These taxes, if successfully implemented, would hit Frank Frugal hard but would be 
much easier on Sam Spendthrift. 

There are better ways to redistribute economic resources, ways that do not penalize 
frugality. In particular, I am attracted to the policy now being championed by Andrew Yang, the 
former tech executive and entrepreneur who is also running for the Democratic nomination. Mr. 
Yang proposes enacting a value-added tax and using the revenue to provide every American 
adult with a universal basic income of $1,000 per month, which he calls a “freedom dividend.” 

It’s easy to see how the Yang proposal would work. Value-added taxes are essentially 
sales taxes, and they have proven remarkably efficient in raising revenue in many European 
countries. And because the dividend is universal, it would be simple to administer. 

The idea of a universal basic income is not new, but it is bold. Of course, the idea has its 
critics. But, from my perspective, the critics often rely on arguments that do not hold up under 
scrutiny. Let me use an example to explain why. 

Consider two plans aimed at providing a social safety net. (For our purposes here, let’s 
keep things simple by assuming both are balanced-budget plans.) 

A. A means-tested transfer of $1,000 per month aimed at the truly needy. The full
amount goes to those with zero income. The transfer is phased out: recipients
lose 20 cents of it for every dollar of income they earn. These transfers are
financed by a progressive income tax: the government taxes 20 percent of all
income above $60,000 per year.

B. A universal transfer of $1,000 per month for every person, financed by a 20
percent flat tax on all income. 

Would you prefer to live in a society with safety net A or safety net B? 

When I posed this question to a group of Harvard undergraduates, over 90 percent 
concluded that plan A is better. Their arguments were roughly as follows: Plan A targets transfer 
payments on those who need the money most. As a result, it requires a smaller tax increase, and 
the taxes are levied only on those with high incomes. Plan B is crazy. Why should rich people 
like Bill Gates and Jeff Bezos receive a government transfer? They don’t need it, and we would 
need to raise taxes more to pay for it. 

Superficially, those arguments might seem compelling, but here is the rub: The two 
policies are equivalent. Look at the net payment (that is, taxes less transfers). Everyone is 
exactly the same under the two plans. A person with zero income gets $12,000 per year in both 
cases. A person with annual income of $60,000 gets zero in both cases. A person with income of 
$160,000 pays $20,000 in both cases. And everyone always faces an effective marginal tax rate 
of 20 percent. 
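The equivalence is easy to verify mechanically. Here is a minimal sketch in Python; the function names are mine, not Mankiw's, and the parameters are taken straight from the two plans above:

```python
def plan_a_net(income):
    """Net payment (taxes minus transfers) under means-tested plan A."""
    # $1,000/month transfer, phased out 20 cents per dollar of income
    transfer = max(0.0, 12_000 - 0.20 * income)
    # 20 percent tax on all income above $60,000 per year
    tax = 0.20 * max(0.0, income - 60_000)
    return tax - transfer

def plan_b_net(income):
    """Net payment under plan B: 20% flat tax minus a $12,000 universal transfer."""
    return 0.20 * income - 12_000

# The net payment is identical at every income level.
for income in [0, 30_000, 60_000, 160_000, 1_000_000]:
    assert abs(plan_a_net(income) - plan_b_net(income)) < 1e-6

print(plan_a_net(0))        # -12000.0  (receives $12,000/year in both plans)
print(plan_a_net(60_000))   # 0.0       (breaks even in both plans)
```

The loop confirms the essay's point: the two schedules differ only in how the same net transfer is framed.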

In other words, everyone’s welfare is identical under the two policies, and everyone faces 
the same incentive. The difference between plan A and plan B is only a matter of framing. 

This example teaches two lessons. First, if you find something like plan A attractive and 
you recognize the equivalence of plan A and plan B, you should find something like plan B 
attractive. Many critics of universal basic income fail to make this leap because they do not 
notice the equivalence of these two approaches. Once you see the equivalence, plan B is easier to 
embrace. And it looks even better when you realize that universal benefits and flat taxes are 
easier to administer than means-tested benefits and progressive taxes. 

The second lesson from this example is how misleading it can be to focus on taxes and 
transfers separately. It is accurate to say that plan A has lower taxes, more progressive taxes, and 
more progressive transfers. But so what? Those facts do not stop it from being precisely 
equivalent to plan B. The equivalence is clear only when taxes and transfers are considered 
together. 

I stress this fact because it is all too common to see academic papers and media articles 
describe the distribution of taxes without considering the distribution of the transfers they 
finance. Such presentations of the data are incomplete to the point of being deceptive. With 
incomplete reporting, one might be led to conclude that a society using plan A is more progressive 
than a society using plan B. But that is not the case because the policies are functionally the 
same. 

Finally, I should note that the safety net described by either plan A or plan B is just a 
version of the negative income tax that Milton Friedman proposed in his book Capitalism and 
Freedom back in 1962. I remember reading about it as a student 40 years ago and thinking it was 
a good idea. And I was not alone in that judgment: In 1968, more than 1000 economists signed a 
letter endorsing such a plan, including luminaries like James Tobin, Paul Samuelson, Peter 
Diamond, and Martin Feldstein. Andrew Yang’s version, which focuses on taxing consumption 
rather than income, is even better than Friedman’s, because it wouldn’t distort the incentive to 
save and invest. 

Could 1000 economists all be wrong? Well, yes, they could. But my judgment is that, in 
this case, they are not. A universal basic income, financed by an efficient tax like a value-added 
tax (or perhaps a carbon tax), might be a social safety net well worth considering. 

I am not predicting that this idea will have much success in the current political 
environment. But I find it reassuring that good ideas keep popping up in the political discourse. 
Maybe someday they might even influence actual policy.

Sunday, November 24, 2019

New York State Government’s “solution” to CO2 emissions shows, once again, that Government is not the solution.

From the Institute For Energy Research.

Trust Government to screw things up and make a problem worse.
----------------------------------------
New York’s Con Ed has had two major power outages within a two-week period—and the outages probably will continue given the state’s new policies that will only destabilize its electric grid further. The state will not allow new natural gas pipelines, which has forced moratoria on new natural gas hook-ups in Westchester County, Brooklyn, Queens, and on Long Island. Furthermore, the state has approved a climate bill that mandates an 85-percent reduction in all greenhouse gas emissions—from vehicles, manufacturing, agriculture, etc.—and “offsets” for the remaining 15 percent by 2050 from 1990 levels. The legislation also mandates the state to generate 70 percent of its electricity from renewables by 2030 and 100 percent by 2040. In 2018, 26.4 percent of the state’s electricity came from renewable energy, which was less than the 28 percent in 2017. The bill, the Climate and Community Protection Act, establishes a Climate Action Council to ensure that the state meets its targets.

The Outages

An outage on July 13 affected 42 blocks of Manhattan between Fifth Avenue and the Hudson River for five hours. The loss of power interrupted Broadway shows, knocked out traffic signals, disrupted subway service, and stalled more than 400 elevators that were in use. Over 72,000 of the utility’s metered customers had no electricity due to what was determined to be an equipment failure (transformer fire).

On July 22, a second outage hit thousands of customers in parts of Brooklyn due to the summer heatwave. On the previous day, Sunday, July 21, scattered outages kept over 50,000 customers in the dark throughout parts of the city and Westchester County. As temperatures spiked, circuits in parts of Brooklyn failed, which meant that Con Ed had to shift some load onto overhead lines that were already delivering power to customers affected by the heatwave. To avoid exceeding the capacity of the lines and possibly damaging equipment, Con Ed switched off electricity starting Sunday evening. Demand on Sunday reached a weekend record of 12,063 megawatts, slightly higher than the company had predicted, but still below the system’s maximum capacity of 13,200 megawatts.

New York’s Debilitating Policies

The moratoria on natural gas hook-ups come from the lack of adequate pipeline capacity that results from the state not approving permits. Since 2016, the New York Department of Environmental Conservation has blocked permits for new natural gas pipelines that would increase natural gas supplies to New York. The latest pipeline permit refusal was for the Northeast Supply Enhancement project, a 24-mile pipeline that would deliver about 400 million cubic feet of gas per day from coastal New Jersey to the western end of Long Island. It was the second time the state has blocked the project. This is a deliberate restriction of supplies to consumers.

In response to the ongoing blockade of the pipelines, two of New York’s biggest gas utilities, Consolidated Edison and National Grid, have indicated that they will not provide new gas connections to customers in their service areas in and around New York City. About 800,000 New Yorkers are living in communities subject to gas-hookup moratoria. Because natural gas is locked out, New Yorkers face increased carbon dioxide emissions and higher costs, since gas shortages prevent buildings from switching away from fuel oil. Switching from heating oil to natural gas reduces carbon dioxide emissions by about 27 percent. Fuel switching has reduced heating-oil consumption in the region by about 900,000 barrels per year and carbon dioxide emissions in New York by about 200,000 tons per year. Without natural gas as an option, these reductions will not continue.

Natural gas matters for more than heating: gas-fired electricity generation in New York has nearly doubled since 2004. As New York retires the two nuclear reactors at Indian Point at the direction of Governor Cuomo, the state will become even more dependent on natural gas for electricity production. Together, these reactors generate about 25 percent of the electricity used in New York City, and they produce no carbon dioxide or criteria pollutants.

New Yorkers already face higher electricity prices than the average U.S. electricity customer. In 2018, residential electricity prices in New York averaged 18.53 cents per kilowatt-hour—44 percent higher than the average U.S. residential electricity price of 12.89 cents per kilowatt-hour.

Given that the New York bill mandates 100 percent renewables by 2040, the state believes that wind and solar will be the answer to its electricity needs. But opposition is looming in upstate New York communities, where about 300 megawatts of new wind capacity and 100 megawatts of new solar capacity were halted. According to the American Wind Energy Association, no wind projects are currently under construction in New York. As a result, Governor Cuomo is looking toward offshore wind. When he signed the Climate and Community Protection Act, he announced the selection of two bids for offshore wind projects totaling almost 1,700 megawatts—an 880-megawatt project 30 miles off the coast of Long Island and an 816-megawatt project 14 miles from Manhattan that will provide electricity to New York City.

One of the bill’s mandates requires utilities to buy 9,000 megawatts of offshore wind-generated electricity by 2035. That would mean constructing some 900 10-megawatt turbines off the coast of New York City and Long Island. Offshore wind is expensive. Based on current state estimates for similar projects, the capital costs for these wind turbines will total $48 billion, which the ratepayers will have to pay. If this and other targets are not met for new renewables and energy storage capacity, the Public Service Commission (PSC) will demand that the utilities buy renewable energy credits or pay penalties.
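As a quick sanity check of those figures (the per-kilowatt cost at the end is derived here, not a number stated in the article):

```python
# Checking the offshore wind arithmetic using the figures cited above.
mandate_mw = 9_000        # megawatts of offshore wind mandated by 2035
turbine_mw = 10           # assumed size of each turbine
capital_cost = 48e9       # state-estimated capital cost, in dollars

turbines = mandate_mw / turbine_mw               # matches the "900 turbines" claim
cost_per_kw = capital_cost / (mandate_mw * 1_000)  # implied capital cost per kilowatt

print(int(turbines))       # 900
print(round(cost_per_kw))  # 5333 dollars per kilowatt of capacity
```

The implied roughly $5,333 per kilowatt is simply $48 billion spread over 9,000 megawatts; it gives a sense of why the article calls offshore wind expensive.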

Utilities are also required to fund 6,000 megawatts of solar panels, which is more than 21 times the 281 megawatts of new solar panels for which the state subsidized construction last year. Assuming it takes six acres of panels to generate one megawatt, and that sites will be a maximum of 150 acres, this mandate will necessitate adding at least 56 square miles of solar panels (larger than the city of Buffalo) across 240 sites.
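The land-use arithmetic in that paragraph can be reproduced directly; the acreage and site-size figures are the article's stated assumptions:

```python
# Reproducing the land-use arithmetic behind the solar mandate.
mandate_mw = 6_000        # megawatts of solar the utilities must fund
acres_per_mw = 6          # panel density assumed by the article
max_site_acres = 150      # assumed maximum size of a single site
acres_per_sq_mile = 640

total_acres = mandate_mw * acres_per_mw          # 36,000 acres
square_miles = total_acres / acres_per_sq_mile
min_sites = total_acres / max_site_acres
ratio_to_last_year = mandate_mw / 281            # vs. last year's 281 MW

print(square_miles)           # 56.25 -- the article's "at least 56 square miles"
print(int(min_sites))         # 240 sites
print(round(ratio_to_last_year, 1))  # 21.4 -- "more than 21 times"
```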

The bill also requires subsidies for 3,000 megawatts of “energy storage capacity” to back up the intermittent sources (wind and solar), which are not always available to generate electricity.

Conclusion

New Yorkers can look forward to higher electricity prices and future blackouts as their electricity sector is transformed by the Climate and Community Protection Act, which mandates 100-percent renewable generation by 2040. The state’s residential customers are already paying 44 percent more for electricity than the average U.S. residential price. The choices the state government is making are clearly not in the best interest of its residents because they will raise prices and make the New York electric system less reliable. The consequences of these actions will not be felt immediately, but they will be real for residential and business consumers when they begin to occur after the next election. New Yorkers have become guinea pigs in an experiment they may very well come to regret.

Friday, November 15, 2019

Arm teachers if you want to stop mass shootings in schools

John Lott at townhall.com.

JL is an expert concerning the effect of gun laws.  His opponents charge that he is a pawn of gun groups, that his research is flawed, and that he and his research are dishonest.  Numerous critics have published research purporting to show that JL's research conclusions are wrong.  I have read JL's research and that of his critics.  It is JL's research that is correctly done and his critics' research that is not.  In many cases, his critics even lie about the data and what it implies.

JL is the right source to go to for the truth, not his critics.

Contrast the facts and perspective presented by JL, below, with what your sources tell you.  It's an easy way to tell if your sources are credible.
---------------------------------------
Another school shooting, this time by a 15-year-old at Saugus High School in California, another quest for answers. Yet, 20 years after Columbine, the United States is still looking for how to stop mass public shootings. The rest of the world, where mass public shootings are actually much more common, is also looking for solutions. Russia, France, Finland, and Norway are among the European countries that have experienced far more deaths per capita from these attacks.

Change is coming, if slowly, in the United States. Earlier this year Florida and Texas passed major improvements to their laws that are significantly increasing the number of teachers with guns at school. Both bills received strong support from those states’ Republican governors.

Florida’s bill removed a limitation that had allowed only non-classroom-based teachers to be armed at school. Texas removed the cap on the number of school personnel that can carry firearms at schools.

It isn’t by coincidence that every mass public shooting in Europe since at least 1990 has occurred in an area where general, law-abiding citizens are banned from carrying firearms for protection. The same is true of recent mass public shootings in New Zealand, Brazil, and the Netherlands. That pattern has also held for 94% of such attacks in the U.S. since 1950.

Moms Demand Action, a gun control advocacy group funded by Michael Bloomberg, argues that the bills in Texas and Florida “would make school a much more dangerous place for our children.” By contrast, President Trump keeps proposing arming teachers and staff at schools, saying: “I’m telling you that would work.”

But 20 states currently allow teachers and staff to carry guns to varying degrees on school property, so we don’t need to guess about how safe these schools are. Some states have had these rules for decades. In recent decades, only California and Rhode Island have moved to be more restrictive. The Crime Prevention Research Center, of which I am the president, has just released a new report looking at all the school shootings of any type in the United States from 2000 through 2018.

During these years, Utah, New Hampshire, Rhode Island, and parts of Oregon allowed all permitted teachers and staff to carry, without any additional training requirements. Other states leave it to the discretion of the local superintendent or school board. As of December 2018, teachers carried in more than 30 percent of Texas school districts. And in September 2018, Ohio teachers were carrying in over 200 school districts.

Roughly 5 percent of Utah teachers carry permitted concealed handguns at school, according to Clark Aposhian, the senior member of Utah's Concealed Firearm Review Board. Support staff — including janitors, librarians, secretaries, and lunch staff — carry at a higher estimated rate of between 10 and 12 percent.

Carrying in a school is no different than in a grocery store, movie theater, or restaurant. Seventeen million Americans have concealed handgun permits — which is 8.75 percent of the adult population outside of permit-unfriendly California and New York. Nobody knows whether the person next to them might have a gun, unless it happens to be needed.

We found 306 cases of gunshots on school property, 48 of which were suicides. Not counting suicides, 193 people died and 267 were injured in these incidents. Four cases were simply instances of accidental gunshots by police officers.

The rate of shootings, and the number of people killed in them, has increased significantly since 2000. The yearly average number of deaths doubled from the 2001–2008 period to the 2009–2018 period (regardless of whether one excludes gang fights and suicides).

This increase has occurred entirely among schools that don’t let teachers carry guns. Outside of suicides or gang violence in the wee hours of the morning, there has yet to be a single case of someone being wounded or killed from a shooting at a school that allows teachers and staff to carry guns during school hours. Indeed, the one shooting occurred at 2:20 AM in a parking lot when no armed teachers would have been around.

There haven't been any serious accidents. No student has ever gotten hold of a teacher’s gun, and the one accidental discharge by a teacher occurred outside of school hours. The teacher suffered only very minor injuries.

School insurance premiums haven't risen at all from teachers being allowed to carry. “From what I’ve seen in Utah, [school insurance] rates have not gone up because of guns being allowed,” says Curt Oda, former president of the Utah Association of Independent Insurance Agents. Nor has a survey of six other states shown any increase in insurance costs.

Police are important, but they can't be everywhere at once. Even if an officer is stationed at the school, mass public shooters are most likely to target the officer first. We’ve seen this time and again at malls, nightclubs, and schools. This also makes the job of the police much safer. Concealed carry means that killers won't know who is armed. Even if they take an officer by surprise, they must worry that they are revealing their position to someone else who can stop them.

Not a single person has been injured or killed by a teacher’s gun. But even more amazing, not a single person has been shot during school hours. Gun control groups may paint fearful pictures of what might go wrong with teachers carrying, but that fear gets harder to push given these programs’ successes. This research provides evidence that armed teachers deter attacks.

It is past time for us to do something that really works. With another mass public shooting in California, it is time for California to recognize that its gun control laws might actually be part of the problem. Let’s stop leaving our schoolchildren as sitting ducks.

Wednesday, November 13, 2019

The decline of the United States

Here is a column by Walter Williams, Professor of Economics at George Mason University.

WW is on target.  Freedom in the US has been declining for years and is now doing so at an increasing rate.

The sad part is that those who will suffer the consequences will fail to recognize that they caused those consequences by not defending the freedom of others and by insisting on a free lunch at the expense of others at every opportunity.
----------------------------------------------------
A recent survey conducted for the Victims of Communism Memorial Foundation by YouGov, a research and data firm, found that 70% of millennials say they are likely to vote socialist and that one in three millennials saw communism as “favorable.”

Let examine this tragic vision in light of the Fraser Institute’s recently released annual study “Economic Freedom of the World,” prepared by Professors James Gwartney, Florida State University; Robert A. Lawson and Ryan Murphy of Southern Methodist University; and Joshua Hall, West Virginia University, in cooperation with the Economic Freedom Network.

Hong Kong and Singapore maintained their lead as the world’s most economically free countries — although China’s heavy hand threatens Hong Kong’s top ranking. Rounding out the top 10 are New Zealand, Switzerland, the United States, Ireland, the United Kingdom, Canada, Australia and Mauritius. By the way, after having fallen to 16th in 2016, the U.S. has staged a comeback, returning to the top five economically free countries in the world.

What statistics go into the Fraser Institute’s calculation of economic freedom? The report measures the ability of individuals to make their own economic decisions by analyzing the policies and institutions of 162 countries and territories. These include regulation, freedom to trade internationally, size of government, soundness of the legal system, security of private property rights, and government spending and taxation.

Fraser Institute scholar Fred McMahon says, “Where people are free to pursue their own opportunities and make their own choices, they lead more prosperous, happier and healthier lives.” The evidence for his assessment is: Countries in the top quartile of economic freedom had an average per-capita GDP of $36,770 in 2017 compared with $6,140 for bottom quartile countries. Poverty rates are lower. In the top quartile, 1.8% of the population experienced extreme poverty ($1.90 a day) compared with 27.2% in the lowest quartile. Life expectancy is 79.5 years in the top quartile of economically free countries compared with 64.4 years in the bottom quartile.

The Fraser Institute’s rankings of other major countries include Japan (17th), Germany (20th), Italy (46th), France (50th), Mexico (76th), India (79th), Russia (85th), China (113th) and Brazil (120th). The least free countries are Venezuela, Argentina, Ukraine and nearly every African country with the most notable exception of Mauritius. By the way, Argentina and Venezuela used to be rich until they bought into socialism.

During the Cold War, leftists made a moral equivalency between communist totalitarianism and democracy. W. E. B. Du Bois, writing in the National Guardian (1953) said, “Joseph Stalin was a great man; few other men of the 20th century approach his stature.” Walter Duranty called Stalin “the greatest living statesman … a quiet, unobtrusive man.” George Bernard Shaw expressed admiration for Mussolini, Hitler and Stalin. Economist John Kenneth Galbraith visited Mao’s China and praised Mao Zedong and the Chinese economic system. Gunther Stein of the Christian Science Monitor also admired Mao and declared ecstatically that “the men and women pioneers of Yenan are truly new humans in spirit, thought and action.” Michel Oksenberg, President Jimmy Carter’s China expert, complained that “America (is) doomed to decay until radical, even revolutionary, change fundamentally alters the institutions and values,” and urged us to “borrow ideas and solutions” from China.

Leftists exempted communist leaders from the harsh criticism directed toward Adolf Hitler, even though communist crimes against humanity made Hitler’s slaughter of 11 million noncombatants appear almost amateurish. According to Professor R.J. Rummel’s research in “Death by Government,” from 1917 until its collapse, the Soviet Union murdered or caused the death of 61 million people, mostly its own citizens. From 1949 to 1976, Mao’s Communist regime was responsible for the death of as many as 76 million Chinese citizens.

Today’s leftists, socialists and progressives would bristle at the suggestion that their agenda differs little from that of past tyrants. They should keep in mind that the origins of the unspeakable horrors of Nazism, Stalinism and Maoism did not begin in the ’20s, ’30s and ’40s. Those horrors were simply the result of a long evolution of ideas leading to a consolidation of power in the central government in the quest for “social justice.”

So, how good are climate change models? Not so good.

From sepp.org, the Science & Environmental Policy Project.

If engineers were this far off, your iPhones wouldn't work, bridges would fall down, and skyscrapers would collapse.

A look back: Climategate

"My finest hour", by James Delingpole.

JD is on target.
---------------------------------------
Every journalist dreams of the scoop that will make his name. Ten years ago this month I finally got mine – but I’m still not altogether sure it was worth it. On the upside, my story went viral, got me a much bigger audience – from the United States to Oz – and established my spiky, edgy reputation for in-your-face contrarianism. On the downside, though, for every ardent fan it made me it probably lost me a couple more: ‘But he used to be so funny and clever. Now he’s just one of those anti-science, climate change denier cranks…’.

You can search a whole lifetime for a scoop but when it comes, it often comes unbidden. Mine dropped into my lap when I was sitting at my desk one morning, wondering what to write next for my Telegraph blog, when I noticed an interesting story starting to break on the Watts Up With That? website. All I did was top, tail, adapt it and popularise it by giving it a bit of snark, context and spin. Then I nicked the title from a commenter called ‘Bulldust’ (an Aussie, as it happens). Et voilà! Climategate was born.

Climategate mattered because it offered the first solid proof that the scientific establishment wasn’t being altogether honest about man-made global warming. Up until that point, one or two of us had had our suspicions. But this was the breakthrough; the moment when the alarmists were caught red-handed with egg over their face and their trousers down. Someone – to this day, anonymous – had dumped onto the internet a huge cache of documents and correspondence retrieved from the Climatic Research Unit (CRU) at Britain’s University of East Anglia – one of the world’s main gatekeepers of climate science research. Finally, we could discover what the scientists most assiduously promoting the climate change scare narrative were saying to one another behind closed doors.

Many of them were lead authors on the Assessment Reports produced periodically by the United Nations’ Intergovernmental Panel on Climate Change. These were the ‘experts’ on whose word governments were invited to take the radical action apparently necessary to remedy one of the greatest problems the world had ever seen: ‘global warming’, as it was known in the early days.

What the emails suggested was that in private these scientists weren’t nearly so confident about the scale and nature of the threat as they made out in public. Some doubted the reliability of their methodology, such as the various palaeoclimatological proxies (tree rings, etc.) used to estimate temperatures in the distant past. Others worried about the failure of real world temperatures to soar in quite the way their computer models had predicted.

In the aftermath of the scandal, a succession of whitewash enquiries – one, typically, led by a figure so parti pris it was described as ‘like putting Dracula in charge of the bloodbank’ – sought to play down the significance of these exchanges. But this was more than a case of just ordinary decent scientists, being human, expressing reasonable doubt about their field. These were men behaving more like political activists than dispassionate seekers after truth.

They were shown: contriving to destroy inconvenient data in order to evade FOI inquiries; attempting to shut down scientific journals which published studies unhelpful to their cause; viciously bullying dissenters; even trying to rewrite history, for example, to erase the widely recognised Medieval Warming Period.

True, Climategate did not offer definitive proof that the man-made climate scare is fabricated. But it did prove something very nearly as important: that the doom-laden grand narrative about climate change which teachers use to frighten children, which politicians use to justify more taxes and regulations, and which crony capitalists use to say ‘subsidise my planet-saving wind farm’ is based on a prospectus so flimsy that if an insurance salesman tried touting it on your doorstep you’d tell him just where he could shove it.

All that money we throw at ‘combating’ climate change – conservatively estimated a few years back at $1.5 trillion per annum – may well be a total waste. Sure, ‘global warming’ might be a deadly threat – but if we’re going to use the ‘precautionary principle’ excuse, so might lots of things, including alien invasion. Does that mean we should divert two per cent of the global economy towards dotting the planet with anti-alien death lasers, just on the off chance?

If you’d asked me at the time of Climategate whether I’d still be writing about this stuff ten years hence I would have said: ‘No! God, no! The caravan will have moved on by then.’ But it hasn’t, has it? Instead it has accumulated more baggage, more freeloaders. In fact, by some bizarre inversion of logic, the less and less credible the evidence for the great global warming scare, the bigger and noisier and more powerful the Climate Industrial Complex has grown.

Though I did once write a book which psychoanalysed this phenomenon – it’s a mix of ‘follow the money’ greed, self-flagellating Gaia-worship which has filled the gap vacated by Christianity, and puritanical, misanthropic leftist control-freakery – I still find it extraordinary that this craziness has taken such a grip on our culture. Why on earth do we allow the unwashed hippy loons and overindulged trustafarians of Extinction Rebellion to block our streets? How come our chief arbiter of what to think about the environment is now a pig-tailed 16-year-old autistic school drop-out from Sweden who probably got her climate facts from Ice Age 2?

Right now, the struggle against this nonsense seems pretty hopeless. But we sceptics do have at least two things on our side – time and economics. Time is doing us a favour by showing that none of the alarmists’ doomsday predictions are coming to pass. Economics – from the blackouts in South Australia caused by excessive reliance on renewables (aka unreliables) to the current riots and demonstrations taking place from France and the Netherlands to Chile over their governments’ green policies – suggests that common sense will prevail in the end. Bloody hell, though – it’s taking its time, isn’t it?

Sunday, November 10, 2019

Climate change is not all about Carbon Dioxide

Here is an article, "Ancient air challenges prominent explanation for a shift in glacial cycles", by Eric Wolff in Nature.

The message is that there is much more to climate change than CO2 and the Alarmists have it wrong.
-------------------------------------------
During the past 2.6 million years, Earth’s climate has alternated between warm periods known as interglacials, when conditions were similar to those of today, and cold glacials, when ice sheets spread across North America and northern Europe. Before about 1 million years ago, the warm periods recurred every 40,000 years, but after that, the return period lengthened to an average of about 100,000 years. It has often been suggested that a decline in the atmospheric concentration of carbon dioxide was responsible for this fundamental change. Writing in Nature, Yan et al.1 report the first direct measurements of atmospheric CO2 concentrations from more than 1 million years ago. Their data show that, although CO2 levels during glacials stayed well above the lows that occurred during the deep glacials of the past 800,000 years, the maximum CO2 concentrations during interglacials did not decline. The explanation for the change must therefore lie elsewhere.

Understanding what caused the shift in periodicity, known as the mid-Pleistocene transition (MPT), is one of the great challenges of palaeoclimate science. The 40,000-year periodicity that dominated until about 1 million years ago is easily explained, because the tilt of Earth’s spin axis relative to its orbit around the Sun varies between 22.1° and 24.5° with the same period. Lower tilt means less summer sunlight at high latitudes, so before the MPT, low tilts led to cooler summers that promoted the growth and preservation of ice sheets.

But after the MPT, glacial cycles lasted for two to three tilt cycles. Because the pattern of variation in Earth’s orbit and tilt remained unchanged, this implies that the energy needed to lose ice sheets2 had increased. One prominent explanation3 is that atmospheric levels of CO2 were declining, and eventually crossed a threshold value below which the net cooling effect of the decline allowed ice sheets to persist and grow larger.

Ancient air trapped in Antarctic ice can be extracted from cores drilled from the ice sheet, allowing the CO2 concentration to be measured directly, but the ice-core record extends to only 800,000 years ago4. Estimates of CO2 concentrations from earlier periods have been made by measuring the ratio of boron isotopes in shells found in ancient marine sediments5,6. This proxy measurement depends on a chemical equilibrium controlled by ocean acidity, which, in turn, is closely related to the atmospheric CO2 concentration.

But the estimates of CO2 levels inferred from such measurements are necessarily imprecise and must be verified using more-precise, direct measurements. Scientists have therefore formulated plans7 to find and retrieve deep ice cores that reach back to before the MPT (see go.nature.com/33mw4yk). One project has recently been funded by the European Union, and hopes to retrieve million-year-old ice in 2024.

Yan et al. tried another approach to finding similarly old ice, but nearer the surface of Antarctica. In regions known as blue-ice areas, the combination of ice flow against a mountain barrier and surface ice loss by wind scouring and sublimation (transformation of ice directly into water vapour) leads to upwelling of old ice towards the surface. The authors therefore studied two cores, 147 and 191 metres deep, that were drilled to bedrock in the blue-ice region near the Allan Hills in Antarctica (Fig. 1).

The researchers improved and applied a relatively new method8 to date this old ice. The concentration of argon-40 in Earth’s atmosphere is slowly increasing with time as it is produced from the radioactive decay of potassium-40. By measuring the ratios of argon isotopes in air extracted from cores, the age of ice can be determined. The authors also measured the ratios of deuterium (a heavy isotope of hydrogen) to hydrogen in the ice, which can be used as a proxy of temperature at the time the ice was deposited.
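The argon-dating idea described above can be sketched numerically. The atmospheric ⁴⁰Ar/³⁶Ar ratio creeps upward over geological time as potassium-40 in the crust decays, so air trapped long ago is slightly depleted in ⁴⁰Ar relative to modern air, and the size of the depletion indicates the age. The growth rate and sample value below are illustrative assumptions for the sketch, not numbers from Yan and colleagues’ paper.

```python
# Illustrative sketch of argon-isotope dating of trapped air (assumed
# numbers, not the paper's calibration). Atmospheric 40Ar/36Ar rises
# slowly as crustal 40K decays; older trapped air is therefore depleted
# in 40Ar relative to the modern atmosphere.

GROWTH_PER_MYR = 0.066  # assumed linear growth of delta-40Ar, per mil per Myr

def age_from_delta40ar(delta40ar_permil: float) -> float:
    """Return an age in millions of years from the 40Ar depletion of
    trapped air, in per mil relative to modern air (negative values
    mean depleted, i.e. old air)."""
    return -delta40ar_permil / GROWTH_PER_MYR

# A hypothetical sample depleted by 0.10 per mil dates, under these
# assumptions, to roughly 1.5 million years.
print(round(age_from_delta40ar(-0.10), 2))  # ~1.52
```

The key limitation, as the article notes, is precision: with a slow growth rate, small measurement errors in the isotope ratio translate into age uncertainties of order 100,000 years.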

Yan and colleagues concluded that ice in the lowest 30 m of each core is up to 2.7 million years old. However, the uncertainty of 100,000 years in this dating precludes their samples from being matched to particular parts of Earth’s tilt cycle. Moreover, the authors found abrupt age discontinuities with depth in the cores, which suggests that the layers of ice within them have been disturbed. The authors therefore treated the measured concentrations of deuterium and CO2 as snapshots of climate and atmospheric composition that corresponded to an approximate age of the ice, rather than as an ordered time series. On the basis of the deuterium values, they make a plausible case that the observed range of measured CO2 values represents most of the actual glacial–interglacial range.

Unfortunately, in the oldest ice samples, there was evidence that the CO2 concentration had been artificially enhanced by gas produced from the breakdown of organic material at the base of the ice sheet. A few samples from about 2 million years ago were potentially not affected by this issue, but were insufficient in number to allow any conclusions to be drawn about the range of CO2 levels at that time.

However, Yan et al. obtained samples from about 1 million and 1.5 million years ago that they consider to be undisturbed by the artificial addition of CO2. In both periods, the maximum CO2 concentrations are similar to those of interglacials from the past 500,000 years, peaking at 279 parts per million (p.p.m.). But the minimum value of 214 p.p.m. is much higher than the lows of around 180 p.p.m. that occurred during recent glacial maxima (the periods that corresponded to the maximum extent of ice).

The authors conclude that the relationship between CO2 levels and Antarctic temperature was similar before and after the MPT. The fact that the pre-MPT ice does not contain very low ratios of deuterium to hydrogen that would be characteristic of extremely cold Antarctic temperatures, nor low CO2 levels characteristic of recent glacial maxima, is probably just a consequence of the shorter period of the glacial cycles. Such low values are generally not found in the first 40,000 years of post-MPT glacial cycles either.

Although Yan and colleagues’ data points cannot be placed within a tilt cycle, it seems likely that the CO2 concentrations are not very different at the crucial points in cycles when the ice sheet is either lost (before the MPT) or continues growing (after the MPT). This forces us to look elsewhere for the cause of the longer cycles, perhaps refocusing efforts on understanding whether changes to the nature of the ice-sheet bed caused by glacial erosion9 altered the characteristics of the ice sheets and their vulnerability to melting.

Yan and colleagues’ data add much-needed precision to the previously reported estimates of CO2 levels made using data from marine sediments5,6. However, their tantalizing snapshots of the pre-MPT world emphasize the need for a complete, undisturbed time series of greenhouse-gas concentrations that can be put into context with the climate cycles at that time. Let us hope that the planned new ice cores will provide that.