Monday, December 26, 2022

The FBI's downfall - the tip of the iceberg

 Here is Jonathan Turley on the FBI's shameful politicized behavior.

Jonathan Turley is the Shapiro Professor of Public Interest Law at George Washington University.

The primary threat to our Republic is the Government, including the Department of Justice and the FBI. A major secondary threat is the attitude of too many of our citizens who value forcing others to behave "acceptably" instead of valuing freedom first.
--------------------------

When the FBI Attacks Critics as “Conspiracy Theorists,” It’s Time to Reform the Bureau.

Below is my column in the Hill on the need for a new “Church Committee” to investigate and reform the Federal Bureau of Investigation (FBI) after years of scandals involving alleged political bias. In response to criticism over its role in Twitter’s censorship system, the FBI lashed out against critics as “conspiracy theorists” spreading disinformation. However, it still refuses to supply new information on other companies, beyond Twitter, that it has paid to engage in censorship.

Here is the column:

“Conspiracy theorists … feeding the American public misinformation” is a familiar attack line for anyone raising free-speech concerns over the FBI’s role in social media censorship. What is different is that this attack came from the country’s largest law enforcement agency, the FBI — and, since the FBI has made combatting “disinformation” a major focus of its work, the labeling of its critics is particularly menacing.

Fifty years ago, the Watergate scandal provoked a series of events that transformed not only the presidency but federal agencies like the FBI. Americans demanded answers about the involvement of the FBI and other federal agencies in domestic politics. Ultimately, Congress not only investigated the FBI but later impaneled the Church Committee to investigate a host of other abuses by intelligence agencies.

A quick review of recent disclosures and controversies shows ample need for a new Church Committee:
The Russian investigations

The FBI previously was at the center of controversies over documented political bias. Without repeating the long history from the Russian influence scandal, FBI officials like Peter Strzok were fired after emails showed open bias against presidential candidate Donald Trump. The FBI ignored warnings that the so-called Steele dossier, largely funded by the Clinton campaign, was likely used by Russian intelligence to spread disinformation. It continued its investigation despite early refutations of key allegations or discrediting of sources.
Biden family business

The FBI has taken on the character of a Praetorian Guard whenever the Biden family has found itself in a scandal.

For example, there was Hunter Biden’s handgun, acquired by apparently lying on federal forms. In 2018, the gun allegedly was tossed into a trash bin in Wilmington, Del., by Hallie Biden, the widow of Hunter’s late brother, with whom Hunter had a relationship at the time. Secret Service agents reportedly appeared at the gun shop for no apparent reason, and Hunter later said the matter would be handled by the FBI. Nothing was done despite the apparent violation of federal law.

Later, the diary of Hunter’s sister, Ashley, went missing. While the alleged theft normally would be handled as a relatively minor local criminal matter, the FBI launched a major investigation that continued for months to pursue those who acquired the diary, which reportedly contains embarrassing entries involving President Biden. Such a massive FBI deployment shocked many of us, but the FBI built a federal case against those who took possession of the diary.
Targeting Republicans and conservatives

Recently the FBI was flagged for targeting two senior House Intelligence Committee staffers in grand jury subpoenas sent to Google. It has been criticized for using the Jan. 6 Capitol riot investigations to target conservative groups and GOP members of Congress, including seizing the phone of one GOP member.

The FBI also has been criticized for targeting pro-life violence while not showing the same vigor toward pro-choice violence.
Hunter’s laptop

While the FBI was eager to continue the Russian investigations with no clear evidence of collusion, it showed the opposite inclination when given Hunter Biden’s infamous laptop. The laptop would seem to be a target-rich environment for criminal investigators, with photos and emails detailing an array of potential crimes involving foreign transactions, guns, drugs and prostitutes. However, reports indicate that FBI officials moved to quash or slow any investigation.

The computer repairman who acquired the laptop, John Paul Mac Isaac, said he struggled to get the FBI to respond and that agents made thinly veiled threats regarding any disclosures of material related to the Biden family; he said one agent told him that “in their experience, nothing ever happens to people that don’t talk about these things.”
The ‘Twitter Files’

The “Twitter Files” released by Twitter’s new owner, Elon Musk, show as many as 80 agents targeting social-media posters for censorship on the site. This included alleged briefings that Twitter officials said were the reason they spiked the New York Post’s Hunter Biden laptop story before the 2020 election.

The FBI sent 150 messages on back channels to just one Twitter official to flag accounts. One Twitter executive expressed unease over the FBI’s pressure, declaring: “They are probing & pushing everywhere they can (including by whispering to congressional staff).”

We also have learned that Twitter hired a number of retired FBI agents, including former FBI general counsel James Baker, who was a critical and controversial figure in past bureau scandals over political bias.
Attacking critics

It is not clear what is more chilling — the menacing role played by the FBI in Twitter’s censorship program, or its mendacious response to the disclosure of that role. The FBI has issued a series of “nothing-to-see-here” statements regarding the Twitter Files.

In its latest statement, the FBI insists it did not command Twitter to take any specific action when flagging accounts to be censored. Of course, it didn’t have to threaten the company — because we now have an effective state media by consent rather than coercion. Moreover, an FBI warning tends to concentrate the minds of most people without the need for a specific threat.

Finally, the files show that the FBI paid Twitter millions as part of this censorship system — a windfall favorably reported to Baker before he was fired from Twitter by Musk.
Criticizing the FBI is now ‘disinformation’

Responding to the disclosures and criticism, an FBI spokesperson declared: “The men and women of the FBI work every day to protect the American public. It is unfortunate that conspiracy theorists and others are feeding the American public misinformation with the sole purpose of attempting to discredit the agency.”

Arguably, “working every day to protect the American public” need not include censoring the public to protect it from errant or misleading ideas.

However, it is the attack on its critics that is most striking. While the FBI denounced critics of an earlier era as communists and “fellow travelers,” it now uses the same attack narrative to label its critics as “conspiracy theorists.”

After Watergate, there was bipartisan support for reforming the FBI and intelligence agencies. Today, that cacophony of voices has been replaced by crickets, as much of the media imposes another effective blackout on coverage of the Twitter Files. This media silence suggests that the FBI found the “sweet spot” on censorship, supporting the views of the political and media establishment.

As for the rest of us, the FBI now declares us to be part of a disinformation danger which it is committed to stamping out — “conspiracy theorists” misleading the public simply by criticizing the bureau.

Clearly, this is the time for a new Church Committee — and time to reform the FBI.

Friday, December 23, 2022

Some truth about guns and gun laws

 Here is a link to testimony of John Lott before the Subcommittee on Crime, Terrorism, and Homeland Security, Committee on the Judiciary of the United States House of Representatives and the Senate.

JL is a leading expert on the impact of gun laws.

This is a good source if you want to know the truth about guns and gun laws.

The Cancel Culture is alive and well – unfortunately

 Here is Jonathan Turley with yet another example of the unacceptable behavior of the Woke and Cancel Culture denizens.

Scientific American was lost years ago. Now, many other supposedly objective journals have joined it.

I have no respect for those who participate in or encourage wokeness or canceling. I am for freedom, including speech, and for professional journals publishing competent papers no matter what the authors think about things outside their profession.

-------------------------------------

Webb of Lies? Astrophysicist Targeted Due to Study Exonerating James Webb of Being Anti-Gay.

The storied career of James Webb as the second administrator of NASA (responsible for the Apollo missions) led to the naming of the space telescope in his honor. Now, however, he is the subject of a cancel campaign to remove his name after professors accused him of being anti-gay. That cancel campaign now also includes a black astrophysicist, Hakeem Oluseyi, who published a study exonerating Webb. He is reportedly being banned from leading journals after finding no evidence to support the claim. Regardless of the ultimate conclusions that one can reach on the Webb controversy, there should be universal concern over the growing intolerance for opposing views in academic institutions and journals.

The New York Times reported that Oluseyi, of the National Society of Black Physicists, was asked to look into the allegations made by physicist Chanda Prescod-Weinstein of the University of New Hampshire. She joined three other scientists in writing a Scientific American article demanding the renaming of the telescope because Webb “acquiesced to homophobic government policies during the 1950s and 1960s.”

The focus of the objection was the “Lavender Scare,” a period in which homosexual government officials faced intense investigations. President Dwight D. Eisenhower had declared in 1953 that homosexual government officials were a national security threat. That crackdown is the subject of a documentary film.

Many of us are familiar with this terrible period and how homosexual scientists and officials were often unable to serve their country due to the prejudice against their sexual orientation. The powerful movie “The Imitation Game” tells the story of World War II code-breaker and early computer pioneer Alan Turing, who was hounded over his homosexuality. It is a disgraceful chapter in our history.

While media like CNN have reported that Webb “isn’t mentioned in most government records or sources” about these investigations, historian David Johnson noted in 2004 that Webb had met with President Truman on the issue of gay officials when he served in the State Department.

Prescod-Weinstein and her co-authors put forward a petition to change the name. In addition to references to the “Lavender Scare,” they noted one particular case:

Notably, in the case Norton v. Macy, former NASA employee Clifford L. Norton sued for “review of his discharge for ‘immoral conduct’ and for possessing personality traits which render him ‘unsuitable for further Government employment.’”

Even though the Norton v. Macy case rose to prominence in 1969, the actual incident that led to Norton’s dismissal took place in 1963 while James Webb was NASA administrator. Norton was arrested by DC police after having been observed speaking with another man, and was brought in for questioning on suspicion of homosexuality. While at the police station, NASA Security Chief Fugler was summoned to the police station, where he participated in Norton’s interrogation. Upon Norton’s release by DC police, NASA Security Chief Fugler then took Norton to NASA Headquarters, where he continued to interrogate him until the following morning. NASA subsequently fired Norton for suspicion of homosexuality, based on activities he was suspected of conducting during his personal time. We do not know of any consequences for NASA Security Chief Fugler, who conducted an extrajudicial interrogation on federal property.

It was government policy at that time that you could not hold a clearance or work in sensitive areas if you were a homosexual. Some 1700 people signed the petition to remove Webb’s name without any further investigation.

NASA ultimately did conduct an investigation. In October, the agency reported that:

“NASA’s History Office conducted an exhaustive search through currently accessible archives on James Webb and his career,” the agency told CNN in October. “They also talked to experts who previously researched this topic extensively. NASA found no evidence at this point that warrants changing the name of the James Webb Space Telescope.”

That did not sit well with Prescod-Weinstein or others. They ultimately focused their ire on Oluseyi, who was asked to study that history. He said that he was initially “sympathetic” to these claims but that, after researching the actual records, he wrote on Medium that “I can say conclusively that there is zero evidence that Webb is guilty of the allegations against him.”

I can understand that some may contest those findings. However, what followed was a cancel campaign that shifted to those who opposed it, particularly Oluseyi. Prescod-Weinstein insisted that NASA assigned Oluseyi to “impugn” her concerns and to provide a “shield” for Webb.

Others did not care what the investigation found. The New York Times reported that Britain’s Royal Astronomical Society declared that “no astronomer who submits a paper to its journals should type the words ‘James Webb.'” The American Astronomical Society and the publications Nature, New Scientist and Scientific American have also reportedly declared the case closed against Webb.

Oluseyi was soon also tagged and said that he has been unable to get letters published in the journals pointing out the allegedly flawed evidence cited by Prescod-Weinstein and others. So, not only are journals declaring the matter effectively closed, but they will not allow readers to see opposing views.

Even former colleagues publicly denounced Oluseyi. George Mason’s Peter Plavchan, who said that he welcomed Oluseyi to that school as a visiting professor, tweeted a note to Prescod-Weinstein that “I do believe [Oluseyi] owes you and LGBTQ+ astronomers an apology.”

Yet, other academics have raised concerns over this intolerant and anti-intellectual response.

David Johnson teaches history at the University of South Florida and is the author of “The Lavender Scare: The Cold War Persecution of Gays and Lesbians in the Federal Government.” He objected that Prescod-Weinstein and others “ignore the historical context.” He further noted that “Mr. Webb did not lead efforts to oust gays; there was not yet a gay rights movement in 1949; and to apply the term homophobe is to use a word out of time and reflects nothing Mr. Webb is known to have written or said.” He added: “No one in government could stand up at that time and say ‘This is wrong.’ And that includes gay people.”

The campaign to rename the telescope is continuing. Prescod-Weinstein wants it named for Harriet Tubman. She insisted in a CNN interview:

“There are people who have argued that Harriet Tubman wasn’t a ‘real scientist.’ But to do science is to apply rational knowledge of the physical world. Harriet Tubman represents the best of humanity, and we should be sending the best of what we have to offer into the sky.”

Prescod-Weinstein, who publicly identifies as “all #BLACKandSTEM/all Jewish. queer/agender/woman,” has herself been the subject of controversy, with critics noting that she previously argued that antisemitism by Black people is due to the influence of white gentiles: “White Jews adopted whiteness as a social praxis and harmed Black people in the process […] Some Black people have problematically blamed Jewishness for it.”

In the end, being a free speech advocate means that you support all of these figures in this debate. I would be equally opposed to efforts to seek to fire or cancel Dr. Prescod-Weinstein.

This is the type of debate that was once welcomed on campuses and in academic journals. Reasonable minds can disagree on the underlying facts and their meaning. What concerns me is that, despite a division of opinion among academics, there is yet another cancel campaign that will not tolerate opposing views to be voiced.

It is, unfortunately, all too familiar today. Cancel campaigns have become a type of academic credential. It is not enough to disagree with a fellow academic. You must now seek to silence him or fire him. If you believe in a cause, anyone voicing an opposing position is now viewed as intolerable.

In my brief review of these articles, there does not appear to be much in terms of direct evidence against Webb. I am certainly open to allegations based on new evidence, but it is less likely that we will see such discussions after the treatment of Dr. Oluseyi and others who have raised objections.

Wednesday, December 21, 2022

The FBI’s Role in Suppressing the Hunter Laptop Story

 Here is Jonathan Turley on the FBI's politicization in favor of the Democrats.

There is no question that there was election fraud in 2020. The only issue is what kinds. No doubt both the Democrats and Republicans have engaged in election fraud of one kind or another. However, the Democrats have proved far more "effective" at it than the Republicans - hence are far more dangerous.

At this point, it is clear that justice in the US is one-sided and politicized in favor of the Democrats.

The current widespread lack of ethics in our society reflects the active cooperation of many in the Media, Big Tech, and Education - the latter having become, to an alarming extent, an indoctrination establishment for Progressive and other unworthy ideas.

Our freedom and justice already have been substantially damaged by unethical partisans. I don't see it getting any better. Too many voters are uninformed, intolerant, and would like nothing better than to force others to "behave and believe properly".

Here is JT's comment.

---------------------------------------------

“Probing & Pushing Everywhere”: New Twitter Releases Confirm the FBI’s Role in Suppressing the Hunter Laptop Story

Below is my column on Fox.com on the most recent release of Twitter files detailing the FBI’s direct involvement in the targeting and censoring of citizens. The most notable aspect is the effort by the FBI to censor references to the Hunter Biden scandal before the 2020 election. Here is the column:

“They are probing & pushing everywhere.” That line sums up an increasingly alarming element in the seventh installment of the so-called “Twitter files.” “They” were agents of the Federal Bureau of Investigation, and they were pushing for the censorship of citizens in an array of stories.

Writer Michael Shellenberger added critical details on how the FBI was directly engaged in censorship at the company. However, this batch of documents contains a particularly menacing element of the FBI/Twitter censorship alliance. The documents show what Shellenberger described as a concentrated effort “to discredit leaked information about Hunter Biden before and after it was published.”

Twitter has admitted that it made a mistake in blocking the Hunter laptop story. After roughly two years, even media that pushed the false “Russian disinformation” claims have acknowledged that the laptop is authentic.

Yet, those same networks and newspapers are now imposing a new de facto blackout on covering the details of the Twitter files on the systemic blacklisting, shadow banning, and censorship carried out in conjunction with the government.

The references to the new Hunter Biden evidence are also notable for the dates of these backchannel communications. On October 13, weeks before the election, FBI Special Agent Elvis Chan sent 10 documents related to the Biden story to Twitter’s then-head of “Trust & Safety,” Yoel Roth. It was the next day that the New York Post ran its story on the laptop and its incriminating content. The United States government played a key role in trying to bury a story damaging to the Democrats before the election.

The Twitter files now substantiate the earlier allegations of “censorship by surrogate” or proxy. While the First Amendment applies to the government, it can also apply to agents of the government. Twitter itself now admits that it acted as an agent in these efforts.

The current media blackout on the Twitter files story only deepens these concerns. For years, media figures have denied that Twitter was engaging in censorship, blacklisting, shadow banning and other techniques targeting conservatives. The release of the files has shattered those denials. There is simply no further room for censorship apologists.

In a city that relies on “plausible deniability,” there is no longer a plausible space left in the wake of the releases. All that remains is silence — the simple refusal to acknowledge the government-corporate alliance in this massive censorship system.

To cover the story is to admit that the media also followed the same course as Twitter in hampering any discussion of this influence-peddling scandal. Indeed, while the media is now forced to admit that the laptop is authentic, it cannot bring itself to address the authentic emails contained on that laptop. Those emails detail millions of dollars in influence peddling by the Biden family. They also detail the knowledge and involvement of Joe Biden, despite his repeated denial of any knowledge of the deals.

Those files also point to potential criminal acts that some of us have been writing about for two years. The emails are potentially incriminating on crimes ranging from tax violations to gun violations. At the very least, it is a target-rich environment for investigators or prosecutors.

Yet, earlier disclosures showed that key FBI figures tamped down any investigation into the laptop. The latest documents now show the FBI also actively pressured the media to kill the story. That raises deeply troubling questions about the FBI’s politicization. After Watergate, Congress moved aggressively to pursue the use of the bureau by a president for political purposes. There is little call from the media for such an investigation today, when the bureau is accused of working for Democratic rather than Republican interests.

The record of such bias extends beyond the Twitter files. In prior years, FBI agents were found to have shown overt political bias in the handling of FBI investigations. The agency continued to rely on sources like the Steele dossier despite warnings that the Clinton-funded report was likely Russian disinformation. Yet, when it came to Hunter Biden, the FBI reportedly was not interested in aggressively pursuing an investigation while calling on social media companies to censor any discussion of the scandal before the election. It continued to do so despite Twitter executives “repeatedly” indicating there was “very little” Russian activity on the platform.

In January 2020, Twitter’s then director of policy and philanthropy, Carlos Monje Jr., expressed unease over the pressure coming from the FBI, saying: “They are probing & pushing everywhere they can (including by whispering to congressional staff).”

The question is why the FBI would be “probing & pushing everywhere” despite the fact that the Russian investigation had exposed prior bias related to the 2016 election. That was no deterrent to killing a story viewed as damaging to the Biden campaign.

In the end, the government-corporate alliance failed. Despite the refusal of many in the media to cover the Twitter files, nearly two-thirds of voters believe Twitter shadow-banned users and engaged in political censorship during the 2020 election. Seventy percent of voters want new national laws protecting users from corporate censorship.

It is clear that any such reforms should include a full investigation of the FBI and its involvement in censorship efforts. As many as 80 agents reportedly were committed to this effort. It is clear now that, if we are to end censorship by surrogate, the House will have to “probe and push everywhere” in the FBI for answers.

Sunday, December 11, 2022

Judith Curry explains what is wrong with IPCC and climate alarmism

 Here is a link to a video of an interview with Judith Curry - a top climate scientist.

JC makes clear that the widespread push to portray climate change as existential is politics, not science.

Friday, December 02, 2022

“Colorful Fluid Dynamics” and overconfidence in global climate models

 Here is David Young at Judith Curry's blog.

David Young received a PhD in mathematics in 1979 from the University of Colorado-Boulder. After completing graduate school, Dr. Young joined the Boeing Company and has worked on a wide variety of projects involving computational physics, computer programming, and numerical analysis. His work has been focused on the application areas of aerodynamics, aeroelastics, computational fluid dynamics, airframe design, flutter, acoustics, and electromagnetics. To address these applications, he has done original theoretical work in high performance computing, linear potential flow and boundary integral equations, nonlinear potential flow, discretizations for the Navier-Stokes equations, partial differential equations and the finite element method, preconditioning methods for large linear systems, Krylov subspace methods for very large nonlinear systems, design and optimization methods, and iterative methods for highly nonlinear systems.

The moral of his story is (as I see it - and only slightly exaggerated):

  • The Global Climate Models (GCMs) that the alarmists, media, politicians, etc. rely on for their uninformed comments are not accurate, cannot be relied upon, and do not justify climate alarmism.
  • The climate alarmists do not know what they are talking about.
  • Many climate scientists - including some of the GCM creators - do not know what they are talking about - being unaware of the mathematical issues DY discusses.
  • Some climate scientists do not know that the GCMs are unreliable because they are unaware of the mathematical issues DY discusses.
  • Some climate scientists - the ones that do understand the mathematical issues that DY discusses - are dishonest about the GCMs' reliability because it is in their personal interest to be so.
  • Anyone who uses the term "climate denier" in an attack mode is either ignorant about the issues or dishonest.
Here is the article.
----------------------------------------
This post lays out in fairly complete detail some basic facts about Computational Fluid Dynamics (CFD) modeling. This technology is the core of all general circulation models of the atmosphere and oceans, and hence global climate models (GCMs). I discuss some common misconceptions about these models, which lead to overconfidence in these simulations. This situation is related to the replication crisis in science generally, whereby much of the literature is affected by selection and positive results bias.

A full-length version of this article can be found at [ lawsofphysics1 ], including voluminous references. See also this publication [ onera ]

1 Background

Numerical simulation over the last 60 years has come to play a larger and larger role in engineering design and scientific investigations. The level of detail and physical modeling varies greatly, as do the accuracy requirements. For aerodynamic simulations, accurate drag increments between configurations have high value. In climate simulations, a widely used target variable is temperature anomaly. Both drag increments and temperature anomalies are particularly difficult to compute accurately. The reason is simple: both output quantities are several orders of magnitude smaller than the overall absolute levels of momentum for drag or energy for temperature anomalies. This means that without tremendous effort, the output quantity is smaller than the numerical truncation error. Great care can sometimes provide accurate results, but careful numerical control over all aspects of complex simulations is required.
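The difficulty described above can be made concrete with a toy numerical example (the numbers are hypothetical, chosen only to illustrate the effect, not taken from any actual simulation): when the engineering output is a small difference of two large totals, even a modest error in each total produces an enormous relative error in the increment.

```python
# Hypothetical illustration: a drag *increment* is the small difference
# of two large totals, so modest errors in each total translate into a
# huge relative error in the increment the designer actually wants.
true_a = 1000.0              # total force on configuration A (arbitrary units)
true_b = 1000.3              # total force on configuration B
true_inc = true_b - true_a   # 0.3 -- the quantity of interest

err_a, err_b = 0.5, -0.5     # modest absolute errors (~0.05% of each total)
sim_inc = (true_b + err_b) - (true_a + err_a)   # simulated increment: -0.7

rel_err_total = abs(err_a) / true_a                    # 5e-4, i.e. 0.05%
rel_err_inc = abs(sim_inc - true_inc) / abs(true_inc)  # ~3.3, i.e. over 300%
print(rel_err_total, rel_err_inc)
```

A 0.05% error in each total becomes a several-hundred-percent error in the increment, which is why drag increments and temperature anomalies demand such careful numerical control.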

Contrast this with some fields of science where only general understanding is sought. In this case qualitatively interesting results can be easier to provide. This is known in the parlance of the field as “Colorful Fluid Dynamics.” While this is somewhat pejorative, these simulations do have their place. It cannot be stressed too strongly however that even the broad “patterns” can be quite wrong. Only after extensive validation can such simulations be trusted qualitatively, and even then only for the class of problems used in the validation. Such a validation process for one aeronautical CFD code consumed perhaps 50-100 man years of effort in a setting where high quality data was generally available. What is all too common among non-specialists is to conflate the two usage regimes (colorful versus validated) or to make the assumption that realistic looking results imply quantitatively meaningful results.

The first point is that some fields of numerical simulation are very well founded on rigorous mathematical theory. Two that come to mind are electromagnetic scattering and linear structural dynamics. Electromagnetic scattering is governed by Maxwell’s equations which are linear. The theory is well understood, and very good numerical simulations are available. Generally, it is possible to develop accurate methods that provide high quality quantitative results. Structural modeling in the linear elasticity range is also governed by well posed elliptic partial differential equations.

2 Computational Fluid Dynamics

The Earth system with its atmosphere and oceans is much more complex than most engineering simulations, and thus the models are far more complex. However, the heart of any General Circulation Model (GCM) is a “dynamic core” that embodies the Navier-Stokes equations. Primarily, the added complexity is manifested in many subgrid models of high complexity. However, at some fundamental level a GCM is computational fluid dynamics. In fact, GCMs were among the first efforts to solve the Navier-Stokes equations, and many initial problems were solved by the pioneers in the field, such as the removal of sound waves. There is a positive feature of this history in that the methods and codes tend to be optimized quite well within the universe of methods and computers currently used. The downside is that there can be a very high cost to building a new code or inserting a new method into an existing code. In any such effort, even real improvements will at first appear to be inferior to the existing technology. This is a huge impediment to progress and the penetration of more modern methods into the codes.

The best technical argument I have heard in defense of GCMs is that Rossby waves are vastly easier to model than aeronautical flows, where the pressure gradients and forcing can be a lot higher. There is some truth in this argument. The large-scale vortex evolution in the atmosphere on shorter time scales is relatively unaffected by turbulence and viscous effects, even though at finer scales the problem is ill-posed. However, there are many other at least equally important components of the earth system. An important one is tropical convection, a classical ill-posed problem because of the large-scale turbulent interfaces and shear layers. While usually neglected in aeronautical calculations, free-air turbulence is in many cases very large in the atmosphere. However, it is typically neglected outside the boundary layer in GCMs. And of course there are clouds, convection and precipitation, which have a very significant effect on the overall energy balance. One must also bear in mind that aeronautical vehicles are designed to be stable and to minimize the effects of ill-posedness, in that pathological nonlinear behaviors are avoided. In this sense aeronautical flows may actually be easier to model than the atmosphere. In any case aeronautical simulations are greatly simplified by a number of assumptions, for example that the onset flow is steady and essentially free of atmospheric turbulence. Aeronautical flows can often be assumed to be essentially isentropic outside the boundary layer.

As will be argued below, the CFD literature is affected by positive results and selection bias. In the last 20 years, there has been increasing consciousness of and documentation of the strong influence that biased work can have on the scientific literature. It is perhaps best documented in the medical literature where the scientific communities are very large and diverse. These biases must be acknowledged by the community before they can be addressed. Of course, there are strong structural problems in modern science that make this a difficult thing to achieve.

Fluid dynamics is a much more difficult problem than electromagnetic scattering or linear structures. First, many of the problems are ill-posed or nearly so. As is perhaps to be expected with nonlinear systems, there are also often multiple solutions. Even in steady RANS (Reynolds-Averaged Navier-Stokes) simulations there can be sensitivity to initial conditions, numerical details, or gridding. The AIAA Drag Prediction Workshop series has shown the high levels of variability in CFD simulations even in attached, mildly transonic and subsonic flows. These problems are far more common than reported in the literature.

Another problem associated with nonlinearity in the equations is turbulence, basically defined as small-scale fluctuations that have random statistical properties. There is still some debate about whether turbulence is completely represented by accurate solutions to the Navier-Stokes equations, even though most experts believe that it is. But the most critical difficulty is the fact that in most real-life applications the Reynolds number is high or very high. The Reynolds number represents roughly the ratio of inertial forces to viscous forces. One might think that if the viscous forcing were 4 to 7 orders of magnitude smaller than the inertial forcing (as it is, for example, in many aircraft and atmospheric simulations), it could be neglected. Nothing could be further from the truth. The inclusion of these viscous forces often results in an O(1) change in even total forces. Certainly, the effect on smaller quantities like drag is large and critical to successful simulations in most situations. Thus, most CFD simulations are inherently numerically difficult, and simplifications and approximations are required. There is a vast literature on these subjects going back to the introduction of the digital computer; John von Neumann made some of the first forays into understanding the behavior of discrete approximations.
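To make the magnitudes concrete, the Reynolds number Re = ρUL/μ can be estimated for a wing and for large-scale atmospheric motion. The numbers below are ballpark assumptions chosen for illustration, not values from the text:

```python
# Rough, illustrative Reynolds-number estimates.  All values are
# ballpark assumptions (density, speed, length scale, viscosity),
# not taken from the text.

def reynolds(rho, U, L, mu):
    """Re = rho * U * L / mu: roughly inertial over viscous forces."""
    return rho * U * L / mu

# Transport-aircraft wing at cruise-like speed near sea-level density.
re_wing = reynolds(rho=1.2, U=250.0, L=5.0, mu=1.8e-5)

# Large-scale atmospheric motion: slow, but enormous length scales.
re_atmos = reynolds(rho=1.2, U=10.0, L=1.0e6, mu=1.8e-5)

print(f"wing Re  ~ {re_wing:.1e}")   # tens of millions
print(f"atmos Re ~ {re_atmos:.1e}")  # hundreds of billions
```

Both values land in the "high or very high" regime described above, where viscous forces are many orders of magnitude smaller than inertial forces yet cannot be neglected.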

The discrete problem sizes required for modeling fluid flows by resolving all the relevant scales grow as Reynolds number to the power 9/4 in the general case, assuming second-order numerical discretizations. Computational effort grows at least linearly with discrete problem size multiplied by the number of time steps. Time steps must also decrease as the spatial grid is refined, because of the stability requirements of the Courant-Friedrichs-Lewy condition as well as to control time discretization errors. The number of time steps grows as Reynolds number to the power 3/4, so overall computational effort grows with Reynolds number to the power 3. Thus, for almost all problems of practical interest, it is computationally impossible (and will be for the foreseeable future) to resolve all the important scales of the flow, and so one must resort to subgrid models of fluctuations not resolved by the grid. For many idealized engineering problems, turbulence is the primary effect that must be so modeled. In GCMs there are many more, such as clouds. References are given in the full paper for some other views that may not fully agree with the one presented here, in order to give people a feel for the range of opinion in the field.
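The scaling argument can be spelled out as back-of-the-envelope arithmetic, using the exponents quoted above (the specific Reynolds numbers are arbitrary illustrative choices):

```python
# Back-of-the-envelope cost scaling for a fully resolved simulation,
# using the exponents quoted in the text:
#   grid points ~ Re^(9/4),  time steps ~ Re^(3/4),  work ~ Re^3.
# The Reynolds numbers below are arbitrary illustrative choices.

def resolved_cost(Re):
    grid_points = Re ** (9.0 / 4.0)
    time_steps = Re ** (3.0 / 4.0)
    return grid_points * time_steps  # scales as Re^3

low, high = 1.0e4, 1.0e8  # a modest Re vs. a flight-scale Re
ratio = resolved_cost(high) / resolved_cost(low)
print(f"cost ratio ~ {ratio:.0e}")  # (1e8 / 1e4)^3 = 1e12
```

A 10,000-fold increase in Reynolds number costs a trillion-fold increase in work, which is why subgrid modeling is unavoidable.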

For modeling the atmosphere, the difficulties are immense. The Reynolds numbers are high and the turbulence levels are large but highly variable. Many of the supposedly small effects must be neglected based on scientific judgment. There are also large energy flows, evaporation, precipitation, and clouds, all of which are ignored in virtually all aerodynamic simulations, for example. Ocean models require different methods, as the ocean is essentially incompressible. This in some sense simplifies the underlying Navier-Stokes equations but adds mathematical difficulties.

2.1 The Role of Numerical Errors in CFD

Generally, the results of many steady-state aeronautical CFD simulations are reproducible and reliable for thin boundary and shear layer dominated flows, assuming little flow separation and subsonic conditions. There are now a few codes that are capable of demonstrating grid convergence for the simpler geometries or lower Reynolds numbers. However, many of these simulations make many simplifying assumptions, and uncertainty is much larger for separated or transonic flows.

The contrast with climate models speaks for itself. Typical grid spacings in climate models often exceed 100 km, and their vertical grid resolution is almost certainly inadequate. Further, many of the models use spectral methods that are not fully stable, and various forms of filtering are used to remove undesirable oscillations. Finally, the many subgrid models are solved sequentially, adding another source of numerical errors and making tuning problematic.

2.2 The Role of Turbulence and Chaos in Fluid Mechanics

In this section I describe some well-verified science from fluid mechanics that governs all Navier-Stokes simulations and that must inform any non-trivial discussion of weather or climate models. One of the problems in climate science is a lack of fundamental understanding of these basic conclusions of fluid mechanics or (as may be the case for some) a reluctance to discuss the consequences of this science.

Turbulence models have advanced tremendously in the last 50 years, yet climate models do not use the latest of these models, so far as I can tell. Further, for large-scale vortical 3D flow, turbulence models are quite inadequate. Nonetheless, proper modeling of turbulence by solving auxiliary differential equations is critical to achieving reasonable accuracy.

To give just one fundamental problem that is a showstopper at the moment: how does one control numerical error in any time accurate eddy resolving simulation? Classical methods fail. How can one tune such a model? One can tune it for a given grid and initial condition, but that tuning might fail on a finer grid or with different initial conditions. This problem is just now beginning to be explored and is of critical importance for predicting climate or any other chaotic flow.

When truncation errors are significant (as they are in most practical fluid dynamics simulations particularly climate simulations), there is a constant danger of “overtuning” subgrid models, discretization parameters or the hundreds of other parameters. The problem here is that tuning a simulation for a few particular cases too accurately is really just getting large errors to cancel for these cases. Thus skill will actually be worse for cases outside the tuning set. In climate models the truncation errors are particularly large and computation costs too high to permit systematic study of the size of the various errors. Thus tuning is problematic.

2.3 Time Accurate Calculations – A Panacea?

All turbulent flows are time dependent and there is no true steady state. However, using Reynolds averaging, one can separate the flow field into a steady component and a hopefully small component consisting of the unsteady fluctuations. The unsteady component can then be modeled in various ways. The larger the truly unsteady component is, the more challenging the modeling problem becomes.
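The Reynolds decomposition can be sketched on a synthetic signal (purely illustrative; the signal shape and values are assumptions, not a real flow field): the velocity u(t) is split into a mean and a fluctuation whose time average vanishes.

```python
import math

# Reynolds decomposition on a synthetic "velocity" signal:
#   u(t) = U_mean + u'(t), where the fluctuation u' averages to zero.
# The signal is an illustrative assumption, not a real flow field.

n = 10000
dt = 0.001
u = [5.0 + 0.5 * math.sin(200.0 * math.pi * k * dt) for k in range(n)]

u_mean = sum(u) / n                  # the "steady" component
u_fluct = [uk - u_mean for uk in u]  # the unsteady fluctuation

mean_fluct = sum(u_fluct) / n
print(f"mean = {u_mean:.3f}, mean of fluctuation = {mean_fluct:.2e}")
```

In RANS modeling only the statistical effect of u' on the mean flow is modeled; the larger the fluctuating part is relative to the mean, the harder that modeling problem becomes.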

One might be tempted to always treat the problem as a time dependent problem. This has several challenges, however. At least in principle (but not always in practice) one should be able to use conventional numerical consistency checks in the steady state case. For example, one can check grid convergence, calculate sensitivities for parameters cheaply using linearizations, and use the residual as a measure of reliability. For the Navier-Stokes equations, there is no rigorous proof that the infinite grid limit exists or is unique. In fact, there is strong evidence for multiple solutions, some corresponding to states seen in testing, and others not. All these conveniences are either inapplicable to time accurate simulations or are much more difficult to assess.

Time accurate simulations are also challenging because the numerical errors are in some sense cumulative, i.e., an error at a given time step will be propagated to all subsequent time steps. Generally, some kind of stability of the underlying continuous problem is required to achieve convergence. Likewise a stable numerical scheme is helpful.
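The cumulative nature of time-stepping error can be seen in the simplest possible setting (a sketch using forward Euler on y' = y, chosen for clarity rather than as a fluid problem): local errors committed at each step accumulate into a global error that, for a first-order method, shrinks only linearly with the step size.

```python
import math

# Forward Euler on y' = y, y(0) = 1, integrated to t = 1.
# Local truncation errors accumulate across all steps into a
# global error that is O(dt) for this first-order scheme.

def euler_error(dt):
    n = round(1.0 / dt)
    y = 1.0
    for _ in range(n):
        y += dt * y          # one forward-Euler step
    return abs(math.e - y)   # global error at t = 1

e1 = euler_error(0.01)
e2 = euler_error(0.005)
print(f"error(dt=0.01)  = {e1:.5f}")
print(f"error(dt=0.005) = {e2:.5f}")
print(f"ratio ~ {e1 / e2:.2f}")  # close to 2 for a first-order method
```

Here the underlying problem is stable and well-posed, so the accumulated error stays controlled and halving the step roughly halves it; the point of the following paragraphs is that for chaotic problems this comfortable picture breaks down.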

For any chaotic time accurate simulation, classical methods of numerical error control fail. Because the initial value problem is ill-posed, the adjoint diverges. This is a truly daunting problem. We know numerical errors are cumulative and can grow nonlinearly, but our usual methods are completely inapplicable.

For chaotic systems, the main argument that I have heard for time accurate simulations being meaningful is “at least there is an attractor.” The thinking is that if the attractor is sufficiently attractive, then errors in the solution will die off or at least remain bounded and not materially affect the time average solution or even the “climate” of the solution. The solution at any given time may be wildly inaccurate in detail, as Lorenz discovered, but the climate will (according to this argument) be correct. At least this is an argument that can be developed and eventually quantified and proven or disproven. Paul Williams has a nice example of the large effect of the time step on the climate of the Lorenz system. Evidence is emerging of a similar effect due to spatial grid resolution for time accurate Large Eddy Simulations, along with a disturbing lack of grid convergence. Further, the attractor may be only slightly attractive, and there will be bifurcation points and saddle points as well. And the attractor can be of very high dimension, meaning that tracing out all its parts could be a computationally monumental if not impossible task. So far, the bounds on attractor dimension are very large. My suggestion would be to develop and fund a large long-term research effort in this area with the best minds in the field of nonlinear theory. Theoretical understanding may not be adequate at the present time to address it computationally. There is some interesting work by Wang at MIT on shadowing that may eventually be computationally feasible and could address some of the stability issues for the long-term climate of the attractor. For the special case of periodic or nearly periodic flows, another approach that is more computationally tractable is windowing. This problem of time accurate simulations of chaotic systems seems to me to be a very important unsolved question in fundamental science and mathematics, and one with tremendous potential impact across many fields.
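Lorenz's sensitivity to initial conditions, referenced above, is easy to reproduce. The sketch below integrates the classical Lorenz-63 system with its standard parameters; the perturbation size and integration length are arbitrary illustrative choices:

```python
# Sensitivity to initial conditions in the classical Lorenz-63 system
# (sigma = 10, rho = 28, beta = 8/3).  Two trajectories starting 1e-8
# apart diverge to the size of the attractor, even though their time
# averages ("climate") may remain comparable.

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    def shift(a, b, c):  # a + c * b, componentwise
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(shift(s, k1, dt / 2))
    k3 = lorenz_rhs(shift(s, k2, dt / 2))
    k4 = lorenz_rhs(shift(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

dt, n = 0.01, 4000  # integrate to t = 40
a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
max_sep = 0.0
for _ in range(n):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    sep = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)
print(f"max separation over t in [0, 40]: {max_sep:.2f}")
```

The separation grows from 1e-8 to the diameter of the attractor, which is exactly why pointwise accuracy is hopeless and only attractor statistics can even be argued about.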

While climate modelers Palmer and Stevens’ 2019 short perspective note (see full paper for the reference) is an excellent contribution by two unusually honest scientists, there is in my opinion reason for skepticism about their proposal to make climate models into eddy resolving simulations. Their assessment of climate models is in my view mostly correct and agrees with the thrust of this post, but there are a host of theoretical issues to be resolved before casting our lot with largely unexplored simulation methods that face serious theoretical challenges. Dramatic increases in resolution are obviously sorely needed in climate models and dramatic improvements may be possible in subgrid models once resolution is improved. Just as an example, modern PDE based models may make a significant difference. I don’t think anyone knows the outcomes of these various steps toward improvement.

3 The “Laws of Physics”

The “laws of physics” are usually thought of as conservation laws, the most important being conservation of mass, momentum, and energy. The conservation laws with appropriate source terms for fluids are the Navier-Stokes equations. These equations correctly represent the local conservation laws and offer the possibility of numerical simulations. This is expanded on in the full paper.

3.1 Initial Value Problem or Boundary Value Problem?

One often hears that “the climate of the attractor is a boundary value problem” and therefore it is predictable. This is nothing but an assertion with little to back it up. And of course, even assuming that the attractor is regular enough to be predictable, there is the separate question of whether it is computable with finite computing time. It is similar to the folk doctrine that turbulence models convert an ill-posed time dependent problem into a well posed steady state one. This doctrine has been proven to be wrong – as the prevalence of multiple solutions discussed above shows. However, those who are engaged in selling CFD have found it attractive despite its unscientific and effectively unverifiable nature.

A simple analogy for the climate system might be a wing as Nick Stokes has suggested. As pointed out above, the drag for a well-designed wing is in some ways a good analogy for the temperature anomaly of the climate system. The climate may respond linearly to changes in forcings over a narrow range. But that tells us little. To be useful, one must know the rate of response and the value (the value of temperature is important for example for ice sheet response). These are strongly dependent on details of the dynamics of the climate system through nonlinear feedbacks.

Many use this analogy to try to transfer the (not fully deserved) credibility of CFD simulations of simple systems to climate models or other complex separated-flow simulations. This implication is not correct. In any case, even simple aeronautical simulations can have very high uncertainty when used to simulate challenging flows.

3.2 Turbulence and Subgrid Models

Subgrid turbulence models have advanced tremendously over the last 50 years. The subgrid models must modify the Navier-Stokes equations if they are to have the needed effect. Turbulence models typically modify the true fluid viscosity by dramatically increasing it in certain parts of the flow, e.g., a boundary layer. The problem here is that these changes are not really based on the “laws of physics”, and certainly not on the conservation laws. The models are typically based on assumed relationships that are suggested by limited sets of test data or by simply fitting available test data. They tend to be very highly nonlinear and typically make an O(1) difference in the total forces. As one might guess, this area is one where controversy is rife. Most would characterize this as a very challenging problem, in fact one that will probably never be completely solved, so further research and controversy is a good thing.
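To make the "dramatically increasing" of viscosity concrete, here is a sketch of Prandtl's classical mixing-length model in the logarithmic region of a boundary layer (the friction velocity and wall distances are illustrative assumptions): the eddy viscosity ν_t = (κy)² |du/dy| can exceed the molecular viscosity by orders of magnitude.

```python
# Prandtl mixing-length eddy viscosity in a boundary layer:
#   nu_t = (kappa * y)^2 * |du/dy|
# compared with the molecular kinematic viscosity of air.  In the
# log layer du/dy = u_tau / (kappa * y), so nu_t = kappa * y * u_tau.
# The friction velocity and wall distances are illustrative assumptions.

kappa = 0.41      # von Karman constant
nu_air = 1.5e-5   # molecular kinematic viscosity of air, m^2/s
u_tau = 1.0       # friction velocity, m/s (assumed)

def eddy_viscosity(y):
    """Mixing-length nu_t at wall distance y in the log layer."""
    dudy = u_tau / (kappa * y)
    return (kappa * y) ** 2 * abs(dudy)

for y in (0.001, 0.01, 0.1):  # wall distance in meters
    print(f"y = {y:5.3f} m: nu_t / nu = {eddy_viscosity(y) / nu_air:10.1f}")
```

An O(100)-to-O(1000) amplification of the effective viscosity is the kind of modification, not derived from any conservation law, that the text is describing.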

Negative results about subgrid models have begun to appear. One recent paper shows that cloud microphysics models have parameters that are not well constrained by data. Using plausible values, ECS (equilibrium climate sensitivity) can be “engineered” over a significant range. Another interesting result shows that model results can depend strongly on the order chosen to solve the numerous subgrid models in a given cell. In fact, the subgrid models should be solved simultaneously so that any tuning is more independent of numerical details of the methods used. This is a fundamental principle of using such models and is the only way to ensure that tuning is meaningful. Indeed, many metrics for skill are poorly replicated by current generation climate models, particularly regional precipitation changes, cloud fraction as a function of latitude, Total Lower Troposphere temperature changes compared to radiosondes and satellite derived values, tropical convection aggregation and Sea Surface Temperature changes, just to name a few. This lack of skill for SST changes seems to be a reason why GCM model-derived ECS is inconsistent with observationally constrained energy balance methods.

Given the large grid spacings used in climate models, this is not surprising. Truncation errors are almost certainly larger than the changes in energy flows that are being modeled. In this situation, skill is to be expected only on those metrics involved in tuning (either conscious or subconscious) or metrics closely associated with them. In layman’s terms, those metrics used in tuning come into alignment with the data only because of cancellation of errors.

One can make a plausible argument for why models do a reasonable job of replicating the global average surface temperature anomaly. The models are mostly tuned to match top of atmosphere radiation balance. If their ocean heat uptake is also consistent with reality (and it seems to be pretty close) and if the models conserve energy, one would expect the average temperature to be roughly right even if it is not explicitly used for tuning. However, this apparent skill does not mean that other outputs will also be skillful.

This problem of inadequate tuning and unconscious bias plagues all application areas of CFD. A typical situation involves a decades long campaign of attempts to apply a customer’s favorite code to an application problem (or small class of problems). Over the course of this campaign many, many combinations of gridding and other parameters are “tried” until an acceptable result is achieved. The more challenging issue of establishing the limitations of this acceptable “accuracy” for different types of flows is often neglected because of lack of resources. Thus, the cancellation of large numerical errors is never quantified and remains hidden, waiting to emerge when a more challenging problem is attempted.

3.3 Overconfidence and Bias

As time passes, the seriousness of the bias issue in science continues to be better documented and understood. One recent example quotes a researcher as saying “Loose scientific methods are leading to a massive false positive bias in the literature.” Another study states:

“Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science.”

In less scholarly settings, these results are typically met with various forms of rationalization. Often we are told that “the fundamentals are secure” or “my field is different” or “this affects only the medical fields.” To those in the field, however, it is obvious that strong positive bias affects the Computational Fluid Dynamics literature for the reasons described above and that practitioners are often overconfident.

This overconfidence in the codes and methods suits the perceived self-interest of those applying the codes (and for a while suited the interests of the code developers and researchers), as it provides funding to continue development and application of the models to ever more challenging problems. Recently, this confluence of interests has been altered by an unforeseen consequence, namely laymen who determine funding have come to believe that CFD is a solved problem and hence have dramatically reduced the funding stream for fundamental development of new methods and also for new theoretical research. This conclusion is an easy one for outsiders to reach given the CFD literature, where positive results predominate even though we know the models are just wrong both locally and globally for large classes of flows, for example strongly separated flows. Unfortunately, this problem of bias is not limited to CFD, but I believe is common in many other fields that use CFD modeling as well.

Another rationalization used to justify confidence in models is the appeal to the “laws of physics” discussed above. These appeals, however, omit a very important source of uncertainty and seem to provide a patina of certainty covering a far more complex reality.

Another corollary of the doctrine of the “laws of physics” is the idea that “more physics” must be better. Thus, simple models that ignore some feedbacks or terms in the equations are often maligned. This doctrine also suits the interest of some in the community, i.e., those working on more complex and costly simulations. It is also a favored tactic of Colorful Fluid Dynamics to portray the ultimately accurate simulation as just around the corner if we get all the “physics” included and use a sufficiently massive parallel computer. This view is not an obvious one when critically examined. It is widely held however among both people who run and use CFD results and those who fund CFD.

3.4 Further Research

So what is the future of such simulations and GCMs? As attempts are made to use them in areas where public health and safety are at stake, estimating uncertainty will become increasingly important. Items deserving attention in my opinion are discussed in some detail in the full paper, posted here on Climate Etc. I would argue that the most important elements needing attention, both in CFD and in climate and weather modeling, are new theoretical work and insights and the development of more accurate data. The latter work is not glamorous and the former can entail career risks. These are hard problems, and in many cases a particular line of inquiry will not yield anything really new.

The dangers to be combatted include:
- It is critical to realize that the literature is biased and that replication failures are often not published.
- We really need to escape from the elliptic boundary value problem (well-posed) mental model that is held by so many with a passing familiarity with the issues. A variant of this mental model one encounters in the climate world is the doctrine of “converting an initial value problem to a boundary value problem.” This just confuses the issue, which is really about the attractor and its properties. The methods developed for well-posed elliptic problems have been pursued about as far as they will take us. This mental model can result in dramatic overconfidence in models in CFD.
- A corollary of the “boundary value problem” misnomer is the “if I run the model right, the answer will be right” mental model. This is patently false and even dangerous; however, it gratifies egos and aids in marketing.

4 Conclusion

I have tried to lay out in summary form some of the issues with high Reynolds number fluid simulations and to highlight the problem of overconfidence as well as some avenues to try to fundamentally advance our understanding. Laymen need to be aware of the typical tactics of the dark arts of “Colorful Fluid Dynamics” and “science communication.” It is critical to realize that much of the literature is affected by selection and positive results bias. This is something that most will admit privately, but is almost never publicly discussed.

How does this bias come about? An all too common scenario is for a researcher to have developed a new code or a new feature of an old code, or to be trying to apply an existing code or method to a particular test case of interest to a customer. The first step is to find some data that is publicly available or obtain customer-supplied data. Many of the older and well-documented experiments involve flows that are not tremendously challenging. One then runs the code or model (adjusting grid strategies, discretization and solver methodologies, and turbulence model parameters or methods) until the results match the data reasonably well. Then the work often stops (in many cases because of lack of funding or lack of incentives to draw more scientifically balanced conclusions) and is published. The often large number of runs with different parameters that provided less convincing results are explained away as due to “bad gridding,” “inadequate parameter tuning,” “my inexperience in running the code,” etc. The supply of witches to be burned is seemingly endless. These rationalizations are usually quite honest and sincerely believed, but biased. They are based on a cultural bias that if the model is “run right” then the results will be right, if not quantitatively, then at least qualitatively. As we saw above, those who develop the models themselves know this to be incorrect, as do those responsible for using the simulations where public safety is at stake. As a last resort one can always point to any deficiencies in the data or, for the more brazen, simply claim the data is wrong since it disagrees with the simulation. The far more interesting and valuable questions about robustness and uncertainty or even structural instability in the results are often neglected. One logical conclusion to be drawn from the perspective by Palmer and Stevens calling for eddy resolving climate models is that the world of GCMs is little better. However, this paper is a hopeful sign of a desire to improve and is to be strongly commended.

This may seem a cynical view, but it is unfortunately based on practices in the pressure filled research environment that are all too common. There is tremendous pressure to produce “good” results to keep the funding stream alive, as those in the field well know. Just as reported in medically related fields, replication efforts for CFD have often been unsuccessful, but almost always go unpublished because of the lack of incentives to do so. It is sad to have to add that in some cases, senior people in the field can suppress negative results. Some way needs to be found to provide incentives for honest and objective replication efforts and publishing those findings regardless of the opinions of the authors of the method. Priorities somehow need to be realigned toward more scientifically valuable information about robustness and stability of results and addressing uncertainty.

However, I see some promising signs of progress in science. In medicine, recent work shows that reforms can have dramatic effects in improving the quality of the literature. There is a growing recognition of the replication crisis generally and the need to take action to prevent science’s reputation with the public from being irreparably damaged. As simulations move into the arena affecting public safety and health, there will be hopefully increasing scrutiny, healthy skepticism, and more honesty. Palmer and Stevens’ recent paper is an important (and difficult in the politically charged climate field) step forward on a long and difficult road to improved science.

In my opinion those who retard progress in CFD are often involved in “science communication” and “Colorful Fluid Dynamics.” They sometimes view their job as justifying political outcomes by whitewashing high levels of uncertainty and bias or making the story good click bait by exaggerating. Worse still, many act as apologists for “science” or senior researchers and tend to minimize any problems. Nothing could be more effective in producing the exact opposite of the desired outcome, viz., a cynical and disillusioned public already tired of the seemingly endless scary stories about dire consequences often based on nothing more than the pseudo-science of “science communication” of politically motivated narratives. This effect has already played out in medicine where the public and many physicians are already quite skeptical of health advice based on retrospective studies, biased reporting, or slick advertising claiming vague but huge benefits for products or procedures. Unfortunately, bad medical science continues to affect the health of millions and wastes untold billions of dollars. The mechanisms for quantifying the state of the science on any topic, and particularly estimating the often high uncertainties, are very weak. As always in human affairs, complete honesty and directness is the best long term strategy. Particularly for science, which tends to hold itself up as having high authority, the danger is in my view worth addressing urgently. This response is demanded not just by concerns about public perceptions, but also by ethical considerations and simple honesty as well as a regard for the lives and well-being of the consumers of our work who deserve the best information available.

Saturday, November 26, 2022

Disparity Doesn’t Necessarily Imply Racism

 Roland Fryer in the Wall Street Journal.

Mr. Fryer is a professor of economics at Harvard, a fellow at the Manhattan Institute, and founder of Equal Opportunity Ventures.

I suspect that Mr. Fryer is correct. I also fear that the current divisiveness in our society may lead to the kind of racism and similar bias among other groups that Mr. Fryer tested for.

Here is his article.
---------------------------------------
I was raised, in part, by my paternal grandmother—a phenomenal black woman born in 1925 who came of age during Jim Crow, attended Bethune-Cookman University in the early 1940s, and experienced both the promise and limitations of the civil-rights era when integrating schools in Florida in 1969. She did her best to teach sixth-graders subject-verb agreement minutes after being spat on by their parents. Her life’s journey provided unlimited content as we sat together for nearly three decades, stuck to the plastic slipcovers on her sofa, playing cards, drinking sweet tea, and talking uninhibitedly about race in America.

The first discussion I can remember happened in 1988, when I was 11, after a visit to McDonald’s. After ordering, my grandmother paid with a crisp $20 bill from her pocketbook, and the cashier put the change directly on the counter. When we got to the parking lot, she was incensed. “You see that? White woman didn’t want to touch me.” I had noticed it too, but thought the cashier was being nice, trying to avoid passing on her own germs.

My grandmother—no doubt based in part on her experiences—saw racism everywhere, in every inequity, every statistic. Racial differences in wages? Racism. Racial differences in educational achievement? Racism. Racial differences in teen birth rates? Racism. This sort of casual empiricism—which has crept back into mainstream media and other institutions—was a competitive sport among my family and friends. Did you see the way that white woman tightened her grip on her purse, because I was behind her? Does this guy follow everyone around the store?

A decade after the McDonald’s incident, in graduate school, I read a 1995 paper titled “The Role of Premarket Factors in Black-White Wage Differences.” Using a nationally representative sample of more than 12,000 14- to 17-year-olds from 1979, Derek A. Neal and William R. Johnson estimated that blacks earned between 35% and 45% less than whites on average.

They examined how much of the wage gap could be attributed to present-day discrimination in the workplace versus differences in skill as measured by the Armed Forces Qualification Test, a cousin of the ACT and SAT. Importantly, they weren’t trying to measure the effect of America’s racist history running back to the early 17th century, when the first African slaves were brought to work the tobacco farms of Virginia, only the extent to which blacks earn lower incomes because employers today make racially discriminatory decisions. Nor did they focus on prejudice more broadly: They ignored that my grandmother was spat on in the parking lot at work and concentrated on whether she was treated fairly once she entered the building.

“We find,” they wrote in the abstract of their paper, “that this one test score explains all of the black-white wage gap for young women and much of the gap for young men.” With their approach, antiblack bias played no role in the divergent wages among women; a black woman with the same qualifications as a white woman made slightly more money. And it accounted for at most 29% of the racial difference among men, with 71% traceable to disparate performance on the AFQT. The AFQT itself was evaluated by the Pentagon, which found that black and white military recruits with similar AFQT scores performed similarly on the job—indicating no racial bias.

The paper felt like an attack on what I knew. An assault on all those conversations with my grandmother, which taught me that racism—present-tense racism—dictated black-white inequality. I told myself that Messrs. Neal and Johnson, both of whom were white, were probably bigots, and I set out on a mission to disprove their work.

I vented about my battle with Messrs. Neal and Johnson to a fellow graduate student at Penn State, a white guy from the cornfields of Southern Illinois. He was no more at home at a top-25 economics doctoral program than I was, and we spent a fair amount of time together during our first year, staring at Euler equations in our favorite textbook, “Recursive Methods in Economic Dynamics.”

I told him I was sure discrimination was a bigger factor than Messrs. Neal and Johnson were letting on, but “I just can’t get this data to cooperate.” He asked why I was so convinced, and I erupted in a rant about the prevalence of racism and recognizing bigots on sight. My grandmother would’ve nodded rhythmically along. My friend responded with a burst of loud, sharp laughter in my face.

He pointed out how far I was straying from our Euler equations. How on any subject other than race, I would have never given in to such sloppy thinking. The double identity—a classically trained economist taught to tease out causal relationships and a black Southern boy taught that discrimination is ubiquitous—had lived seamlessly inside me until that moment. Messrs. Neal and Johnson, as it turns out, aren’t bigots, and their conclusions have stood the test of time and my attempts to disprove them. I extended their analysis to unemployment, teen pregnancy, incarceration and other outcomes—all of which follow the same pattern. Moreover, the relationship between skills and wages has been confirmed by study after study, even when using different data and methods. Kevin Lang, an economist at Boston University, corrects important issues in Messrs. Neal and Johnson’s work but finds the same relationship between AFQT scores and wages. Taken together, an honest review of the evidence suggests that current racial inequities are more a result of differences in skill than differences in treatment of those with the same skill.

I write this with some degree of trepidation, in part because I still have my grandmother in my ear and in part because I am keenly aware of the harm in underestimating bias. But there is also a cost to overemphasizing its impact. A black kid who believes he will face daunting societal obstacles is likely to underinvest in trying to climb society’s rungs. Every black student in the country needs to know that his return on investment in education is, if anything, higher than for white students.

The solution is neither to stop fighting biased behavior nor to curb honest inquiry about race in America. We shouldn’t stop searching for and penalizing discriminatory employers, or trying to reduce racial differences in police brutality, or estimating whether the value of a home appraisal depends on the race of the homeowner, or reducing bias in bail decisions by using artificial intelligence. I could go on, like the conversations stuck to those slipcovers. The solution isn’t to look away from discrimination. It does exist. But we also can’t point at every gap in outcomes and instantly conclude it’s racism. Prejudice must be measured rigorously. Statistically. Disparity doesn’t necessarily imply racism. It may feel omnipresent, but it isn’t all-powerful. Skills matter most.

Sunday, November 20, 2022

The Associated Press loses its credibility

 Jonathan Turley shows why the Associated Press has lost its credibility.

Voting fraud is not the only way to make an election dishonest. Media and Tech cooperation to bury relevant facts, including facts about candidates, accomplishes the same thing. In my view, the latter has been far more serious than the former in distorting election results.

JT's example is only the tip of the iceberg.

The many members of the Educational Establishment, who have been largely responsible for the destruction of personal and professional standards in the media and elsewhere, should be ashamed of themselves - but of course they are not - rather, they are gloating about their success. Shame, shame, shame.

Here is JT's comment.

------------------------------------------------

For those of us who have written about the Hunter Biden scandal and the family’s influence-peddling operation for years, it is routine to read media stories denying the facts or dismissing calls to investigate the foreign dealings. However, this weekend, the Associated Press made a whopper of a claim that there is no evidence even suggesting that President Joe Biden ever spoke to his son about his foreign dealings. I previously discussed how the Bidens have succeeded in a Houdini-like trick in making this elephant of a scandal disappear from the public stage. They did so by enlisting the media in the illusion. However, this level of audience participation in the trick truly defies belief.

The statement of the Associated Press at this stage of the scandal is breathtaking but telling: “Joe Biden has said he’s never spoken to his son about his foreign business, and nothing the Republicans have put forth suggests otherwise.”

For years, the media has continued to report President Biden’s repeated claim that “I have never spoken to my son about his overseas business dealings.” At the outset, the media only had to suspend any disbelief that the president could fly to China as Vice President with his son on Air Force 2 without discussing his planned business dealings on the trip.

Of course, the emails on the laptop quickly refuted this claim. However, the media buried the laptop story before the election or pushed the false claim that it was fake Russian disinformation.

President Biden’s denials continued even after an audiotape surfaced of him leaving a message for Hunter that specifically discussed coverage of those dealings:

“Hey pal, it’s Dad. It’s 8:15 on Wednesday night. If you get a chance just give me a call. Nothing urgent. I just wanted to talk to you. I thought the article released online, it’s going to be printed tomorrow in the Times, was good. I think you’re clear. And anyway if you get a chance give me a call, I love you.”

But who are you going to believe, the media or your own ears?

Some of us have written for two years that Biden’s denial of knowledge is patently false. It was equally evident that the Biden family was selling influence and access.

There are emails of Ukrainian and other foreign clients thanking Hunter Biden for arranging meetings with his father. There are photos from dinners and meetings that tie President Biden to these figures, including a 2015 dinner with a group of Hunter Biden’s Russian and Kazakh clients.

People apparently were told to avoid directly referring to President Biden. In one email, Tony Bobulinski, then a business partner of Hunter’s, was instructed by Biden associate James Gilliar not to speak of the former veep’s connection to any transactions: “Don’t mention Joe being involved, it’s only when u [sic] are face to face, I know u [sic] know that but they are paranoid.”

Instead, the emails apparently refer to President Biden with code names such as “Celtic” or “the big guy.” In one, “the big guy” is discussed as possibly receiving a 10 percent cut on a deal with a Chinese energy firm; other emails reportedly refer to Hunter Biden paying portions of his father’s expenses and taxes.

Bobulinski has said in multiple interviews that he met twice with Joe Biden to discuss a business deal in China with CEFC China Energy Co. That would seem obvious evidence. In addition, the New York Post reported on a key email that discussed “the proposed percentage distribution of equity in a company created for a joint venture with CEFC China Energy Co.” That was the email of March 13, 2017 that included the reference “10 held by H for the big guy.”

That brings us back to Houdini’s trick of making his 10,000-pound elephant Jennie disappear every night in New York’s Hippodrome. He succeeded night after night because the audience wanted the elephant to disappear even though it never left the stage.

I previously wrote about how the key to the trick was involving the media, so that reporters became invested in the illusion, much like the audience members called up to the stage. Reporters have to insist that there was nothing to see, or else admit to being part of the original deception. The media cannot acknowledge the elephant without the public seeing the media’s own past efforts to conceal it.

The media is now so heavily invested in the trick that they are sticking with the illusion even after “the reveal.” The Associated Press story shows that even pointing at the elephant — heck, even riding the elephant around the stage — will not dislodge these denials. This is no elephant because there cannot be an elephant. Poof!

Thursday, November 17, 2022

Clinton-Linked Dark Money Group Targets Advertisers to Stop Musk From Restoring Free Speech Protections

 Jonathan Turley discusses one of the latest efforts to control speech.

The real issue here is controlling speech as an aid to winning elections.

--------------------------------

In the shift of the left against free speech principles, there is no figure more actively or openly pushing for censorship than Hillary Clinton. Now, reports indicate that Clinton has unleashed her allies in the corporate world to coerce Musk to restore censorship policies or face bankruptcy. The effort of the Clinton-linked “Accountable Tech” reveals the level of panic in Democratic circles that free speech could be restored on one social media platform. The group was open about how losing control over Twitter could result in a loss of control over social media generally. For Clinton, it is an “all-hands-on-deck” call for censorship. She previously called upon foreign governments to crack down on the free speech of Americans on Twitter.

We have been discussing how Clinton and others have called on foreign countries to pass censorship laws to prevent Elon Musk from restoring free speech protections on Twitter. It seems that, after years of using censorship-by-surrogates in social media companies, Democratic leaders have rediscovered good old-fashioned state censorship.

Accountable Tech led an effort to send a letter to top Twitter advertisers to force Musk to accept “non-negotiable” requirements for censorship.

General Motors was one of the first to pull its advertising funds to stop free speech restoration on the site.

Of course, the company had no problem with supporting Twitter when it was running one of the largest censorship systems in history — or with supporting TikTok (which is Chinese-owned and has been denounced for state control and access to data). Twitter has been denounced for years for its bias against conservative and dissenting voices, including presumably many GM customers on the right. None of that was a concern for GM, but the pledge to restore free speech to Twitter warranted a suspension.

The letter is open about the potential cascading effect if free speech is restored on one platform: “While the company is hardly a poster-child for healthy social media, it has taken welcome steps in recent years to mitigate systemic risks, ratcheting up pressure on the likes of Facebook and YouTube to follow suit.”

The letter insists that free speech will only invite “disinformation, hate, and harassment” and that “[u]nder the guise of ‘free speech,’ [Musk’s] vision will silence and endanger marginalized communities, and tear at the fraying fabric of democracy.”

Among other things, the letter demands “algorithmic accountability,” a notable inclusion in light of Democratic politicians demanding enlightened algorithms to protect citizens from their own bad choices or thoughts.

In addition to Accountable Tech, twenty-five other groups signed the letter to demand the restoration of censorship policies, including Media Matters and the Black Lives Matter Global Network Foundation. Accountable Tech has partnered in the past with Hillary Clinton’s Onward Together nonprofit group.

I have no objection to boycotts, which are an important form of free speech. However, this boycott action is directed at restoring censorship and preventing others from being able to post or to read opposing viewpoints.

If consistent with their past records, these companies will likely cave to these demands. While the public has clearly shown that they want more (not less) free speech, these executives are likely to yield to the pressure of Clinton and other powerful figures to coerce Musk into limiting the speech of others on his platform.

These campaigns only add support to Musk’s push for alternative revenue sources, including verification fees. As I previously wrote, we can show that there is a market for free speech by supporting Twitter in trying to reduce the dependence on corporate sponsors. If Musk remains faithful to free speech, many customers are likely to join his platform and support his effort to reduce censorship on social media.

Monday, November 14, 2022

More evidence that freedom in the West is in a decline

 Jonathan Turley gets it right on freedom of speech and thought - or - as things seem to be heading - the lack thereof.

The issue is broader than Ireland - it is a scourge throughout the "civilized" world - and growing.

 Ye shall reap what ye sow.

--------------------------------------

We recently discussed a troubling conviction in Great Britain of a man for his “toxic ideology.” Now Ireland appears ready to replicate that case a thousandfold. The proposed Criminal Justice (Incitement to Violence or Hatred and Hate Offences) Bill 2022 would criminalize the possession of material deemed hateful. It is a full-frontal assault on speech and associational rights. The law would allow for sweeping authoritarian measures in defining opposing viewpoints as hateful. Ireland appears to be picking up the cudgel of speech criminalization from Britain, an abusive power once used against the Irish.

The law is a free speech nightmare. Even before addressing the crime of possession of harmful material, the law would “provide for an offence of condoning, denying or grossly trivialising genocide, war crimes, crimes against humanity and crimes against peace.” A crime of “condoning, denying or grossly trivialising” criminal conduct would make most autocrats blush. The lack of any meaningful definition invites arbitrary enforcement. The law expressly states the intent to combat “forms and expressions of racism and xenophobia by means of criminal law.”

What is so striking about the law is how utterly unapologetic it is in the use of criminal law to curtail not just free speech but free thought. It allows for the prosecution of citizens for “preparing or possessing material likely to incite violence or hatred against persons on account of their protected characteristics.” That could sweep deeply into not just political but literary expression.

The interest of the Irish in assuming such authoritarian measures is chilling given their own history under British rule, including violent crackdowns on nonviolent protests like “Bloody Sunday.” Free speech is now in a free fall in Great Britain and Ireland appears eager to follow suit.

The decline of free speech in the United Kingdom has long been a concern for free speech advocates. Once you start as a government to criminalize speech, you end up on a slippery slope of censorship. What constitutes hate speech or “malicious communications” remains a highly subjective matter and we have seen a steady expansion of prohibited terms and words and gestures. That now includes criminalizing “toxic ideologies.”

Under this pernicious law, a judge can order the search of a home based solely on a police officer’s sworn statement that he or she has “reasonable” grounds to believe illegal material may be present in a person’s home.

Again, the embrace of such laws by the Irish is crushingly ironic. Frank Ryan, who fought against the treaty, spoke for many radicals in declaring “as long as we have fists and boots, there will be no free speech for traitors.” Those anti-Treaty forces rejected the views of free speech that long defined Western nations. Now, Ireland is declaring “no free speech for haters” and assumes the authority to define who are haters and who are not.

The Irish people struggled for generations for equality and freedom. To now pick up the mantle of suppressing viewpoints is to make a mockery of that long struggle.

Friday, November 11, 2022

Climate alarmists wrong again: Glacier National Park glaciers

 Judith Curry takes down Reilly Neill, a Montana politician - along with a few other unmentioned climate alarmists.

Here is the link.

Here are a few excerpts.

---------------------------------

The loss of glaciers from Glacier National Park is one of the most visible manifestations of climate change in the U.S. Signs were posted all around the park, proclaiming that the glaciers would be gone by 2020. In 2017, the Park started taking these signs down. What happened, beyond the obvious fact that the glaciers hadn’t disappeared by 2020?

Not only are Montana’s glaciers an important icon for global warming (e.g., Al Gore’s An Inconvenient Truth), they also seem to be an important political icon for progressive politicians in Montana. Earlier this week, Reilly Neill, a (sort of) politician in Montana, went after me on Twitter:

"Yes, We understand. Anything you say about climate is driven by potential profit for you and your company. Come check out the glaciers in Montana and talk to some real scientists if you ever get over yourself and your greed."

Well, it just so happens that I have some analyses of Montana glaciers and climate in my archives; maybe I can help Reilly (and the “real scientists of Montana”) understand what is going on.
----------
Variability of glaciers in Glacier National Park

The total area of Glacier National Park covered by glaciers shrank 70% from the 1850s to 2015, according to the US Geological Survey. Melting began at the end of the Little Ice Age (circa 1850), when scientists believe 146 glaciers covered the region, as opposed to 26 in 2019.

The first surveys of glaciers in Glacier National Park began in the 1880s, with most of the focus on the two largest glaciers – Grinnell and Sperry. A 2017 publication issued by the U.S. Geological Survey entitled Status of Glaciers in Glacier National Park [link] includes a table of the areal extent of named glaciers in the park since the Little Ice Age (LIA), with markers at the LIA, 1966, 1998, 2005 and 2015. Analysis of these data shows:

- A ~50% loss from the LIA to 1966 (~115 years), averaging a loss of ~4.5% per decade.
- An additional ~12% loss from 1966 to 1998 (32 years), averaging a loss of ~3.7% per decade.
- An additional ~4.75% loss from 1998 to 2015 (17 years), averaging a loss of ~2.8% per decade.
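The decadal rates are simple arithmetic on the USGS areal-extent figures. A quick sketch, with the end of the Little Ice Age taken as ~1850 (an assumption; the source says only "circa 1850") and the loss percentages taken from the table discussed above:

```python
# Approximate areal losses of Glacier National Park glaciers (USGS table).
# (label, period start, period end, % of area lost in the period)
periods = [
    ("LIA-1966", 1850, 1966, 50.0),    # ~50% lost since the Little Ice Age
    ("1966-1998", 1966, 1998, 12.0),   # additional ~12%
    ("1998-2015", 1998, 2015, 4.75),   # additional ~4.75%
]

rates = {}
for label, start, end, pct_lost in periods:
    decades = (end - start) / 10
    rates[label] = pct_lost / decades
    print(f"{label}: {pct_lost}% over {end - start} yrs "
          f"-> ~{rates[label]:.1f}% per decade")
```

Note that the percentages are of the period's starting area, so the per-decade figures compare relative rates of shrinkage, not absolute ice area lost; by this measure the pre-1966 rate clearly exceeds the more recent ones.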

Much of the glacier loss occurred prior to 1966, when fossil-fueled warming was minimal. The percentage rate of glacier loss during this early period substantially exceeded the percentage rate of loss observed in the 21st century. I suspect that much of this melting occurred in the 1930s (see next section).

Looking much further back, Glacier National Park was virtually ice free 11,000 years ago. Glaciers have been present within the boundaries of present-day Glacier National Park since about 6,500 years ago. [link] These glaciers have varied in size, tracking climatic variations, but did not grow to their recent maximum size until the end of the Little Ice Age, around 1850. An 80-year period (~1770-1840) of cool, wet summers and above-average winter snowfall led to a rapid growth of glaciers just prior to the end of the Little Ice Age. So, the recent loss of glacier mass must be understood in light of the fact that the glaciers reached their largest mass of the past 11,000 years during the 19th century. [link]

The USGS hasn’t updated its glacial survey since 2015 (gotta wonder why, with the huge losses they were expecting). While the rate of loss between 1998 and 2015 decreased relative to prior decades, it appears that the ice loss has actually stalled or slightly reversed since 2008. [link] This stall is what prompted Glacier National Park in 2017 to start taking down the signs predicting that the glaciers would be gone by 2020.
----------
Montana’s cold winters

The “greed” part of Reilly Neill’s Twitter rant seems to have something to do with fossil fuels. If there is ever a place you might want to be kept warm by fossil fuels (or nuclear), Montana during winter is it. Montana is one of the coldest states in the U.S. Of particular concern are wintertime “Arctic outbreaks,” which occur multiple times each winter with varying magnitudes and durations, periodically bringing exceptionally cold temperatures to large regions of the continental U.S., even in this era of global warming.

A little-known JC biographical fact is that Arctic cold air outbreaks and the formation of cold-core anticyclones were the topic of my PhD thesis. [link] [link]

An exceptionally cold outbreak occurred in Montana during February and March 2019, with similar outbreaks in 2014 and 2017. In February 2019, average temperature departures from normal in Montana were as much as 27 to 28°F below normal, with Great Falls at the heart of the cold. Temperatures did not rise above 0°F on 11 days and dropped to 0°F or below on 24 nights. While the cold in February was remarkable for its persistence, the subsequent Arctic blast in early March 2019 delivered the coldest temperatures. Almost two dozen official stations in Montana broke monthly records, with an all-time record state low temperature for March of -46°F. [link]

I can’t even imagine what it would be like to be without electric power and household heating under such cold conditions. Apart from freezing and figuring out how to keep warm, water pipes would be frozen; not just a lack of potable water, but massive property damage once the pipes thaw.

Fortunately, Montana has a reliable power system with about 50% renewables (mostly hydro) with most of the rest produced by coal. There is a nontrivial contingent in Montana that is seeking 100% renewable power (hydro, wind, solar).

In addition to exceptional power demand for residential heating during such Arctic outbreaks, power generation from renewables is at a minimum during such periods. Montana’s solar and hydropower capacity are at their lowest during winter. While winter winds are generally strong, Arctic cold air outbreaks are accompanied by large regions of high pressure called cold-core anticyclones. The nature of these circulations is that wind speeds are very low within the high-pressure system, resulting in very low amounts of wind power production.

While Arctic outbreaks generally impact the northern Great Plains states the worst, the spatial extent of these outbreaks can be very large. The cold outbreak during February 2021 that impacted Montana also covered half of the U.S. and extended down to Texas, where massive power outages ensued that resulted in considerable loss of life. The large horizontal scale of these high pressure systems indicates that remote transmission of excess energy from someplace else is not going to be of much help if much of the continent is also suffering from cold temperatures and low winds. The long duration of these events makes battery storage hugely infeasible. The options are nuclear, gas and coal.

Conclusion

Nothing is simple when it comes to understanding the causes of climate change impacts. The key to understanding is to look at the longest data records available, and try to interpret the causes of the historical and paleo variability. Once you understand the natural variability, you aren’t so prone to attributing everything to fossil-fueled warming and making naïve predictions of the future. And once you understand weather variability and extremes, you won’t be so enthusiastic about renewable energy.

I hope that this little exposition helps Reilly Neill and the real scientists of Montana understand the causes of the recent variations in Montana’s glaciers.