Monday, December 26, 2022

The FBI's downfall - the tip of the iceberg

 Here is Jonathan Turley on the FBI's shameful politicized behavior.

Jonathan Turley is the Shapiro Professor of Public Interest Law at George Washington University.

The primary threat to our Republic is the Government, including the Department of Justice and the FBI. A major secondary threat is the attitude of too many of our citizens who value forcing others to behave "acceptably" instead of valuing freedom first.
--------------------------

When the FBI Attacks Critics as “Conspiracy Theorists,” It’s Time to Reform the Bureau.

Below is my column in the Hill on the need for a new “Church Committee” to investigate and reform the Federal Bureau of Investigation (FBI) after years of scandals involving alleged political bias. In response to criticism over its role in Twitter’s censorship system, the FBI lashed out against critics as “conspiracy theorists” spreading disinformation. However, it still refuses to supply new information on other companies, beyond Twitter, that it has paid to engage in censorship.

Here is the column:

“Conspiracy theorists … feeding the American public misinformation” is a familiar attack line for anyone raising free-speech concerns over the FBI’s role in social media censorship. What is different is that this attack came from the country’s largest law enforcement agency, the FBI — and, since the FBI has made combatting “disinformation” a major focus of its work, the labeling of its critics is particularly menacing.

Fifty years ago, the Watergate scandal provoked a series of events that transformed not only the presidency but federal agencies like the FBI. Americans demanded answers about the involvement of the FBI and other federal agencies in domestic politics. Ultimately, Congress not only investigated the FBI but later impanelled the Church Committee to investigate a host of other abuses by intelligence agencies.

A quick review of recent disclosures and controversies shows ample need for a new Church Committee:
The Russian investigations

The FBI previously was at the center of controversies over documented political bias. Without repeating the long history from the Russian influence scandal, FBI officials like Peter Strzok were fired after emails showed open bias against presidential candidate Donald Trump. The FBI ignored warnings that the so-called Steele dossier, largely funded by the Clinton campaign, was likely used by Russian intelligence to spread disinformation. It continued its investigation despite early refutations of key allegations or discrediting of sources.
Biden family business

The FBI has taken on the character of a Praetorian Guard when the Biden family has found itself in scandals.

For example, there was Hunter Biden’s handgun, acquired by apparently lying on federal forms. In 2018, the gun allegedly was tossed into a trash bin in Wilmington, Del., by Hallie Biden, the widow of Hunter’s late brother, with whom Hunter had a relationship at the time. Secret Service agents reportedly appeared at the gun shop for no apparent reason, and Hunter later said the matter would be handled by the FBI. Nothing was done despite the apparent violation of federal law.

Later, the diary of Hunter’s sister, Ashley, went missing. While the alleged theft normally would be handled as a relatively minor local criminal matter, the FBI launched a major investigation that continued for months to pursue those who acquired the diary, which reportedly contains embarrassing entries involving President Biden. Such a massive FBI deployment shocked many of us, but the FBI built a federal case against those who took possession of the diary.
Targeting Republicans and conservatives

Recently the FBI was flagged for targeting two senior House Intelligence Committee staffers in grand jury subpoenas sent to Google. It has been criticized for using the Jan. 6 Capitol riot investigations to target conservative groups and GOP members of Congress, including seizing the phone of one GOP member.

The FBI also has been criticized for targeting pro-life violence while not showing the same vigor toward pro-choice violence.
Hunter’s laptop

While the FBI was eager to continue the Russian investigations with no clear evidence of collusion, it showed the opposite inclination when given Hunter Biden’s infamous laptop. The laptop would seem to be a target-rich environment for criminal investigators, with photos and emails detailing an array of potential crimes involving foreign transactions, guns, drugs and prostitutes. However, reports indicate that FBI officials moved to quash or slow any investigation.

The computer repairman who acquired the laptop, John Paul Mac Isaac, said he struggled to get the FBI to respond and that agents made thinly veiled threats regarding any disclosures of material related to the Biden family; he said one agent told him that “in their experience, nothing ever happens to people that don’t talk about these things.”
The ‘Twitter Files’

The “Twitter Files” released by Twitter’s new owner, Elon Musk, show as many as 80 agents targeting social-media posters for censorship on the site. This included alleged briefings that Twitter officials said were the reason they spiked the New York Post’s Hunter Biden laptop story before the 2020 election.

The FBI sent 150 messages on back channels to just one Twitter official to flag accounts. One Twitter executive expressed unease over the FBI’s pressure, declaring: “They are probing & pushing everywhere they can (including by whispering to congressional staff).”

We also have learned that Twitter hired a number of retired FBI agents, including former FBI general counsel James Baker, who was a critical and controversial figure in past bureau scandals over political bias.
Attacking critics

It is not clear what is more chilling — the menacing role played by the FBI in Twitter’s censorship program, or its mendacious response to the disclosure of that role. The FBI has issued a series of “nothing-to-see-here” statements regarding the Twitter Files.

In its latest statement, the FBI insists it did not command Twitter to take any specific action when flagging accounts to be censored. Of course, it didn’t have to threaten the company — because we now have an effective state media by consent rather than coercion. Moreover, an FBI warning tends to concentrate the minds of most people without the need for a specific threat.

Finally, the files show that the FBI paid Twitter millions as part of this censorship system — a windfall favorably reported to Baker before he was fired from Twitter by Musk.
Criticizing the FBI is now ‘disinformation’

Responding to the disclosures and criticism, an FBI spokesperson declared: “The men and women of the FBI work every day to protect the American public. It is unfortunate that conspiracy theorists and others are feeding the American public misinformation with the sole purpose of attempting to discredit the agency.”

Arguably, “working every day to protect the American public” need not include censoring the public to protect it from errant or misleading ideas.

However, it is the attack on its critics that is most striking. While the FBI denounced critics of an earlier era as communists and “fellow travelers,” it now uses the same attack narrative to label its critics as “conspiracy theorists.”

After Watergate, there was bipartisan support for reforming the FBI and intelligence agencies. Today, that cacophony of voices has been replaced by crickets, as much of the media imposes another effective blackout on coverage of the Twitter Files. This media silence suggests that the FBI found the “sweet spot” on censorship, supporting the views of the political and media establishment.

As for the rest of us, the FBI now declares us to be part of a disinformation danger which it is committed to stamping out — “conspiracy theorists” misleading the public simply by criticizing the bureau.

Clearly, this is the time for a new Church Committee — and time to reform the FBI.

Friday, December 23, 2022

Some truth about guns and gun laws

 Here is a link to testimony of John Lott before the Subcommittee on Crime, Terrorism, and Homeland Security, Committee on the Judiciary of the United States House of Representatives and the Senate.

JL is a leading expert on the impact of gun laws.

This is a good source if you want to know the truth about guns and gun laws.

The Cancel Culture is alive and well – unfortunately

 Here is Jonathan Turley with yet another example of the unacceptable behavior of the Woke and Cancel Culture denizens.

Scientific American was lost years ago. Now, many other supposedly objective journals have joined it.

I have no respect for those who participate in or encourage wokeness or canceling. I am for freedom, including speech, and for professional journals publishing competent papers no matter what the authors think about things outside their profession.

-------------------------------------

Webb of Lies? Astrophysicist Targeted Due to Study Exonerating James Webb of Being Anti-Gay.

The storied career of James Webb as the second administrator of NASA (responsible for the Apollo missions) led to the naming of the space telescope in his honor. Now, however, he is the subject of a cancel campaign to remove his name after professors accused him of being anti-gay. That cancel campaign also now includes a black astrophysicist, Hakeem Oluseyi, who published a study exonerating Webb. He is reportedly being banned from leading journals after finding no evidence to support the claim. Regardless of the ultimate conclusions that one can reach on the Webb controversy, there should be universal concern over the growing intolerance for opposing views in academic institutions and journals.

The New York Times reported that Oluseyi, of the National Society of Black Physicists, was asked to look into the allegations made by physicist Chanda Prescod-Weinstein of the University of New Hampshire. She joined three other scientists in writing a Scientific American article demanding the renaming of the telescope because Webb “acquiesced to homophobic government policies during the 1950s and 1960s.”

The focus of the objection was the “Lavender Scare,” a period in which homosexual government officials faced intense investigations. President Dwight D. Eisenhower had declared in 1953 that homosexual government officials were a national security threat. That crackdown is the subject of a documentary film.

Many of us are familiar with this terrible period and how homosexual scientists and officials were often unable to serve their country due to the prejudice against their sexual orientation. The powerful movie “The Imitation Game” tells the story of World War II code-breaker and early computer pioneer Alan Turing, who was hounded over his homosexuality. It is a disgraceful chapter in our history.

While media like CNN have reported that Webb “isn’t mentioned in most government records or sources” about these investigations, historian David Johnson noted in 2004 that Webb had met with President Truman on the issue of gay officials when he served in the State Department.

Prescod-Weinstein and her co-authors put forward a petition to change the name. In addition to references to the “Lavender Scare,” they noted one particular case:

Notably, in the case Norton v. Macy, former NASA employee Clifford L. Norton sued for “review of his discharge for ‘immoral conduct’ and for possessing personality traits which render him ‘unsuitable for further Government employment.’”

Even though the Norton v. Macy case rose to prominence in 1969, the actual incident that led to Norton’s dismissal took place in 1963 while James Webb was NASA administrator. Norton was arrested by DC police after having been observed speaking with another man, and was brought in for questioning on suspicion of homosexuality. While at the police station, NASA Security Chief Fugler was summoned to the police station, where he participated in Norton’s interrogation. Upon Norton’s release by DC police, NASA Security Chief Fugler then took Norton to NASA Headquarters, where he continued to interrogate him until the following morning. NASA subsequently fired Norton for suspicion of homosexuality, based on activities he was suspected of conducting during his personal time. We do not know of any consequences for NASA Security Chief Fugler, who conducted an extrajudicial interrogation on federal property.

It was government policy at that time that you could not hold a clearance or work in sensitive areas if you were a homosexual. Some 1700 people signed the petition to remove Webb’s name without any further investigation.

NASA ultimately did conduct an investigation. In October, the agency reported that:

“NASA’s History Office conducted an exhaustive search through currently accessible archives on James Webb and his career,” the agency told CNN in October. “They also talked to experts who previously researched this topic extensively. NASA found no evidence at this point that warrants changing the name of the James Webb Space Telescope.”

That did not sit well with Prescod-Weinstein or others. They ultimately focused their ire on Oluseyi, who was asked to study that history. He said that he was initially “sympathetic” to these claims but that, after researching the actual records, he wrote on Medium that “I can say conclusively that there is zero evidence that Webb is guilty of the allegations against him.”

I can understand that some may contest those findings. However, what followed was a cancel campaign that shifted to those who opposed it, particularly Oluseyi. Prescod-Weinstein insisted that NASA assigned Oluseyi to “impugn” her concerns and to provide a “shield” for Webb.

Others did not care what the investigation found. The New York Times reported that Britain’s Royal Astronomical Society declared that “no astronomer who submits a paper to its journals should type the words ‘James Webb.’” The American Astronomical Society and the publications Nature, New Scientist and Scientific American have also reportedly declared the case closed against Webb.

Oluseyi was soon also tagged and said that he has been unable to have letters published in the journals pointing out the allegedly flawed evidence cited by Prescod-Weinstein and others. So, not only are journals declaring the matter effectively closed, but they will not allow readers to see opposing views.

Even former colleagues publicly denounced Oluseyi. George Mason’s Peter Plavchan, who said that he welcomed Oluseyi to that school as a visiting professor, tweeted a note to Prescod-Weinstein that “I do believe [Oluseyi] owes you and LGBTQ+ astronomers an apology.”

Yet, other academics have raised concerns over this intolerant and anti-intellectual response.

David Johnson teaches history at the University of South Florida and is the author of “The Lavender Scare: The Cold War Persecution of Gays and Lesbians in the Federal Government.” He objected that Prescod-Weinstein and others “ignore the historical context.” He further noted that “Mr. Webb did not lead efforts to oust gays; there was not yet a gay rights movement in 1949; and to apply the term homophobe is to use a word out of time and reflects nothing Mr. Webb is known to have written or said.” He added: “No one in government could stand up at that time and say ‘This is wrong.’ And that includes gay people.”

The campaign to rename the telescope is continuing. Prescod-Weinstein wants it to be named for Harriet Tubman. She insisted in a CNN interview that

“There are people who have argued that Harriet Tubman wasn’t a ‘real scientist.’ But to do science is to apply rational knowledge of the physical world. Harriet Tubman represents the best of humanity, and we should be sending the best of what we have to offer into the sky.”

Prescod-Weinstein, who publicly identifies as “all #BLACKandSTEM/all Jewish. queer/agender/woman,” has herself previously been the subject of controversy, with critics noting that she argued that antisemitism by black people is due to the influence of white gentiles: “White Jews adopted whiteness as a social praxis and harmed Black people in the process […] Some Black people have problematically blamed Jewishness for it.”

In the end, being a free speech advocate means that you support all of these figures in this debate. I would be equally opposed to efforts to seek to fire or cancel Dr. Prescod-Weinstein.

This is the type of debate that was once welcomed on campuses and in academic journals. Reasonable minds can disagree on the underlying facts and their meaning. What concerns me is that, despite a division of opinion among academics, there is yet another cancel campaign that will not allow opposing views to be voiced.

It is, unfortunately, all too familiar today. Cancel campaigns have become a type of academic credential. It is not enough to disagree with a fellow academic. You must now seek to silence him or fire him. If you believe in a cause, anyone voicing an opposing position is now viewed as intolerable.

In my brief review of these articles, there does not appear to be much in terms of direct evidence against Webb. I am certainly open to allegations based on new evidence, but it is less likely that we will see such discussions after the treatment of Dr. Oluseyi and others who have raised objections.

Wednesday, December 21, 2022

The FBI’s Role in Suppressing the Hunter Laptop Story

 Here is Jonathan Turley on the FBI's politicization in favor of the Democrats.

There is no question that there was election fraud in 2020. The only issue is what kinds. No doubt both the Democrats and Republicans have engaged in election fraud of one kind or another. However, the Democrats have proved far more "effective" at it than the Republicans - hence are far more dangerous.

At this point, it is clear that justice in the US is one-sided and politicized in favor of the Democrats.

The current widespread lack of ethics in our society reflects the active cooperation of many in the Media, Big Tech, and Education - the latter having become, to an alarming extent, an indoctrination establishment for Progressive and other unworthy ideas.

Our freedom and justice already have been substantially damaged by unethical partisans. I don't see it getting any better. Too many voters are uninformed, intolerant, and would like nothing better than to force others to "behave and believe properly".

Here is JT's comment.

---------------------------------------------

“Probing & Pushing Everywhere”: New Twitter Releases Confirm the FBI’s Role in Suppressing the Hunter Laptop Story

Below is my column on Fox.com on the most recent release of Twitter files detailing the FBI’s direct involvement in the targeting and censoring of citizens. The most notable aspect is the effort by the FBI to censor references to the Hunter Biden scandal before the 2020 election. Here is the column:

“They are probing & pushing everywhere.” That line sums up an increasingly alarming element in the seventh installment of the so-called “Twitter files.” “They” were agents of the Federal Bureau of Investigation, and they were pushing for the censorship of citizens in an array of stories.

Writer Michael Shellenberger added critical details on how the FBI was directly engaged in censorship at the company. However, this batch of documents contains a particularly menacing element of the FBI/Twitter censorship alliance. The documents show what Shellenberger described as a concentrated effort “to discredit leaked information about Hunter Biden before and after it was published.”

Twitter has admitted that it made a mistake in blocking the Hunter laptop story. After roughly two years, even media that pushed the false “Russian disinformation” claims have acknowledged that the laptop is authentic.

Yet, those same networks and newspapers are now imposing a new de facto blackout on covering the details of the Twitter files on the systemic blacklisting, shadow banning, and censorship carried out in conjunction with the government.

The references to the new Hunter Biden evidence were also notable for the dates of these backchannel communications. On October 13, weeks before the election, FBI Special Agent Elvis Chan sent 10 documents to Twitter’s then-Head of “Trust & Safety” Yoel Roth related to the Biden story. It was the next day that the New York Post ran its story on the laptop and its incriminating content. The United States government played a key role in trying to bury a story damaging to the Democrats before the election.

The Twitter files now substantiate the earlier allegations of “censorship by surrogate” or proxy. While the First Amendment applies to the government, it can also apply to agents of the government. Twitter itself now admits that it acted as an agent in these efforts.

The current media blackout on the Twitter files story only deepens these concerns. For years, media figures have denied that Twitter was engaging in censorship, blacklisting, shadow banning and other techniques targeting conservatives. The release of the files has shattered those denials. There is simply no further room for censorship apologists.

In a city that relies on “plausible deniability,” there is no longer a plausible space left in the wake of the releases. All that remains is silence — the simple refusal to acknowledge the government-corporate alliance in this massive censorship system.

To cover the story is to admit that the media also followed the same course as Twitter in hampering any discussion of this influence peddling scandal. Indeed, while the media are now forced to admit that the laptop is authentic, they cannot bring themselves to address the authentic emails contained in that laptop. Those emails detail millions of dollars in influence peddling by the Biden family. They also detail the knowledge and involvement of Joe Biden despite his repeated denial of any knowledge of the deals.

Those files also raise potential criminal acts that some of us have been writing about for two years. The emails are potentially incriminating on crimes ranging from tax violations to gun violations. At the very least, it is a target-rich environment for investigators or prosecutors.

Yet, earlier disclosures showed that key FBI figures tamped down any investigation into the laptop. The latest documents now show the FBI also actively pressured the media to kill the story. That raises deeply troubling questions of FBI politicization. After Watergate, Congress moved aggressively to pursue the use of the bureau by a president for political purposes. There is little call from the media for such an investigation today when the bureau is accused of working for Democratic rather than Republican interests.

The record of such bias extends beyond the Twitter files. In prior years, FBI agents were found to have shown overt political bias in the handling of FBI investigations. The agency continued to rely on sources like the Steele dossier despite warnings that the Clinton-funded report was likely Russian disinformation. Yet, when it came to Hunter Biden, the FBI reportedly was not interested in aggressively pursuing an investigation while calling on social media companies to censor any discussion of the scandal before the election. It continued to do so despite Twitter executives “repeatedly” indicating there was “very little” Russian activity on the platform.

In January 2020, Twitter’s then director of policy and philanthropy, Carlos Monje Jr., expressed unease over the pressure coming from the FBI and said: “They are probing & pushing everywhere they can (including by whispering to congressional staff).”

The question is why the FBI would be “probing & pushing everywhere” despite the fact that the Russian investigation had exposed prior bias related to the 2016 election. That was no deterrent to killing a story viewed as damaging to the Biden campaign.

In the end, the government-corporate alliance failed. Despite the refusal of many in the media to cover the Twitter files, nearly two-thirds of voters believe Twitter shadow-banned users and engaged in political censorship during the 2020 election. Seventy percent of voters want new national laws protecting users from corporate censorship.

It is clear that any such reforms should include a full investigation of the FBI and its involvement in censorship efforts. As many as 80 agents reportedly were committed to this effort. It is clear now that, if we are to end censorship by surrogate, the House will have to “probe and push everywhere” in the FBI for answers.

Sunday, December 11, 2022

Judith Curry explains what is wrong with IPCC and climate alarmism

 Here is a link to a video of an interview with Judith Curry - a top climate scientist.

JC makes clear that the widespread push to portray climate change as existential is politics, not science.

Friday, December 02, 2022

“Colorful Fluid Dynamics” and overconfidence in global climate models

 Here is David Young at Judith Curry's blog.

David Young received a PhD in mathematics in 1979 from the University of Colorado-Boulder. After completing graduate school, Dr. Young joined the Boeing Company and has worked on a wide variety of projects involving computational physics, computer programming, and numerical analysis. His work has been focused on the application areas of aerodynamics, aeroelastics, computational fluid dynamics, airframe design, flutter, acoustics, and electromagnetics. To address these applications, he has done original theoretical work in high performance computing, linear potential flow and boundary integral equations, nonlinear potential flow, discretizations for the Navier-Stokes equations, partial differential equations and the finite element method, preconditioning methods for large linear systems, Krylov subspace methods for very large nonlinear systems, design and optimization methods, and iterative methods for highly nonlinear systems.

The moral of his story is (as I see it - and only slightly exaggerated):

  • The Global Climate Models (GCMs) that the alarmists, media, politicians, etc. rely on for their uninformed comments are not accurate, cannot be relied upon, and do not justify climate alarmism.
  • The climate alarmists do not know what they are talking about.
  • Many climate scientists - including some of the GCM creators - do not know what they are talking about, being unaware of the mathematical issues DY discusses.
  • Some climate scientists do not know that the GCMs are unreliable because they are unaware of the mathematical issues DY discusses.
  • Some climate scientists - the ones who do understand the mathematical issues that DY discusses - are dishonest about the GCMs' reliability because it is in their personal interest to be so.
  • Anyone who uses the term "climate denier" in an attack mode is either ignorant about the issues or dishonest.
Here is the article.
----------------------------------------
This post lays out in fairly complete detail some basic facts about Computational Fluid Dynamics (CFD) modeling. This technology is the core of all general circulation models of the atmosphere and oceans, and hence global climate models (GCMs). I discuss some common misconceptions about these models, which lead to overconfidence in these simulations. This situation is related to the replication crisis in science generally, whereby much of the literature is affected by selection and positive results bias.

A full-length version of this article can be found at [ lawsofphysics1 ], including voluminous references. See also this publication [ onera ].

1 Background

Numerical simulation over the last 60 years has come to play a larger and larger role in engineering design and scientific investigations. The level of detail and physical modeling varies greatly, as do the accuracy requirements. For aerodynamic simulations, accurate drag increments between configurations have high value. In climate simulations, a widely used target variable is temperature anomaly. Both drag increments and temperature anomalies are particularly difficult to compute accurately. The reason is simple: both output quantities are several orders of magnitude smaller than the overall absolute levels of momentum for drag or energy for temperature anomalies. This means that without tremendous effort, the output quantity is smaller than the numerical truncation error. Great care can sometimes provide accurate results, but careful numerical control over all aspects of complex simulations is required.
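
To see why this matters, here is a minimal illustration in Python (a toy calculation, not a flow solver): recovering a small anomaly riding on a large absolute level in single versus double precision.

```python
import numpy as np

# Toy illustration: the quantity of interest is a small anomaly riding on a
# large absolute level. In single precision, a single rounding of the absolute
# level already corrupts the anomaly by several percent; a long simulation
# compounds vast numbers of such operations.
base = 288.0          # absolute level, e.g. a mean temperature in kelvin
anomaly = 1.0e-4      # the small signal we want to recover

diff64 = np.float64(base + anomaly) - np.float64(base)
diff32 = np.float32(base + anomaly) - np.float32(base)

print("float64 recovers:", diff64)  # ~1.0e-4, as expected
print("float32 recovers:", diff32)  # ~9.2e-5 on IEEE hardware, roughly 8% off
```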

Contrast this with some fields of science where only general understanding is sought. In this case qualitatively interesting results can be easier to provide. This is known in the parlance of the field as “Colorful Fluid Dynamics.” While this is somewhat pejorative, these simulations do have their place. It cannot be stressed too strongly, however, that even the broad “patterns” can be quite wrong. Only after extensive validation can such simulations be trusted qualitatively, and even then only for the class of problems used in the validation. Such a validation process for one aeronautical CFD code consumed perhaps 50-100 man years of effort in a setting where high quality data was generally available. What is all too common among non-specialists is to conflate the two usage regimes (colorful versus validated) or to assume that realistic-looking results imply quantitatively meaningful results.

The first point is that some fields of numerical simulation are very well founded on rigorous mathematical theory. Two that come to mind are electromagnetic scattering and linear structural dynamics. Electromagnetic scattering is governed by Maxwell’s equations which are linear. The theory is well understood, and very good numerical simulations are available. Generally, it is possible to develop accurate methods that provide high quality quantitative results. Structural modeling in the linear elasticity range is also governed by well posed elliptic partial differential equations.

2 Computational Fluid Dynamics

The Earth system with its atmosphere and oceans is much more complex than most engineering simulations and thus the models are far more complex. However, the heart of any General Circulation Model (GCM) is a “dynamic core” that embodies the Navier-Stokes equations. Primarily, the added complexity is manifested in many subgrid models of high complexity. However, at some fundamental level a GCM is computational fluid dynamics. In fact GCM’s were among the first efforts to solve the Navier-Stokes equations and many initial problems were solved by the pioneers in the field, such as the removal of sound waves. There is a positive feature of this history in that the methods and codes tend to be optimized quite well within the universe of methods and computers currently used. The downside is that there can be a very high cost to building a new code or inserting a new method into an existing code. In any such effort, even real improvements will at first appear to be inferior to the existing technology. This is a huge impediment to progress and the penetration of more modern methods into the codes.

The best technical argument I have heard in defense of GCMs is that Rossby waves are vastly easier to model than aeronautical flows where the pressure gradients and forcing can be a lot higher. There is some truth in this argument. The large-scale vortex evolution in the atmosphere on shorter time scales is relatively unaffected by turbulence and viscous effects, even though at finer scales the problem is ill-posed. However, there are many other at least equally important components of the earth system. An important one is tropical convection, a classical ill-posed problem because of the large-scale turbulent interfaces and shear layers. While usually neglected in aeronautical calculations, free air turbulence is in many cases very large in the atmosphere. However, it is typically neglected outside the boundary layer in GCMs. And of course there are clouds, convection and precipitation, which have a very significant effect on overall energy balance. One must also bear in mind that aeronautical vehicles are designed to be stable and to minimize the effects of ill-posedness, in that pathological nonlinear behaviors are avoided. In this sense aeronautical flows may actually be easier to model than the atmosphere. In any case aeronautical simulations are greatly simplified by a number of assumptions, for example that the onset flow is steady and essentially free of atmospheric turbulence. Aeronautical flows can often be assumed to be essentially isentropic outside the boundary layer.

As will be argued below, the CFD literature is affected by positive results and selection bias. In the last 20 years, there has been increasing consciousness of and documentation of the strong influence that biased work can have on the scientific literature. It is perhaps best documented in the medical literature where the scientific communities are very large and diverse. These biases must be acknowledged by the community before they can be addressed. Of course, there are strong structural problems in modern science that make this a difficult thing to achieve.

Fluid dynamics is a much more difficult problem than electromagnetic scattering or linear structures. First, many of the problems are ill-posed or nearly so. As is perhaps to be expected with nonlinear systems, there are also often multiple solutions. Even in steady RANS (Reynolds Averaged Navier-Stokes) simulations there can be sensitivity to initial conditions or numerical details or gridding. The AIAA Drag Prediction Workshop Series has shown the high levels of variability in CFD simulations even in attached mildly transonic and subsonic flows. These problems are far more common than reported in the literature.

Another problem associated with nonlinearity in the equations is turbulence, basically defined as small scale fluctuations that have random statistical properties. There is still some debate about whether turbulence is completely represented by accurate solutions to the Navier-Stokes equations, even though most experts believe that it is. But the most critical difficulty is the fact that in most real life applications the Reynolds number is high or very high. The Reynolds number represents roughly the ratio of inertial forces to viscous forces. One might think that if the viscous forcing were 4 to 7 orders of magnitude smaller than the inertial forcing (as it is for example in many aircraft and atmospheric simulations), it could be neglected. Nothing could be further from the truth. The inclusion of these viscous forces often results in an O(1) change in even total forces. Certainly, the effect on smaller quantities like drag is large and critical to successful simulations in most situations. Thus, most CFD simulations are inherently numerically difficult, and simplifications and approximations are required. There is a vast literature on these subjects going back to the introduction of the digital computer; John von Neumann made some of the first forays into understanding the behavior of discrete approximations.
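
For readers who want the quantitative version of “roughly the ratio of inertial forces to viscous forces,” the standard definition is:

```latex
% Reynolds number for a flow with characteristic speed U, length scale L,
% density rho and dynamic viscosity mu (nu = mu/rho is the kinematic viscosity):
\[
  \mathrm{Re} = \frac{\rho U L}{\mu} = \frac{U L}{\nu}.
\]
% Rough example: a transport-aircraft wing with U ~ 250 m/s, L ~ 5 m and
% nu ~ 1.5e-5 m^2/s for air gives Re on the order of 10^8, consistent with
% the "4 to 7 orders of magnitude" gap quoted above.
```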

The discrete problem sizes required for modeling fluid flows by resolving all the relevant scales grow as Reynolds number to the power 9/4 in the general case, assuming second order numerical discretizations. Computational effort grows at least linearly with discrete problem size multiplied by the number of time steps. Time steps must also decrease as the spatial grid is refined because of the stability requirements of the Courant-Friedrichs-Lewy condition as well as to control time discretization errors. The number of time steps grows as Reynolds number to the power 3/4. Thus overall computational effort grows with Reynolds number to the power 3. Thus, for almost all problems of practical interest, it is computationally impossible (and will be for the foreseeable future) to resolve all the important scales of the flow, and so one must resort to subgrid models of fluctuations not resolved by the grid. For many idealized engineering problems, turbulence is the primary effect that must be so modeled. In GCMs there are many more, such as clouds. References are given in the full paper for some other views that may not fully agree with the one presented here in order to give people a feel for the range of opinion in the field.
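
A back-of-envelope sketch of those scalings (constants omitted, so only the ratios between rows are meaningful):

```python
# DNS cost scalings quoted above: grid points ~ Re^(9/4), time steps ~ Re^(3/4),
# total work ~ Re^3. Constants are omitted; only relative growth is meaningful.
for Re in (1e4, 1e6, 1e8):
    points = Re ** (9 / 4)
    steps = Re ** (3 / 4)
    work = points * steps  # ~ Re^3
    print(f"Re={Re:8.0e}  points ~ {points:9.2e}  steps ~ {steps:8.2e}  work ~ {work:9.2e}")
```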

For modeling the atmosphere, the difficulties are immense. The Reynolds numbers are high and the turbulence levels are large but highly variable. Many of the supposedly small effects must be neglected based on scientific judgment. There are also large energy flows and evaporation and precipitation and clouds, which are all ignored in virtually all aerodynamic simulations for example. Ocean models require different methods as they are essentially incompressible. This in some sense simplifies the underlying Navier-Stokes equations but adds mathematical difficulties.

2.1 The Role of Numerical Errors in CFD

Generally, the results of many steady state aeronautical CFD simulations are reproducible and reliable for thin boundary- and shear-layer-dominated flows with little flow separation and subsonic flow. There are now a few codes that are capable of demonstrating grid convergence for the simpler geometries or lower Reynolds numbers. However, many of these simulations make many simplifying assumptions, and uncertainty is much larger for separated or transonic flows.

The contrast with climate models speaks for itself. Typical grid spacings in climate models often exceed 100 km, and their vertical grid resolution is almost certainly inadequate. Further, many of the models use spectral methods that are not fully stable. Various forms of filtering are used to remove undesirable oscillations. Further, the many subgrid models are solved sequentially, adding another source of numerical errors and making tuning problematic.
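
To put that grid spacing in perspective, a quick sketch (Earth's surface area is roughly 5.1e14 m^2; the cost factors are rough, assuming the time step shrinks with the cell size):

```python
EARTH_SURFACE_M2 = 5.1e14  # approximate

for dx_km in (100, 25, 1):
    cells = EARTH_SURFACE_M2 / (dx_km * 1e3) ** 2   # horizontal cells only
    # Halving dx quadruples the cell count and (via a CFL-like constraint)
    # roughly halves the usable time step, so work grows like dx^-3 per
    # vertical level.
    work_factor = (100 / dx_km) ** 3
    print(f"dx={dx_km:4d} km  cells ~ {cells:9.2e}  relative work ~ {work_factor:9.1e}")
```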

2.2 The Role of Turbulence and Chaos in Fluid Mechanics

In this section I describe some well verified science from fluid mechanics that governs all Navier-Stokes simulations and that must inform any non-trivial discussion of weather or climate models. One of the problems in climate science is a lack of fundamental understanding of these basic conclusions of fluid mechanics or (as perhaps the case may be for some) a reluctance to discuss the consequences of this science.

Turbulence models have advanced tremendously in the last 50 years and climate models do not use the latest of these models, so far as I can tell. Further, for large-scale vortical 3D flow, turbulence models are quite inadequate. Nonetheless, proper modeling of turbulence by solving auxiliary differential equations is critical to achieving reasonable accuracy.

Just to give one fundamental problem that is a showstopper at the moment: how to control numerical error in any time accurate eddy resolving simulation. Classical methods fail. How can one tune such a model? You can tune it for a given grid and initial condition, but that tuning might fail on a finer grid or with different initial conditions. This problem is just now beginning to be explored and is of critical importance for predicting climate or any other chaotic flow.

When truncation errors are significant (as they are in most practical fluid dynamics simulations particularly climate simulations), there is a constant danger of “overtuning” subgrid models, discretization parameters or the hundreds of other parameters. The problem here is that tuning a simulation for a few particular cases too accurately is really just getting large errors to cancel for these cases. Thus skill will actually be worse for cases outside the tuning set. In climate models the truncation errors are particularly large and computation costs too high to permit systematic study of the size of the various errors. Thus tuning is problematic.

2.3 Time Accurate Calculations – A Panacea?

All turbulent flows are time dependent and there is no true steady state. However, using Reynolds averaging, one can separate the flow field into a steady component and a hopefully small component consisting of the unsteady fluctuations. The unsteady component can then be modeled in various ways. The larger the truly unsteady component is, the more challenging the modeling problem becomes.
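
Written out, this is the standard Reynolds decomposition (incompressible form shown for simplicity); the unclosed Reynolds-stress term in the averaged momentum equation is what the turbulence model must supply:

```latex
% Mean plus fluctuation, with zero-mean fluctuations:
\[
  u_i(\mathbf{x},t) = \bar{u}_i(\mathbf{x}) + u_i'(\mathbf{x},t),
  \qquad \overline{u_i'} = 0 .
\]
% Averaging the incompressible momentum equation leaves the Reynolds stress
% term, which cannot be computed from the mean flow alone:
\[
  \bar{u}_j\,\frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\!\left(
      \nu\,\frac{\partial \bar{u}_i}{\partial x_j}
      - \overline{u_i' u_j'}
    \right).
\]
```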

One might be tempted to always treat the problem as a time dependent problem. This has several challenges, however. At least in principle (but not always in practice) one should be able to use conventional numerical consistency checks in the steady state case. For example, one can check grid convergence, calculate sensitivities for parameters cheaply using linearizations, and use the residual as a measure of reliability. For the Navier-Stokes equations, there is no rigorous proof that the infinite grid limit exists or is unique. In fact, there is strong evidence for multiple solutions, some corresponding to states seen in testing, and others not. All these conveniences are either inapplicable to time accurate simulations or are much more difficult to assess.
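
As a sketch of what one such consistency check looks like, here is the standard observed-order-of-accuracy estimate from three systematically refined grids. The drag values below are made up for illustration, not taken from any real solver:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three grids with a
    constant refinement ratio r (the standard Richardson-type estimate)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Hypothetical drag coefficients on coarse, medium and fine grids:
f1, f2, f3 = 0.02840, 0.02790, 0.02778
p = observed_order(f1, f2, f3, r=2.0)

# For a nominally second-order scheme we hope for p close to 2; a much lower
# (or wildly oscillating) value signals the grids are not yet in the
# asymptotic range and the "converged" answer cannot be trusted.
print(f"observed order of accuracy ≈ {p:.2f}")
```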

Time accurate simulations are also challenging because the numerical errors are in some sense cumulative, i.e., an error at a given time step will be propagated to all subsequent time steps. Generally, some kind of stability of the underlying continuous problem is required to achieve convergence. Likewise a stable numerical scheme is helpful.

For any chaotic time accurate simulation, classical methods of numerical error control fail. Because the initial value problem is ill-posed, the adjoint diverges. This is a truly daunting problem. We know numerical errors are cumulative and can grow nonlinearly, but our usual methods are completely inapplicable.

For chaotic systems, the main argument that I have heard for time accurate simulations being meaningful is “at least there is an attractor.” The thinking is that if the attractor is sufficiently attractive, then errors in the solution will die off or at least remain bounded and not materially affect the time average solution or even the “climate” of the solution. The solution at any given time may be wildly inaccurate in detail, as Lorenz discovered, but the climate will (according to this argument) be correct. At least this is an argument that can be developed and eventually quantified and proven or disproven. Paul Williams has a nice example of the large effect of the time step on the climate of the Lorenz system. Evidence is emerging of a similar effect due to spatial grid resolution for time accurate Large Eddy Simulations and a disturbing lack of grid convergence. Further, the attractor may be only slightly attractive, and there will be bifurcation points and saddle points as well. And the attractor can be of very high dimension, meaning that tracing out all its parts could be computationally a monumental if not impossible task. So far, the bounds on attractor dimension are very large. My suggestion would be to develop and fund a large long term research effort in this area with the best minds in the field of nonlinear theory. Theoretical understanding may not be adequate at the present time to address it computationally. There is some interesting work by Wang at MIT on shadowing that may eventually become computationally feasible and could address some of the stability issues for the long-term climate of the attractor. For the special case of periodic or nearly periodic flows, another approach that is more computationally tractable is windowing. This problem of time accurate simulations of chaotic systems seems to me to be a very important unsolved question in fundamental science and mathematics and one with tremendous potential impact across many fields.
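
In the spirit of the Williams example, here is a minimal sketch of such an experiment on the Lorenz-63 system, assuming classical RK4 and the standard parameters; the step sizes and averaging window are arbitrary choices, and discretization and sampling effects are mixed together here:

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def mean_z(dt, t_end=500.0, t_spinup=50.0):
    """Integrate with classical RK4 and return the long-time mean of z,
    a crude stand-in for the 'climate' of the system."""
    s = np.array([1.0, 1.0, 1.0])
    n_total, n_spin = int(t_end / dt), int(t_spinup / dt)
    acc = 0.0
    for i in range(n_total):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        if i >= n_spin:
            acc += s[2]
    return acc / (n_total - n_spin)

# Same equations, same scheme, same initial state; only the step size changes.
# Individual trajectories diverge completely, and the time-mean statistics
# can shift as well.
for dt in (0.002, 0.01, 0.02):
    print(f"dt={dt:5.3f}  long-time mean of z ≈ {mean_z(dt):.2f}")
```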

While climate modelers Palmer and Stevens’ 2019 short perspective note (see full paper for the reference) is an excellent contribution by two unusually honest scientists, there is in my opinion reason for skepticism about their proposal to make climate models into eddy resolving simulations. Their assessment of climate models is in my view mostly correct and agrees with the thrust of this post, but there are a host of theoretical issues to be resolved before casting our lot with largely unexplored simulation methods that face serious theoretical challenges. Dramatic increases in resolution are obviously sorely needed in climate models and dramatic improvements may be possible in subgrid models once resolution is improved. Just as an example, modern PDE based models may make a significant difference. I don’t think anyone knows the outcomes of these various steps toward improvement.

3 The “Laws of Physics”

The “laws of physics” are usually thought of as conservation laws, the most important being conservation of mass, momentum, and energy. The conservation laws with appropriate source terms for fluids are the Navier-Stokes equations. These equations correctly represent the local conservation laws and offer the possibility of numerical simulations. This is expanded on in the full paper.
3.1 Initial Value Problem or Boundary Value Problem?

One often hears that “the climate of the attractor is a boundary value problem” and therefore it is predictable. This is nothing but an assertion with little to back it up. And of course, even assuming that the attractor is regular enough to be predictable, there is the separate question of whether it is computable with finite computing time. It is similar to the folk doctrine that turbulence models convert an ill-posed time dependent problem into a well posed steady state one. This doctrine has been proven to be wrong – as the prevalence of multiple solutions discussed above shows. However, those who are engaged in selling CFD have found it attractive despite its unscientific and effectively unverifiable nature.

A simple analogy for the climate system might be a wing, as Nick Stokes has suggested. As pointed out above, the drag for a well-designed wing is in some ways a good analogy for the temperature anomaly of the climate system. The climate may respond linearly to changes in forcings over a narrow range. But that tells us little. To be useful, one must know the rate of response and the value (the value of temperature is important, for example, for ice sheet response). These are strongly dependent on details of the dynamics of the climate system through nonlinear feedbacks.

Many use this analogy to try to transfer the credibility [not fully deserved] from CFD simulations of simple systems to climate models or other complex separated flow simulations. This is not a correct implication. In any case, even simple aeronautical simulations can have very high uncertainty when used to simulate challenging flows.

3.2 Turbulence and SubGrid Models

Subgrid turbulence models have advanced tremendously over the last 50 years. The subgrid models must modify the Navier-Stokes equations if they are to have the needed effect. Turbulence models typically modify the true fluid viscosity by dramatically increasing it in certain parts of the flow, e.g., a boundary layer. The problem here is that these changes are not really based on the “laws of physics”, and certainly not on the conservation laws. The models are typically based on assumed relationships that are suggested by limited sets of test data or by simply fitting available test data. They tend to be very highly nonlinear and typically make an O(1) difference in the total forces. As one might guess, this area is one where controversy is rife. Most would characterize this as a very challenging problem, in fact one that will probably never be completely solved, so further research and controversy is a good thing.
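
As a concrete instance of such an assumed relationship, consider the classic Smagorinsky eddy viscosity, the textbook example of a fitted subgrid model. This is a minimal sketch for illustration, not the scheme of any particular GCM, and the input numbers are invented:

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Minimal 2-D Smagorinsky subgrid viscosity: nu_t = (cs * delta)^2 * |S|.
    cs is an empirical constant (roughly 0.1-0.2 in practice), fitted to data
    rather than derived from conservation laws."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    strain_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * strain_mag

# Invented inputs: a 100 km grid cell with a modest resolved shear. The model
# returns an eddy viscosity many orders of magnitude above the molecular
# viscosity of air (~1.5e-5 m^2/s), i.e. an O(1) change to the equations.
nu_t = smagorinsky_nu_t(dudx=1e-5, dudy=2e-5, dvdx=0.0, dvdy=-1e-5,
                        delta=100e3)
print(f"nu_t ≈ {nu_t:.0f} m^2/s  vs  molecular nu ≈ 1.5e-5 m^2/s")
```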

Negative results about subgrid models have begun to appear. One recent paper shows that cloud microphysics models have parameters that are not well constrained by data. Using plausible values, ECS (equilibrium climate sensitivity) can be “engineered” over a significant range. Another interesting result shows that model results can depend strongly on the order chosen to solve the numerous subgrid models in a given cell. In fact, the subgrid models should be solved simultaneously so that any tuning is more independent of numerical details of the methods used. This is a fundamental principle of using such models and is the only way to ensure that tuning is meaningful. Indeed, many metrics for skill are poorly replicated by current generation climate models, particularly regional precipitation changes, cloud fraction as a function of latitude, Total Lower Troposphere temperature changes compared to radiosondes and satellite derived values, tropical convection aggregation and Sea Surface Temperature changes, just to name a few. This lack of skill for SST changes seems to be a reason why GCM model-derived ECS is inconsistent with observationally constrained energy balance methods.

Given the large grid spacings used in climate models, this is not surprising. Truncation errors are almost certainly larger than the changes in energy flows that are being modeled. In this situation, skill is to be expected only on those metrics involved in tuning (either conscious or subconscious) or metrics closely associated with them. In layman’s terms, those metrics used in tuning come into alignment with the data only because of cancellation of errors.

One can make a plausible argument for why models do a reasonable job of replicating the global average surface temperature anomaly. The models are mostly tuned to match top of atmosphere radiation balance. If their ocean heat uptake is also consistent with reality (and it seems to be pretty close) and if the models conserve energy, one would expect the average temperature to be roughly right even if it is not explicitly used for tuning. However, this apparent skill does not mean that other outputs will also be skillful.
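
That argument can be caricatured with the standard zero-dimensional energy balance model; a minimal sketch, with illustrative parameter values that are not tuned to any dataset:

```python
# Zero-dimensional energy balance caricature: C dT/dt = F - lam * T, with T the
# global-mean temperature anomaly, F the radiative forcing, lam the feedback
# parameter and C an effective ocean heat capacity. If the TOA imbalance
# (F - lam*T) and the heat uptake (C dT/dt) are both roughly right and energy
# is conserved, the trajectory of the mean anomaly is largely pinned down.
C = 8.0     # W yr m^-2 K^-1, effective heat capacity (illustrative)
lam = 1.2   # W m^-2 K^-1, feedback parameter (illustrative)
F = 3.7     # W m^-2, canonical rough forcing for doubled CO2

dt, years, T = 0.1, 200, 0.0
for _ in range(int(years / dt)):
    T += dt * (F - lam * T) / C
print(f"equilibrium F/lam = {F/lam:.2f} K;  T after {years} yr = {T:.2f} K")
```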

This problem of inadequate tuning and unconscious bias plagues all application areas of CFD. A typical situation involves a decades long campaign of attempts to apply a customer’s favorite code to an application problem (or small class of problems). Over the course of this campaign many, many combinations of gridding and other parameters are “tried” until an acceptable result is achieved. The more challenging issue of establishing the limitations of this acceptable “accuracy” for different types of flows is often neglected because of lack of resources. Thus, the cancellation of large numerical errors is never quantified and remains hidden, waiting to emerge when a more challenging problem is attempted.

3.3 Overconfidence and Bias

As time passes, the seriousness of the bias issue in science continues to be better documented and understood. One recent example quotes one researcher as saying “Loose scientific methods are leading to a massive false positive bias in the literature.” Another study states:

“Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science.”

In less scholarly settings, these results are typically met with various forms of rationalization. Often we are told that “the fundamentals are secure” or “my field is different” or “this affects only the medical fields.” To those in the field, however, it is obvious that strong positive bias affects the Computational Fluid Dynamics literature for the reasons described above and that practitioners are often overconfident.

This overconfidence in the codes and methods suits the perceived self-interest of those applying the codes (and for a while suited the interests of the code developers and researchers), as it provides funding to continue development and application of the models to ever more challenging problems. Recently, this confluence of interests has been altered by an unforeseen consequence, namely laymen who determine funding have come to believe that CFD is a solved problem and hence have dramatically reduced the funding stream for fundamental development of new methods and also for new theoretical research. This conclusion is an easy one for outsiders to reach given the CFD literature, where positive results predominate even though we know the models are just wrong both locally and globally for large classes of flows, for example strongly separated flows. Unfortunately, this problem of bias is not limited to CFD, but I believe is common in many other fields that use CFD modeling as well.

Another rationalization used to justify confidence in models is the appeal to the “laws of physics” discussed above. These appeals, however, omit a very important source of uncertainty and seem to provide a patina of certainty covering a far more complex reality.

Another corollary of the doctrine of the “laws of physics” is the idea that “more physics” must be better. Thus, simple models that ignore some feedbacks or terms in the equations are often maligned. This doctrine also suits the interest of some in the community, i.e., those working on more complex and costly simulations. It is also a favored tactic of Colorful Fluid Dynamics to portray the ultimately accurate simulation as just around the corner if we get all the “physics” included and use a sufficiently massive parallel computer. This view is not an obvious one when critically examined. It is widely held however among both people who run and use CFD results and those who fund CFD.

3.4 Further Research

So what is the future of such simulations and GCMs? As attempts are made to use them in areas where public health and safety are at stake, estimating uncertainty will become increasingly important. Items deserving attention in my opinion are discussed in some detail in the full paper, posted here on Climate Etc. I would argue that the most important elements needing attention, both in CFD and in climate and weather modeling, are new theoretical work and insights and the development of more accurate data. The latter work is not glamorous and the former can entail career risks. These are hard problems, and in many cases a particular line of enquiry will not yield anything really new.

The dangers to be combatted include:
  • It is critical to realize that the literature is biased and that replication failures are often not published.
  • We really need to escape from the elliptic boundary value problem (well-posed) mental model that is held by so many with a passing familiarity with the issues. A variant of this mental model one encounters in the climate world is the doctrine of “converting an initial value problem to a boundary value problem.” This just confuses the issue, which is really about the attractor and its properties. The methods developed for well-posed elliptic problems have been pursued about as far as they will take us. This mental model can result in dramatic overconfidence in models in CFD.
  • A corollary of the “boundary value problem” misnomer is the “if I run the model right, the answer will be right” mental model. This is patently false and even dangerous; however, it gratifies egos and aids in marketing.

4 Conclusion

I have tried to lay out in summary form some of the issues with high Reynolds number fluid simulations and to highlight the problem of overconfidence as well as some avenues to try to fundamentally advance our understanding. Laymen need to be aware of the typical tactics of the dark arts of “Colorful Fluid Dynamics” and “science communication.” It is critical to realize that much of the literature is affected by selection and positive results bias. This is something that most will admit privately, but is almost never publicly discussed.

How does this bias come about? An all too common scenario is for a researcher to have developed a new code or a new feature of an old code or to be trying to apply an existing code or method to a particular test case of interest to a customer. The first step is to find some data that is publicly available or obtain customer supplied data. Much of the older and well documented experiments involve flows that are not tremendously challenging. One then runs the code or model (adjusting grid strategies, discretization and solver methodologies, and turbulence model parameters or methods) until the results match the data reasonably well. Then the work often stops (in many cases because of lack of funding or lack of incentives to draw more scientifically balanced conclusions) and is published. The often large number of runs with different parameters that provided less convincing results are explained as due to “bad gridding,” “inadequate parameter tuning,” “my inexperience in running the code,” etc. The supply of witches to be burned is seemingly endless. These rationalizations are usually quite honest and sincerely believed, but biased. They are based on a cultural bias that if the model is “run right” then the results will be right, if not quantitatively, then at least qualitatively. As we saw above, those who develop the models themselves know this to be incorrect as do those responsible for using the simulations where public safety is at stake. As a last resort one can always point to any deficiencies in the data or for the more brazen, simply claim the data is wrong since it disagrees with the simulation. The far more interesting and valuable questions about robustness and uncertainty or even structural instability in the results are often neglected. One logical conclusion to be drawn from the perspective by Palmer and Stevens calling for eddy resolving climate models is that the world of GCM’s is little better. However, this paper is a hopeful sign of a desire to improve and is to be strongly commended.

This may seem a cynical view, but it is unfortunately based on practices in the pressure filled research environment that are all too common. There is tremendous pressure to produce “good” results to keep the funding stream alive, as those in the field well know. Just as reported in medically related fields, replication efforts for CFD have often been unsuccessful, but almost always go unpublished because of the lack of incentives to do so. It is sad to have to add that in some cases, senior people in the field can suppress negative results. Some way needs to be found to provide incentives for honest and objective replication efforts and publishing those findings regardless of the opinions of the authors of the method. Priorities somehow need to be realigned toward more scientifically valuable information about robustness and stability of results and addressing uncertainty.

However, I see some promising signs of progress in science. In medicine, recent work shows that reforms can have dramatic effects in improving the quality of the literature. There is a growing recognition of the replication crisis generally and the need to take action to prevent science’s reputation with the public from being irreparably damaged. As simulations move into the arena affecting public safety and health, there will be hopefully increasing scrutiny, healthy skepticism, and more honesty. Palmer and Stevens’ recent paper is an important (and difficult in the politically charged climate field) step forward on a long and difficult road to improved science.

In my opinion those who retard progress in CFD are often involved in “science communication” and “Colorful Fluid Dynamics.” They sometimes view their job as justifying political outcomes by whitewashing high levels of uncertainty and bias or making the story good click bait by exaggerating. Worse still, many act as apologists for “science” or senior researchers and tend to minimize any problems. Nothing could be more effective in producing the exact opposite of the desired outcome, viz., a cynical and disillusioned public already tired of the seemingly endless scary stories about dire consequences often based on nothing more than the pseudo-science of “science communication” of politically motivated narratives. This effect has already played out in medicine where the public and many physicians are already quite skeptical of health advice based on retrospective studies, biased reporting, or slick advertising claiming vague but huge benefits for products or procedures. Unfortunately, bad medical science continues to affect the health of millions and wastes untold billions of dollars. The mechanisms for quantifying the state of the science on any topic, and particularly estimating the often high uncertainties, are very weak. As always in human affairs, complete honesty and directness is the best long term strategy. Particularly for science, which tends to hold itself up as having high authority, the danger is in my view worth addressing urgently. This response is demanded not just by concerns about public perceptions, but also by ethical considerations and simple honesty as well as a regard for the lives and well-being of the consumers of our work who deserve the best information available.