Sunday, January 16, 2022

Stay the f**k away from me - a soliloquy from a college professor

 Here is a link to the professor's video.  It is hilarious - although too many people will take it too seriously and get upset.  To those people, I say - get a life.

Be sure to watch the video before reading Jonathan Turley's take below.

-----------------------------------------------------------

“Stay the f**k away from me”: Professor Placed On Leave After Calling Students “Vectors of Disease” and Promising Random Grading.

Professor Barry Mehler at Ferris State University in Michigan clearly does not want to return to in-person classes. Wearing a space helmet, Mehler went full Howard Beale in a video in which he called his students “vectors of disease” and told them to “stay the f**k away from me.” While many have declared Mehler completely insane, his video may be as clever as a covid-phobic fox. Let me explain.

Mehler teaches the history of science and is the founder and director of the Institute for the Study of Academic Racism.

In the video below, Mehler lashes out at the requirement that he return to in-person classes despite the risk to his health as an older person. He is profane, insulting, and taunting.

He is also being clearly sarcastic and waggish at points. For example, he tells the students that he randomly assigns grades at the start of the course because he does not care who they are or what they do in this class: “None of you c**ksuckers are good enough to earn an A in my class. So I randomly assign grades before the first day of class.” However, he later explains how they can earn an A without coming to class if they do the other work.

He uses the pre-written speech (you can see the script when he shares the screen) to attack religion, Western Civilization, America’s legacy, and both the students and the university.

Mehler may set a record for the purely profane in his diatribe:

“I may have f***ed up my life flatter than hammered s***, but I stand before you today beholding to no human c*ksucker,” Mehler says. “I’m working in a paid f***ing union job and no limber-d*ck c*ksucker of an administrator is going to tell me how to teach my classes. Because I’m a f****** tenured professor. So, if you want to go complain to your dean, f*** you, go ahead. I’m retiring at the end of this year and I couldn’t give a flying f*** any longer.”

At one point he declares “[w]hen I look out at a classroom filled with fifty students, I see fifty selfish kids who don’t give a sh*t whether grandpa lives or dies. And if you won’t expose your grandpa to a possible infection with COVID, then stay the f*** away from me.”

It is Howard Beale with a doctorate.

So is this just madness? Perhaps, but I don’t think so. Three clues can be derived from the video. First, there is the fact that this was a pre-written “soliloquy.” It sounds like a spontaneous diatribe, but it is a calculated and intentionally worded address. It could be more Machiavellian than Bealean in that sense. While Mehler does call his students “vectors of disease,” he then shows how he took that language loosely from a movie as a teachable moment on plagiarism.

Second, Mehler reveals that he does not want to teach in person. To that end, he encourages students not to come to class and assures them that their grades will not be impacted. Indeed, he strongly suggests that he will look with disfavor on those who appear in this class.

Third, Mehler says that this is his last year before retiring and he has tenure (and union) protections. He encourages the students to complain to the university. Indeed, he almost begs them to do so. They did and the university expressed the predictable shock as it placed him on leave.

So what does that all mean? It could mean that Mehler was trying to get himself put on leave. (Hopefully, he can still return the $300 space helmet.) Before the university can fire him, it must investigate him and follow grievance procedures. He will claim that this was an attempt at being edgy and humorous. That process could easily take the rest of the year, and Mehler would simply retire. In the meantime, he and his space helmet can stay at home.

Or he may be crazy.

Wednesday, January 12, 2022

A statistical problem that makes estimates of climate sensitivity to greenhouse gases problematic (unsettled)

 Here is "An Introductory-Level Explanation of my Critique of AT99" by Ross McKitrick.

The message is that the techniques for estimating climate sensitivity to greenhouse gases relied upon by the climate alarmist crowd - including many "climate scientists" - are flawed, hence the estimates cannot be taken at face value.

--------------------------------

1 INTRODUCTION
My article in Climate Dynamics shows that the AT99 method is theoretically flawed and gives unreliable results. A careful statement of the implications must note an elementary principle of logic. Remember that, according to logic, we can say “Suppose A implies B; then if A is true therefore B is true.” Example: all dogs have fur; a beagle is a dog; therefore a beagle has fur. But we cannot say “Suppose A implies B; A is not true therefore B is not true.” Example: all dogs have fur; a cat is not a dog, therefore a cat does not have fur. But we can say “Suppose A implies B; A is not true therefore we do not know if B is true.” Example: all dogs have fur; a dolphin is not a dog, therefore we do not know if a dolphin has fur.

In this example “A” is the statistical argument in AT99 which they invoked to prove “B”—the claim that their model yields unbiased and valid results. I showed that “A”, their statistical argument, is not true. So we have no basis to say that their model yields unbiased and valid results. In my article I go further and explain why there are reasons to believe the results will typically be invalid. I also list the conditions needed to prove their claims of validity. I don’t think it can be done, for reasons stated in the paper, but I leave open the possibility. Absent such proof, applications of their method over the past 20 years leave us uninformed about the influence of GHG’s on the climate. Here I will try to explain the main elements of the statistical argument.

2 REGRESSION
Most people are familiar with the idea of drawing a line of best fit through a scatter of data. This is called linear regression. Consider a sample of data showing, for example, wife’s age plotted against the husband’s age.

Missing chart: a scatter diagram with dots running from lower left to upper right - a clear positive correlation.

Clearly the two are correlated: older men have older wives and vice versa. You can easily picture drawing a straight line of best fit.

The formula for a straight line is 𝑌 = 𝑎 + 𝑏𝑋. Here, Y and X are the names of the variables. In the above example Y stands for wife’s age and X stands for husband’s age. a and b are the coefficients to be estimated. b is the slope coefficient. When you draw the line of best fit you are selecting numerical values for a and b. We may be interested in knowing whether b is positive, which implies that an increase in X is associated with an increase in Y. In the above example it clearly is: any reasonable line through the sample would slope upwards. But in other cases it is not so obvious. For example:

Missing chart: a scatter diagram with no obvious upward or downward pattern in the dots - the true correlation is unclear.

Here a line of best fit would be nearly horizontal, but might slope up. For the purpose of picturing why statistical theory becomes important for interpreting regression analysis it is better to have in mind the above graph rather than the earlier one. We rarely have data sets where the relationship is as obvious as it is in the husband-wife example. We are more often trying to get subtle patterns out of much noisier data.

It can be particularly difficult to tell if slope lines are positive if we are working in multiple dimensions: for instance if we are fitting a line 𝑌 = 𝑎 + 𝑏𝑋 + 𝑐𝑊 + 𝑑𝑍 through a data set that also has variables W and Z and their coefficients c and d to contend with. Regardless of the model we need some way of testing if the true value of b is definitely positive or not. That requires a bit more theory.
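
To make this concrete, here is a minimal sketch in Python of fitting both the simple line Y = a + bX and the multi-variable version by least squares. The data are synthetic and the numbers (a true intercept of 2.0, a true slope of 0.95) are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample: husband's age X and wife's age Y (illustrative numbers only)
X = rng.uniform(25, 75, size=200)
Y = 2.0 + 0.95 * X + rng.normal(0, 4, size=200)   # true a = 2.0, true b = 0.95

# Fit Y = a + bX by least squares: design matrix with a column of ones for the intercept
design = np.column_stack([np.ones_like(X), X])
(a_hat, b_hat), *_ = np.linalg.lstsq(design, Y, rcond=None)
print(f"a_hat = {a_hat:.2f}, b_hat = {b_hat:.2f}")

# The multi-variable case Y = a + bX + cW + dZ works the same way
W = rng.normal(size=200)
Z = rng.normal(size=200)
design2 = np.column_stack([np.ones_like(X), X, W, Z])
coefs, *_ = np.linalg.lstsq(design2, Y, rcond=None)
print("a, b, c, d estimates:", np.round(coefs, 2))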

Note that regression models can establish correlation, but correlation is not causation. Older men do not cause their wives to be older; it is just that people who marry tend to be of the same age group. If we found deaths by drowning to be correlated with ice cream consumption, it would not prove that eating ice cream causes drowning. It is more likely that both occur in warm weather, so the onset of summer causes both events to rise at the same time. Regression models can help support interpretations of causality if there are other grounds for making such a connection, but it must be done very cautiously and only after rigorously testing whether the model has omitted important explanatory variables.

3 SAMPLING AND VARIANCE
The first example above is a plot of a sample of data. It is clearly not the entire collection of husbands and wives in the world. A sample is a subset of a population. When we do statistical analysis we have to take account of the fact that we are working with a sample rather than the entire population (in principle, the larger the sample, the more representative it is of the entire population). The line of best fit through the sample can only ever yield an estimate of the true value of b. In conventional notation we denote an estimate of b with a ‘hat’, writing it 𝑏̂. Because it is an estimate, we can only really talk about a range of possible values. Regression yields a distribution of possible estimates, some more likely than others. If you fit a line through data using a simple program like Excel, it might only report the central slope estimate 𝑏̂, but what the underlying theory yields is a distribution of possible values.

Most people are familiar with the idea of a ‘bell curve’ which summarizes data, like the distribution of grades in a class, where many values are clustered around the mean and the number of observations diminishes as you go further away from the mean. The width of a distribution is summarized by a number called the variance. If the variance is low the distribution is narrow and if it is high the distribution is wide:

Missing chart showing narrow and wide bell curves.

Regression analysis yields both the estimate 𝑏̂ and an estimate of its variance 𝑣(𝑏̂). A closely related concept is the standard error of 𝑏̂, which is the square root of 𝑣(𝑏̂) and can be denoted with a Greek sigma: 𝜎̂. Statistical theory tells us that, as long as the regression model satisfies a certain set of conditions, there is a 95% probability that the true (population) value of b is inside an interval bounded approximately by 2𝜎̂ above and below 𝑏̂. This is called the 95% Confidence Interval.

Given a sample of data on (in this case) X and Y, we can use regression methods to fit a line 𝑌 = 𝑎̂ + 𝑏̂𝑋 and if we are confident 𝑏̂ is above zero it implies that an increase in X leads to an increase in Y. “Confident” here means that 𝑏̂ is more than 2𝜎̂ greater than zero. If it isn’t we say that the coefficient is positive but not statistically significant.
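
As a rough illustration of that logic (a sketch only; the data are simulated and the true slope of 0.3 is arbitrary), the code below computes 𝑏̂ and its standard error directly and checks whether the approximate 95% interval, 𝑏̂ plus or minus two standard errors, excludes zero.

import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)          # true slope b = 0.3

X = np.column_stack([np.ones(n), x])      # design matrix with an intercept
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - 2)              # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)         # estimated covariance of the coefficients
b_hat, se_b = beta_hat[1], np.sqrt(cov[1, 1])

low, high = b_hat - 2 * se_b, b_hat + 2 * se_b
print(f"b_hat = {b_hat:.3f}, approximate 95% CI = [{low:.3f}, {high:.3f}]")
print("statistically significantly positive?", low > 0)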

4 BIAS, EFFICIENCY AND CONSISTENCY
The value of 𝑏̂ is obtained using a formula that takes in the sample data and pops out a number. There are many formulas that can be used. The most popular one is called Ordinary Least Squares or OLS. It is derived by supposing that the straight line allows us to predict the value of Y that corresponds with each value of X, but there will be an error in each such prediction, and we should choose the values of 𝑎̂ and 𝑏̂ that minimize the sum of the squared errors. OLS also yields an estimate of the variances of each coefficient.

Expected value is a concept in statistics that refers to a probability-weighted average of a random variable. The expected value of a random variable 𝑔 is denoted 𝐸(𝑔). OLS yields a distribution for 𝑏̂, which means it has an expected value. Statistical theory can be used to show that, as long as the regression model satisfies a certain set of conditions, 𝐸(𝑏̂) = 𝑏. In other words, the expected value is the true value. In this case we say the estimator is unbiased. It is also the case that the variance estimate is unbiased (again as long as the regression model satisfies a certain set of conditions).
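
What "unbiased" means can be seen with a small simulation (a sketch with an invented model that satisfies the usual conditions): draw many samples, run OLS on each, and the average of the 𝑏̂ estimates lands on the true b.

import numpy as np

rng = np.random.default_rng(2)
true_b, n, reps = 0.5, 50, 5000
b_hats = []
for _ in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + true_b * x + rng.normal(size=n)   # well-behaved model: independent errors
    X = np.column_stack([np.ones(n), x])
    b_hats.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

print("average of b_hat over many samples:", round(float(np.mean(b_hats)), 3))   # close to 0.5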

Since there are many possible estimation formulas besides OLS, we need to think about why we would prefer OLS to the others. One reason is that, among all the options that yield unbiased estimates, OLS yields the smallest variance. So it makes the best use of the available data and gives us the smallest 95% Confidence Interval. We call this efficiency.

Some formulas give us estimated slope coefficients or variances that are biased when the sample size is small, but as the sample size gets larger the bias disappears and the variance goes to zero, so the distribution collapses onto the true value. This is called consistency. An inconsistent estimator has the undesirable property that as we get more and more data we have no assurance that our coefficient estimates get closer to the truth, even if they look like they are getting more precise because the variance is shrinking but does not go to zero.
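
Consistency can be pictured the same way. In the toy simulation below (invented data again), the spread of the OLS slope estimates shrinks toward zero as the sample size grows, so the distribution of 𝑏̂ collapses onto the true value.

import numpy as np

rng = np.random.default_rng(3)
true_b = 0.5

def slope_spread(n, reps=2000):
    """Standard deviation of the OLS slope estimate across many samples of size n."""
    b_hats = []
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.0 + true_b * x + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])
        b_hats.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    return float(np.std(b_hats))

for n in (20, 200, 2000):
    print(n, round(slope_spread(n), 4))   # spread falls roughly like 1 / sqrt(n)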

5 THE GAUSS-MARKOV CONDITIONS AND SPECIFICATION TESTING
I have several times referred to “a certain set of conditions” that a regression model needs to satisfy in order for OLS to yield unbiased, efficient and consistent estimates. These conditions are listed in any introductory econometrics textbook and they are called the Gauss-Markov (GM) conditions. Much of the field of econometrics (which is a branch of statistics focused on using regression analysis to build economic models) is focused on testing for failures of the GM conditions and proposing remedies when failures are detected.

Some failures of the GM conditions imply that 𝑏̂ will still be unbiased, but its variance estimate is biased. So we might get a decent estimate of the slope coefficient but our judgment of whether it is significant or not will be unreliable. Other failures of the GM conditions imply that both 𝑏̂ and 𝜎̂ are biased. In this case the analysis may be spurious and totally meaningless.

As an example of a bad research design, suppose we have data from hundreds of US cities over many years showing both the annual number of crimes in the city and the number of police officers on the streets, and we regress the annual number of crimes on the annual number of police officers to test if crime goes down when more police are deployed. There are several problems that would likely lead to multiple GM conditions failing. First, the sample consists of small and large cities together, so the range and dispersion of the data over the sample will vary, which can cause biased variance estimates. Second, there will be lag effects where a change in policing might lead to a change in crime only after a certain amount of time has passed, which can bias the coefficient and variance estimates. Third, while crime may depend on policing, policing levels may also depend on the amount of crime, so both variables are determined by each other: one is not clearly determined outside the model. This can severely bias the coefficients and lead to spurious conclusions (such as that more policing leads to higher crime levels). Finally, both crime and policing depend on factors not included in the model, and unless those outside factors are uncorrelated with the level of policing the coefficient and variance estimates will be biased.
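
The third problem in that example (crime and policing determining each other) is the easiest to demonstrate with a simulation. In the hypothetical model below, more police genuinely reduce crime, yet a naive regression of crime on police produces a positive slope, because high-crime cities also deploy more police. The equations and numbers are invented solely for illustration.

import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Hypothetical structural model (invented for illustration):
#   crime  = a0 - b0 * police + u   (police reduce crime; the true causal slope is -2.0)
#   police = c0 + d0 * crime  + v   (but high-crime cities hire more police)
a0, b0, c0, d0 = 100.0, 2.0, 10.0, 0.5
u = rng.normal(0, 20, n)
v = rng.normal(0, 5, n)

# Solve the two equations simultaneously to get the data we would actually observe
crime = (a0 - b0 * c0 - b0 * v + u) / (1 + b0 * d0)
police = c0 + d0 * crime + v

# Naive OLS regression of crime on police
X = np.column_stack([np.ones(n), police])
slope = np.linalg.lstsq(X, crime, rcond=None)[0][1]
print("OLS slope of crime on police:", round(float(slope), 2))   # positive despite a true effect of -2.0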

It is therefore critical to test for failures of the GM conditions. There is a huge literature in econometrics on this topic, which is called specification testing. Students who learn regression analysis learn specification testing all the way along. If a regression model is used for economics research, the results would never be taken at face value without at least some elementary specification tests being reported.

There is a class of data transformations that can be used to remedy violations of some GM conditions, and when they are applied we then say we are using Generalized Least Squares or GLS. Having applied a GLS transformation doesn’t mean we can assume the GM conditions automatically hold; they still have to be tested. In some cases a GLS transformation is still not enough and other modifications to the model are needed to achieve unbiasedness and consistency.
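
As one concrete example of this workflow (a sketch using simulated data and one common diagnostic), the code below generates data whose error variance grows with x, which violates a GM condition, detects the problem with a Breusch-Pagan test from the statsmodels package, and then re-estimates with weighted (GLS-style) least squares.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(1, 10, n)
# Error variance grows with x: a violation of the constant-variance GM condition
y = 1.0 + 0.5 * x + rng.normal(0, 0.3 * x, n)

X = sm.add_constant(x)
ols_res = sm.OLS(y, X).fit()

# Specification test: Breusch-Pagan asks whether the squared residuals depend on x
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols_res.resid, X)
print("Breusch-Pagan p-value:", round(lm_pvalue, 4))   # a small p-value flags heteroskedasticity

# GLS-style remedy: reweight observations by the inverse of the (here known) error variance
wls_res = sm.WLS(y, X, weights=1.0 / (0.3 * x) ** 2).fit()
print("OLS slope:", round(ols_res.params[1], 3), " WLS slope:", round(wls_res.params[1], 3))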

6 THE AT99 METHOD
Various authors prior to AT99 had proposed comparing observed climate measures to analogues simulated in climate models with and without GHG’s (which are called “response patterns”) to try to determine if including the effect of GHG’s significantly helps explain the observations, which would then support making an attribution of cause. They refer to their method as “fingerprinting” or “optimal fingerprinting.” Those authors had also argued that the analysis would need to be aided by rescaling the data according to local climatic variability: put more weight on areas where the climate is inherently more stable and less weight on areas where it is “noisier”. To do that required having an estimate of something called the “climate noise covariance matrix” or 𝐢𝑁, which measures the variability of the climate in each location and, for each pair of locations, how their climate conditions correlate with each other. Rather than using observed data to compute 𝐢𝑁, climatologists have long preferred to use climate models. While there were reasons for this choice, it created many problems (which I discuss in my paper). Once 𝐢𝑁 is obtained from a climate model, to compute the required regression weights one needs to do a bit of linear algebra: first compute the inverse of 𝐢𝑁 and then compute the matrix root of the inverse. This would yield a weighting matrix P that would help “extract” information more efficiently from the data set.

One problem the scientists ran into, however, is that climate models don’t have enough resolution to identify all the elements of the 𝐢𝑁 matrix independently. In mathematical terms we say it is “rank deficient”, and an implication is that the inverse of 𝐢𝑁 does not exist. So the scientists chose to use an approximation called a “pseudo-inverse” to compute the needed weights. This created further problems.
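
The linear-algebra step can be pictured with a toy example (entirely hypothetical; real climate noise covariance matrices are far larger): build a small rank-deficient covariance matrix standing in for 𝐢𝑁, take its Moore-Penrose pseudo-inverse in place of the ordinary inverse, and form the weighting matrix P as a matrix square root of that pseudo-inverse.

import numpy as np

rng = np.random.default_rng(6)

# Toy "climate noise covariance": built from fewer samples than dimensions,
# so it is rank deficient and has no ordinary inverse
dim, n_samples = 6, 3
noise = rng.normal(size=(n_samples, dim))
C_N = noise.T @ noise / n_samples
print("rank of C_N:", np.linalg.matrix_rank(C_N), "out of", dim)

# Moore-Penrose pseudo-inverse stands in for the non-existent inverse
C_inv = np.linalg.pinv(C_N)

# Matrix square root of the pseudo-inverse via an eigendecomposition;
# directions the toy model cannot resolve simply get weight zero
vals, vecs = np.linalg.eigh(C_inv)
vals = np.clip(vals, 0, None)             # guard against tiny negative round-off
P = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
print("P @ P recovers the pseudo-inverse:", np.allclose(P @ P, C_inv))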

7 THE AT99 ERROR
AT99 noted that applying a weighting scheme makes the fingerprinting model like a GLS regression. And, they argued, a GLS model satisfies the GM conditions. Therefore the results of this method will be unbiased and efficient. That slightly oversimplifies their argument, but not by much. And the main error is obvious. You can’t know if a model satisfies the GM conditions unless you test for specific violations. AT99 didn’t even state the GM conditions correctly, much less propose any tests for violations.

In fact they derailed the whole idea of specification testing by arguing that one only needs to test that the climate model noise covariance estimates are “reliable” (their term—which they did not define), and they proposed a test statistic which they called the “Residual Consistency” or RC test for that purpose. They didn’t offer any proof that the RC test does what they claimed it does. For example it has nothing to do with showing that the residuals are consistent estimates of the unknown error terms. In fact they didn’t even state precisely what it tests; they only said that if the formula they propose pops out a small number, the fingerprinting regression is valid. In my paper I explained that there can easily be cases where the RC test would yield a small number even in models that are known to be misspecified and unreliable.

And that, with only one slight modification, has been the method used by the climate science profession for 20 years. A large body of literature is based on this flawed methodology. No one noticed the errors in the AT99 discussion of the GM conditions, no one minded the absence of any derivation of the RC test, and none of the hundreds of applications of the AT99 method were subject to conventional specification testing. So we have no basis for accepting any claims that the results of the optimal fingerprinting literature are unbiased or consistent. In fact, as I argued in my paper, the AT99 method as set out in their paper automatically fails at least one GM condition and likely more. So the results have to be assumed to be unreliable.

The slight modification came in 2003 when Myles Allen and a different coauthor, Peter Stott, proposed shifting from GLS to another estimator called Total Least Squares or TLS. It still involves using an estimate of 𝐢𝑁 to rescale the data, but the slope coefficients are selected using a different formula. Their rationale for TLS was that the climate model-generated variables in the fingerprinting regression are themselves pretty ‘noisy’ and this can cause GLS to yield coefficient estimates that are biased downwards. This is true, but econometricians deal with this problem using a technique called Instrumental Variables or IV. We don’t use TLS (in fact almost no one outside of climatology uses it) because, among other things, if the regression model is misspecified, TLS over-corrects and imparts an upward bias to the results. It is also extremely inefficient compared to OLS. IV models can be shown to be consistent and unbiased. TLS models can’t, unless the researcher makes some restrictive assumptions about the variances in the data set which themselves can’t be tested; in other words, unless the modeler “assumes the problem away.”
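
The OLS-versus-TLS contrast under noisy regressors can be sketched in a few lines (simulated toy data, not the AT99 setup). When the explanatory variable is measured with error, the OLS slope is biased toward zero; a TLS fit, computed here from the smallest right singular vector of the augmented data matrix, avoids that particular attenuation under the equal-noise assumption built into this example.

import numpy as np

rng = np.random.default_rng(7)
n, true_b = 2000, 1.0

x_true = rng.normal(size=n)
y = true_b * x_true + rng.normal(0, 0.5, n)
x_obs = x_true + rng.normal(0, 0.5, n)       # the regressor is observed with noise

# OLS of y on the noisy regressor: attenuated (biased toward zero)
b_ols = (x_obs @ y) / (x_obs @ x_obs)

# TLS (no intercept): right singular vector for the smallest singular value of [x_obs, y]
_, _, vt = np.linalg.svd(np.column_stack([x_obs, y]), full_matrices=False)
v = vt[-1]                                   # the direction of least variation
b_tls = -v[0] / v[1]

print("true b:", true_b, " OLS:", round(float(b_ols), 3), " TLS:", round(float(b_tls), 3))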

8 IMPLICATIONS AND NEXT STEPS
The AT99 method fails the GM conditions. As a result, its usage (including the TLS variant) yields results which might by chance be right, but in general will be biased and inconsistent and therefore cannot be assumed to be reliable. Nothing in the method itself (including use of the RC test) allows scientists to claim more than that.

The AT99 framework has another important limitation which renders it unsuitable for testing the hypothesis that greenhouse gases cause changes in the climate. The method depends on the assumption that the model which generates the 𝐢𝑁 matrix and the response patterns is a true representation of the climate system. Such data cannot be the basis of a test that contradicts the assumed structure of the climate model. The reason has to do with how hypotheses are tested. Going back to the earlier example of estimating 𝑏̂ and its distribution, statistical theory allows us to construct a test score (which I’ll call t) using the data and the output of the regression analysis which will have a known distribution if the true value of b is zero. If the computed value of t lies way out in the tails of such a distribution then it is likely not consistent with the hypothesis that 𝑏 = 0. In other words, hypothesis testing says “If the true value of b is zero, then the statistic t will be close to the middle of its distribution. If it is not close to the middle, b is probably not zero.”

For this to work requires us to be able to derive the distribution of the test statistic under the hypothesis that the true value of b is zero. In the fingerprinting regression framework suppose b represents the measure of the influence of GHG’s on the climate. The optimal fingerprinting method obliges us to use data generated by climate models to estimate both b and its variance. But climate models are built under the assumption that GHG’s have a large positive effect on the climate, or 𝑏 > 0. So we can’t use that data to estimate the distribution of a statistic under the assumption that 𝑏 = 0. Such a test will be spurious and unreliable.
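
The underlying testing logic can be sketched as follows (a toy simulation unrelated to any climate data): the null distribution of the t-statistic is generated, by construction, under b = 0, and a statistic computed from a sample in which b is actually positive then lands far out in that distribution's tail. The point is simply that the reference distribution has to come from the b = 0 world.

import numpy as np

rng = np.random.default_rng(8)
n = 100

def t_stat(x, y):
    """t-statistic for the slope in a no-intercept regression of y on x."""
    b_hat = (x @ y) / (x @ x)
    resid = y - b_hat * x
    se = np.sqrt(resid @ resid / (len(x) - 1) / (x @ x))
    return b_hat / se

# Null distribution: simulate the t-statistic under the hypothesis b = 0
null_t = [t_stat(rng.normal(size=n), rng.normal(size=n)) for _ in range(5000)]
crit = np.quantile(null_t, 0.975)
print("approximate two-sided 5% critical value:", round(float(crit), 2))

# One sample where the true b is positive: its t-statistic lands far beyond the critical value
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)
print("observed t-statistic:", round(float(t_stat(x, y)), 2))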

Tuesday, January 11, 2022

Spreading Omicron May Be Safer

Here is a Wall Street Journal Opinion article by Vivek Ramaswamy and Apoorva Ramaswamy titled "Slow the Spread? Speeding It May Be Safer".

I suspect these guys are right.  If so, it provides another example of the incompetence, or worse, of Government and a whole bunch of other organizations and people.
-----------------------------------------
The Omicron variant is spreading across the globe, but so far the strain appears to be less deadly than its predecessors. That’s good news, but here’s a risk that policy makers in every country should appreciate: Policies designed to slow the spread of Omicron may end up creating a supervariant that is more infectious, more virulent and more resistant to vaccines. That would be a man-made disaster.
To minimize that risk, policy makers must tolerate the rapid spread of milder variants. This will require difficult trade-offs, but it will save lives in the long run. We should end mask mandates and social distancing in most settings not because they don’t slow the spread—the usual argument against such measures—but because they probably do.

To understand why, first consider an important scientific distinction, between antigenic drift and antigenic shift. Antigens are molecules—such as the spike protein of SARS-CoV-2—that an immune system detects as foreign. The host immune system then mounts a response.

“Antigenic drift” describes the process by which single-point mutations (small genetic errors) randomly occur during the viral replication process. The result is minor alterations to antigens such as the spike protein. If a point mutation makes the virus less likely to survive, that variant gradually dies off. But if the mutation confers an incremental survival advantage—say, the ability to spread more quickly from one cell to another—then that strain becomes more likely to spread through the population.

Antigenic drift is a gradual, varying process: A single-point mutation alters one peptide, or building block, of a larger protein. Hosts with immunity against a prior strain generally enjoy at least partial immunity against “drifted” variants. This is called “cross-protection.”

Each time an immune host is exposed to a slightly different antigenic variant, the host can tweak its immune response without becoming severely ill. And the more similar the new strain is to the last version the person fought off, the less risky that strain will be to the host.

By contrast, “antigenic shift” refers to a discontinuous quantum leap from one antigen (or set of antigens) to a very different antigen (or set of antigens). New viral strains—such as those that jump from one species to another—tend to emerge from antigenic shift. The biological causes of antigenic shift are often different from those of antigenic drift. For example, the physical swap of whole sections of the genome leads to more significant changes to viral genes than those caused by individual point mutations.

But there’s a sorites paradox: How many unique point mutations collectively constitute an antigenic shift, especially when human hosts are deprived of opportunities to update their immune response to “drifted” variants?

Vaccinated and naturally immune people can revamp their immune response to new viral strains created by antigenic drift. Yet social distancing and masking increase the risk of vaccine-resistant strains from antigenic shift by minimizing opportunities for the vaccinated and naturally immune to tailor their immune responses through periodic exposures to incrementally “drifted” variants.

This is a familiar notion in virology. Take the rise of severe shingles cases over the past decade, partly a result of the widespread use of the chickenpox vaccine. Shingles and chickenpox are caused by the same virus. Before widespread use of the chickenpox vaccine, parents regularly updated their own immunity by getting exposed to chickenpox from their children, or from other adults who were exposed by children. But now that most children are vaccinated against chickenpox and don’t contract it, older adults suffer from more severe cases of shingles.

The absolute risk of a more virulent strain of SARS-CoV-2 is low. That’s because viruses “care” more about propagating themselves than about killing the host: Most viruses evolve to become more infectious and less virulent. But this is only a rule of thumb, not a biological law. Like any trend, we should expect a distribution of outcomes around the modal one—and the more iterations you allow, the more likely you are to get an unlikely outcome. Enforcing social-distancing policies amid widespread vaccination makes the emergence of a vaccine-resistant superstrain more likely.

Why not prepare for this outcome simply by developing new vaccines against novel strains more quickly? Because even mRNA vaccines can’t be developed fast enough to outrun a vaccine-resistant supervariant. On Dec. 8, Pfizer committed to delivering its first batch of new vaccines that cover the Omicron variant within 100 days. Yet by mid-March, a significant percentage of the U.S. population will have already been infected with Omicron.

Meanwhile, mask mandates and social-distancing measures will have created fertile ground for new variants that evade vaccination even more effectively. Significant antigenic shifts may create new strains that are increasingly difficult to target with vaccines at all. There are no vaccines for many viruses, despite decades of effort to develop them.

Will relaxing restrictions come at the cost of more hospitalizations and deaths as the next variant starts to spread? Perhaps, but it would reduce the risk of a worst-case scenario and greater loss of life in the long run.

The most important step in fighting the Covid-19 pandemic was the distribution of vaccines. With this milestone now achieved, the global response should shift from preventing the spread to minimizing the probability of an antigenic shift. Whether SARS-CoV-2 was made in a lab is the subject of debate, but let’s make sure we don’t manufacture an even more dangerous strain of the virus with misguided policies.

Monday, January 10, 2022

The 60-Year-Old Scientific Screwup That Helped Covid Kill

 Here is a great article by Megan Molteni at wired.com.

Another case of experts being both wrong and obstinate - both in and out of Government.

---------------------------------------

EARLY ONE MORNING, Linsey Marr tiptoed to her dining room table, slipped on a headset, and fired up Zoom. On her computer screen, dozens of familiar faces began to appear. She also saw a few people she didn’t know, including Maria Van Kerkhove, the World Health Organization’s technical lead for Covid-19, and other expert advisers to the WHO. It was just past 1 pm Geneva time on April 3, 2020, but in Blacksburg, Virginia, where Marr lives with her husband and two children, dawn was just beginning to break.

Marr is an aerosol scientist at Virginia Tech and one of the few in the world who also studies infectious diseases. To her, the new coronavirus looked as if it could hang in the air, infecting anyone who breathed in enough of it. For people indoors, that posed a considerable risk. But the WHO didn’t seem to have caught on. Just days before, the organization had tweeted “FACT: #COVID19 is NOT airborne.” That’s why Marr was skipping her usual morning workout to join 35 other aerosol scientists. They were trying to warn the WHO it was making a big mistake.

Over Zoom, they laid out the case. They ticked through a growing list of superspreading events in restaurants, call centers, cruise ships, and a choir rehearsal, instances where people got sick even when they were across the room from a contagious person. The incidents contradicted the WHO’s main safety guidelines of keeping 3 to 6 feet of distance between people and frequent handwashing. If SARS-CoV-2 traveled only in large droplets that immediately fell to the ground, as the WHO was saying, then wouldn’t the distancing and the handwashing have prevented such outbreaks? Infectious air was the more likely culprit, they argued. But the WHO’s experts appeared to be unmoved. If they were going to call Covid-19 airborne, they wanted more direct evidence—proof, which could take months to gather, that the virus was abundant in the air. Meanwhile, thousands of people were falling ill every day.

On the video call, tensions rose. At one point, Lidia Morawska, a revered atmospheric physicist who had arranged the meeting, tried to explain how far infectious particles of different sizes could potentially travel. One of the WHO experts abruptly cut her off, telling her she was wrong, Marr recalls. His rudeness shocked her. “You just don’t argue with Lidia about physics,” she says.

Morawska had spent more than two decades advising a different branch of the WHO on the impacts of air pollution. When it came to flecks of soot and ash belched out by smokestacks and tailpipes, the organization readily accepted the physics she was describing—that particles of many sizes can hang aloft, travel far, and be inhaled. Now, though, the WHO’s advisers seemed to be saying those same laws didn’t apply to virus-laced respiratory particles. To them, the word airborne only applied to particles smaller than 5 microns. Trapped in their group-specific jargon, the two camps on Zoom literally couldn’t understand one another.

When the call ended, Marr sat back heavily, feeling an old frustration coiling tighter in her body. She itched to go for a run, to pound it out footfall by footfall into the pavement. “It felt like they had already made up their minds and they were just entertaining us,” she recalls. Marr was no stranger to being ignored by members of the medical establishment. Often seen as an epistemic trespasser, she was used to persevering through skepticism and outright rejection. This time, however, so much more than her ego was at stake. The beginning of a global pandemic was a terrible time to get into a fight over words. But she had an inkling that the verbal sparring was a symptom of a bigger problem—that outdated science was underpinning public health policy. She had to get through to them. But first, she had to crack the mystery of why their communication was failing so badly.

MARR SPENT THE first several years of her career studying air pollution, just as Morawska had. But her priorities began to change in the late 2000s, when Marr sent her oldest child off to day care. That winter, she noticed how waves of runny noses, chest colds, and flu swept through the classrooms, despite the staff’s rigorous disinfection routines. “Could these common infections actually be in the air?” she wondered. Marr picked up a few introductory medical textbooks to satisfy her curiosity.

According to the medical canon, nearly all respiratory infections transmit through coughs or sneezes: Whenever a sick person hacks, bacteria and viruses spray out like bullets from a gun, quickly falling and sticking to any surface within a blast radius of 3 to 6 feet. If these droplets alight on a nose or mouth (or on a hand that then touches the face), they can cause an infection. Only a few diseases were thought to break this droplet rule. Measles and tuberculosis transmit a different way; they’re described as “airborne.” Those pathogens travel inside aerosols, microscopic particles that can stay suspended for hours and travel longer distances. They can spread when contagious people simply breathe.

The distinction between droplet and airborne transmission has enormous consequences. To combat droplets, a leading precaution is to wash hands frequently with soap and water. To fight infectious aerosols, the air itself is the enemy. In hospitals, that means expensive isolation wards and N95 masks for all medical staff.

The books Marr flipped through drew the line between droplets and aerosols at 5 microns. A micron is a unit of measurement equal to one-millionth of a meter. By this definition, any infectious particle smaller than 5 microns in diameter is an aerosol; anything bigger is a droplet. The more she looked, the more she found that number. The WHO and the US Centers for Disease Control and Prevention also listed 5 microns as the fulcrum on which the droplet-aerosol dichotomy toggled.

There was just one literally tiny problem: “The physics of it is all wrong,” Marr says. That much seemed obvious to her from everything she knew about how things move through air. Reality is far messier, with particles much larger than 5 microns staying afloat and behaving like aerosols, depending on heat, humidity, and airspeed. “I’d see the wrong number over and over again, and I just found that disturbing,” she says. The error meant that the medical community had a distorted picture of how people might get sick.

Epidemiologists have long observed that most respiratory bugs require close contact to spread. Yet in that small space, a lot can happen. A sick person might cough droplets onto your face, emit small aerosols that you inhale, or shake your hand, which you then use to rub your nose. Any one of those mechanisms might transmit the virus. “Technically, it’s very hard to separate them and see which one is causing the infection,” Marr says. For long-distance infections, only the smallest particles could be to blame. Up close, though, particles of all sizes were in play. Yet, for decades, droplets were seen as the main culprit.

Marr decided to collect some data of her own. Installing air samplers in places such as day cares and airplanes, she frequently found the flu virus where the textbooks said it shouldn’t be—hiding in the air, most often in particles small enough to stay aloft for hours. And there was enough of it to make people sick.

In 2011, this should have been major news. Instead, the major medical journals rejected her manuscript. Even as she ran new experiments that added evidence to the idea that influenza was infecting people via aerosols, only one niche journal, The Journal of the Royal Society Interface, was consistently receptive to her work. In the siloed world of academia, aerosols had always been the domain of engineers and physicists, and pathogens purely a medical concern; Marr was one of the rare people who tried to straddle the divide. “I was definitely fringe,” she says.

Thinking it might help her overcome this resistance, she’d try from time to time to figure out where the flawed 5-micron figure had come from. But she always got stuck. The medical textbooks simply stated it as fact, without a citation, as if it were pulled from the air itself. Eventually she got tired of trying, her research and life moved on, and the 5-micron mystery faded into the background. Until, that is, December 2019, when a paper crossed her desk from the lab of Yuguo Li.

An indoor-air researcher at the University of Hong Kong, Li had made a name for himself during the first SARS outbreak, in 2003. His investigation of an outbreak at the Amoy Gardens apartment complex provided the strongest evidence that a coronavirus could be airborne. But in the intervening decades, he’d also struggled to convince the public health community that their risk calculus was off. Eventually, he decided to work out the math. Li’s elegant simulations showed that when a person coughed or sneezed, the heavy droplets were too few and the targets—an open mouth, nostrils, eyes—too small to account for much infection. Li’s team had concluded, therefore, that the public health establishment had it backward and that most colds, flu, and other respiratory illnesses must spread through aerosols instead.

Their findings, they argued, exposed the fallacy of the 5-micron boundary. And they’d gone a step further, tracing the number back to a decades-old document the CDC had published for hospitals. Marr couldn’t help but feel a surge of excitement. A journal had asked her to review Li’s paper, and she didn’t mask her feelings as she sketched out her reply. On January 22, 2020, she wrote, “This work is hugely important in challenging the existing dogma about how infectious disease is transmitted in droplets and aerosols.”

Even as she composed her note, the implications of Li’s work were far from theoretical. Hours later, Chinese government officials cut off any travel in and out of the city of Wuhan, in a desperate attempt to contain an as-yet-unnamed respiratory disease burning through the 11-million-person megalopolis. As the pandemic shut down country after country, the WHO and the CDC told people to wash their hands, scrub surfaces, and maintain social distance. They didn’t say anything about masks or the dangers of being indoors.

A FEW DAYS after the April Zoom meeting with the WHO, Marr got an email from another aerosol scientist who had been on the call, an atmospheric chemist at the University of Colorado Boulder named Jose-Luis Jimenez. He’d become fixated on the WHO recommendation that people stay 3 to 6 feet apart from one another. As far as he could tell, that social distancing guideline seemed to be based on a few studies from the 1930s and ’40s. But the authors of those experiments actually argued for the possibility of airborne transmission, which by definition would involve distances over 6 feet. None of it seemed to add up.

Marr told him about her concerns with the 5-micron boundary and suggested that their two issues might be linked. If the 6-foot guideline was built off of an incorrect definition of droplets, the 5-micron error wasn’t just some arcane detail. It seemed to sit at the heart of the WHO’s and the CDC’s flawed guidance. Finding its origin suddenly became a priority. But to hunt it down, Marr, Jimenez, and their collaborators needed help. They needed a historian.

Luckily, Marr knew one, a Virginia Tech scholar named Tom Ewing who specialized in the history of tuberculosis and influenza. They talked. He suggested they bring on board a graduate student he happened to know who was good at this particular form of forensics. The team agreed. “This will be very interesting,” Marr wrote in an email to Jimenez on April 13. “I think we’re going to find a house of cards.”

The graduate student in question was Katie Randall. Covid had just dealt her dissertation a big blow—she could no longer conduct in-person research, so she’d promised her adviser she would devote the spring to sorting out her dissertation and nothing else. But then an email from Ewing arrived in her inbox describing Marr’s quest and the clues her team had so far unearthed, which were “layered like an archaeology site, with shards that might make up a pot,” he wrote. That did it. She was in.

Randall had studied citation tracking, a type of scholastic detective work where the clues aren’t blood sprays and stray fibers but buried references to long-ago studies, reports, and other records. She started digging where Li and the others had left off—with various WHO and CDC papers. But she didn’t find any more clues than they had. Dead end.

She tried another tack. Everyone agreed that tuberculosis was airborne. So she plugged “5 microns” and “tuberculosis” into a search of the CDC’s archives. She scrolled and scrolled until she reached the earliest document on tuberculosis prevention that mentioned aerosol size. It cited an out-of-print book written by a Harvard engineer named William Firth Wells. Published in 1955, it was called Airborne Contagion and Air Hygiene. A lead!

In the Before Times, she would have acquired the book through interlibrary loan. With the pandemic shutting down universities, that was no longer an option. On the wilds of the open internet, Randall tracked down a first edition from a rare book seller for $500—a hefty expense for a side project with essentially no funding. But then one of the university’s librarians came through and located a digital copy in Michigan. Randall began to dig in.

In the words of Wells’ manuscript, she found a man at the end of his career, rushing to contextualize more than 23 years of research. She started reading his early work, including one of the studies Jimenez had mentioned. In 1934, Wells and his wife, Mildred Weeks Wells, a physician, analyzed air samples and plotted a curve showing how the opposing forces of gravity and evaporation acted on respiratory particles. The couple’s calculations made it possible to predict the time it would take a particle of a given size to travel from someone’s mouth to the ground. According to them, particles bigger than 100 microns sank within seconds. Smaller particles stayed in the air. Randall paused at the curve they’d drawn. To her, it seemed to foreshadow the idea of a droplet-aerosol dichotomy, but one that should have pivoted around 100 microns, not 5.

The book was long, more than 400 pages, and Randall was still on the hook for her dissertation. She was also helping her restless 6-year-old daughter navigate remote kindergarten, now that Covid had closed her school. So it was often not until late at night, after everyone had gone to bed, that she could return to it, taking detailed notes about each day’s progress.

One night she read about experiments Wells did in the 1940s in which he installed air-disinfecting ultraviolet lights inside schools. In the classrooms with UV lamps installed, fewer kids came down with the measles. He concluded that the measles virus must have been in the air. Randall was struck by this. She knew that measles didn’t get recognized as an airborne disease until decades later. What had happened?

Part of medical rhetoric is understanding why certain ideas take hold and others don’t. So as spring turned to summer, Randall started to investigate how Wells’ contemporaries perceived him. That’s how she found the writings of Alexander Langmuir, the influential chief epidemiologist of the newly established CDC. Like his peers, Langmuir had been brought up in the Gospel of Personal Cleanliness, an obsession that made handwashing the bedrock of US public health policy. He seemed to view Wells’ ideas about airborne transmission as retrograde, seeing in them a slide back toward an ancient, irrational terror of bad air—the “miasma theory” that had prevailed for centuries. Langmuir dismissed them as little more than “interesting theoretical points.”

But at the same time, Langmuir was growing increasingly preoccupied by the threat of biological warfare. He worried about enemies carpeting US cities in airborne pathogens. In March 1951, just months after the start of the Korean War, Langmuir published a report in which he simultaneously disparaged Wells’ belief in airborne infection and credited his work as being foundational to understanding the physics of airborne infection.

How curious, Randall thought. She kept reading.

In the report, Langmuir cited a few studies from the 1940s looking at the health hazards of working in mines and factories, which showed the mucus of the nose and throat to be exceptionally good at filtering out particles bigger than 5 microns. The smaller ones, however, could slip deep into the lungs and cause irreversible damage. If someone wanted to turn a rare and nasty pathogen into a potent agent of mass infection, Langmuir wrote, the thing to do would be to formulate it into a liquid that could be aerosolized into particles smaller than 5 microns, small enough to bypass the body’s main defenses. Curious indeed. Randall made a note.

When she returned to Wells’ book a few days later, she noticed he too had written about those industrial hygiene studies. They had inspired Wells to investigate what role particle size played in the likelihood of natural respiratory infections. He designed a study using tuberculosis-causing bacteria. The bug was hardy and could be aerosolized, and if it landed in the lungs, it grew into a small lesion. He exposed rabbits to similar doses of the bacteria, pumped into their chambers either as a fine (smaller than 5 microns) or coarse (bigger than 5 microns) mist. The animals that got the fine treatment fell ill, and upon autopsy it was clear their lungs bulged with lesions. The bunnies that received the coarse blast appeared no worse for the wear.

For days, Randall worked like this—going back and forth between Wells and Langmuir, moving forward and backward in time. As she got into Langmuir’s later writings, she observed a shift in his tone. In articles he wrote up until the 1980s, toward the end of his career, he admitted he had been wrong about airborne infection. It was possible.

A big part of what changed Langmuir’s mind was one of Wells’ final studies. Working at a VA hospital in Baltimore, Wells and his collaborators had pumped exhaust air from a tuberculosis ward into the cages of about 150 guinea pigs on the building’s top floor. Month after month, a few guinea pigs came down with tuberculosis. Still, public health authorities were skeptical. They complained that the experiment lacked controls. So Wells’ team added another 150 animals, but this time they included UV lights to kill any germs in the air. Those guinea pigs stayed healthy. That was it, the first incontrovertible evidence that a human disease—tuberculosis—could be airborne, and not even the public health big hats could ignore it.

The groundbreaking results were published in 1962. Wells died in September of the following year. A month later, Langmuir mentioned the late engineer in a speech to public health workers. It was Wells, he said, that they had to thank for illuminating their inadequate response to a growing epidemic of tuberculosis. He emphasized that the problematic particles—the ones they had to worry about—were smaller than 5 microns.

Inside Randall’s head, something snapped into place. She shot forward in time, to that first tuberculosis guidance document where she had started her investigation. She had learned from it that tuberculosis is a curious critter; it can only invade a subset of human cells in the deepest reaches of the lungs. Most bugs are more promiscuous. They can embed in particles of any size and infect cells all along the respiratory tract.


What must have happened, she thought, was that after Wells died, scientists inside the CDC conflated his observations. They plucked the size of the particle that transmits tuberculosis out of context, making 5 microns stand in for a general definition of airborne spread. Wells’ 100-micron threshold got left behind. “You can see that the idea of what is respirable, what stays airborne, and what is infectious are all being flattened into this 5-micron phenomenon,” Randall says. Over time, through blind repetition, the error sank deeper into the medical canon. The CDC did not respond to multiple requests for comment.

In June, she Zoomed into a meeting with the rest of the team to share what she had found. Marr almost couldn’t believe someone had cracked it. “It was like, ‘Oh my gosh, this is where the 5 microns came from?!’” After all these years, she finally had an answer. But getting to the bottom of the 5-micron myth was only the first step. Dislodging it from decades of public health doctrine would mean convincing two of the world’s most powerful health authorities not only that they were wrong but that the error was incredibly—and urgently—consequential.

WHILE RANDALL WAS digging through the past, her collaborators were planning a campaign. In July, Marr and Jimenez went public, signing their names to an open letter addressed to public health authorities, including the WHO. Along with 237 other scientists and physicians, they warned that without stronger recommendations for masking and ventilation, airborne spread of SARS-CoV-2 would undermine even the most vigorous testing, tracing, and social distancing efforts.

The news made headlines. And it provoked a strong backlash. Prominent public health personalities rushed to defend the WHO. Twitter fights ensued. Saskia Popescu, an infection-prevention epidemiologist who is now a biodefense professor at George Mason University, was willing to buy the idea that people were getting Covid by breathing in aerosols, but only at close range. That’s not airborne in the way public health people use the word. “It’s a very weighted term that changes how we approach things,” she says. “It’s not something you can toss around haphazardly.”

Days later, the WHO released an updated scientific brief, acknowledging that aerosols couldn’t be ruled out, especially in poorly ventilated places. But it stuck to the 3- to 6-foot rule, advising people to wear masks indoors only if they couldn’t keep that distance. Jimenez was incensed. “It is misinformation, and it is making it difficult for ppl to protect themselves,” he tweeted about the update. “E.g. 50+ reports of schools, offices forbidding portable HEPA units because of @CDCgov and @WHO downplaying aerosols.”

While Jimenez and others sparred on social media, Marr worked behind the scenes to raise awareness of the misunderstandings around aerosols. She started talking to Kimberly Prather, an atmospheric chemist at UC San Diego, who had the ear of prominent public health leaders within the CDC and on the White House Covid Task Force. In July, the two women sent slides to Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. One of them showed the trajectory of a 5-micron particle released from the height of the average person’s mouth. It went farther than 6 feet—hundreds of feet farther. A few weeks later, speaking to an audience at Harvard Medical School, Fauci admitted that the 5-micron distinction was wrong—and had been for years. “Bottom line is, there is much more aerosol than we thought,” he said. (Fauci declined to be interviewed for this story.)

Still, the droplet dogma reigned. In early October, Marr and a group of scientists and doctors published a letter in Science urging everyone to get on the same page about how infectious particles move, starting with ditching the 5-micron threshold. Only then could they provide clear and effective advice to the public. That same day, the CDC updated its guidance to acknowledge that SARS-CoV-2 can spread through long-lingering aerosols. But it didn’t emphasize them.

That winter, the WHO also began to talk more publicly about aerosols. On December 1, the organization finally recommended that everyone always wear a mask indoors wherever Covid-19 is spreading. In an interview, the WHO’s Maria Van Kerkhove said that the change reflects the organization’s commitment to evolving its guidance when the scientific evidence compels a change. She maintains that the WHO has paid attention to airborne transmission from the beginning—first in hospitals, then at places such as bars and restaurants. “The reason we’re promoting ventilation is that this virus can be airborne,” Van Kerkhove says. But because that term has a specific meaning in the medical community, she admits to avoiding it—and emphasizing instead the types of settings that pose the biggest risks. Does she think that decision has harmed the public health response, or cost lives? No, she says. “People know what they need to do to protect themselves.”

Yet she admits it may be time to rethink the old droplet-airborne dichotomy. According to Van Kerkhove, the WHO plans to formally review its definitions for describing disease transmission in 2021.

For Yuguo Li, whose work had so inspired Marr, these moves have given him a sliver of hope. “Tragedy always teaches us something,” he says. The lesson he thinks people are finally starting to learn is that airborne transmission is both more complicated and less scary than once believed. SARS-CoV-2, like many respiratory diseases, is airborne, but not wildly so. It isn’t like measles, which is so contagious it infects 90 percent of susceptible people exposed to someone with the virus. And the evidence hasn’t shown that the coronavirus often infects people over long distances. Or in well-ventilated spaces. The virus spreads most effectively in the immediate vicinity of a contagious person, which is to say that most of the time it looks an awful lot like a textbook droplet-based pathogen.

For most respiratory diseases, not knowing which route caused an infection has not been catastrophic. But the cost has not been zero. Influenza infects millions each year, killing between 300,000 and 650,000 globally. And epidemiologists are predicting the next few years will bring particularly deadly flu seasons. Li hopes that acknowledging this history—and how it hindered an effective global response to Covid-19—will allow good ventilation to emerge as a central pillar of public health policy, a development that would not just hasten the end of this pandemic but beat back future ones.

To get a glimpse into that future, you need only peek into the classrooms where Li teaches or the Crossfit gym where Marr jumps boxes and slams medicine balls. In the earliest days of the pandemic, Li convinced the administrators at the University of Hong Kong to spend most of its Covid-19 budget on upgrading the ventilation in buildings and buses rather than on things such as mass Covid testing of students. Marr reviewed blueprints and HVAC schematics with the owner of her gym, calculating the ventilation rates and consulting on a redesign that moved workout stations outside and near doors that were kept permanently open. To date, no one has caught Covid at the gym. Li’s university, a school of 30,000 students, has recorded a total of 23 Covid-19 cases. Of course Marr’s gym is small, and the university benefited from the fact that Asian countries, scarred by the 2003 SARS epidemic, were quick to recognize aerosol transmission. But Marr's and Li’s swift actions could well have improved their odds. Ultimately, that’s what public health guidelines do: They tilt people and places closer to safety.

ON FRIDAY, APRIL 30, the WHO quietly updated a page on its website. In a section on how the coronavirus gets transmitted, the text now states that the virus can spread via aerosols as well as larger droplets. As Zeynep Tufekci noted in The New York Times, perhaps the biggest news of the pandemic passed with no news conference, no big declaration. If you weren’t paying attention, it was easy to miss.

But Marr was paying attention. She couldn’t help but note the timing. She, Li, and two other aerosol scientists had just published an editorial in The BMJ, a top medical journal, entitled “Covid-19 Has Redefined Airborne Transmission.” For once, she hadn’t had to beg; the journal’s editors came to her. And her team had finally posted their paper on the origins of the 5-micron error to a public preprint server.

In early May, the CDC made similar changes to its Covid-19 guidance, now placing the inhalation of aerosols at the top of its list of how the disease spreads. Again though, no news conference, no press release. But Marr, of course, noticed. That evening, she got in her car to pick up her daughter from gymnastics. She was alone with her thoughts for the first time all day. As she waited at a red light, she suddenly burst into tears. Not sobbing, but unable to stop the hot stream of tears pouring down her face. Tears of exhaustion, and relief, but also triumph. Finally, she thought, they’re getting it right, because of what we’ve done.

The light turned. She wiped the tears away. Someday it would all sink in, but not today. Now, there were kids to pick up and dinner to eat. Something approaching normal life awaited.

Lack of scientific integrity in the White House

 Here is Roger Pielke at substack.com.

It's always important to remember that scientists are just people.  Their expertise makes it possible for them to provide useful information and ideas - but also makes it possible for them to deceive more effectively.

--------------------------------------------

Recently, the Proceedings of the National Academy of Sciences (PNAS) retracted a highly influential paper on marine protected areas and fishing due to identification of significant errors that undercut the paper’s results as well as significant irregularities in the peer review process. What makes this particular retraction of unusual interest is that the irregularities in the PNAS peer review process involve Dr. Jane Lubchenco, the White House official who is currently overseeing President Biden’s Scientific Integrity Task Force.

The paper, A global network of marine protected areas for food (Cabral et al. 2020, hereafter C20), was published by PNAS in October 2020, and has been highly reported on due to its perceived policy relevance. Dr. Lubchenco served as its editor for PNAS. That means that she was responsible for overseeing the paper’s journey through the peer review process, including the selection of reviewers. We now know that Dr. Lubchenco violated PNAS guidelines for conflict of interest, and not unknowingly or in a small way.

The details matter here, so I am going to explain them.

The issues in peer review of C20 at PNAS are not subtle. The authors of C20 included seven researchers with whom Dr. Lubchenco was collaborating on a different paper — Sala et al. 2021 (published in Nature, hereafter S21) — that built upon the results of C20. Even though C20 was published first (26 Oct 2020 vs 17 Mar 2021), S21 was actually submitted three weeks prior to C20 (17 Dec 2019 vs. 6 Jan 2020). So at the time that Dr. Lubchenco assumed the role of editor for C20, she had just submitted a different paper (S21) with seven authors of C20 that built upon the C20 results — S21 thus depended upon the successful publication of C20.

Already, this is an egregious violation of scientific integrity. It gets worse. One of Dr. Lubchenco’s co-authors on S21 who was also a co-author of C20 — the paper she was editing for PNAS — was her brother-in-law. These various conflicts were called to the attention of PNAS in April 2021 by Dr. Magnus Johnson, of the University of Hull in the UK, prompting an investigation.

The PNAS guidelines are completely clear (emphasis added):

A competing interest due to a personal association arises if you are asked to serve as editor or reviewer of a manuscript whose authors include a person with whom you had an association, such as a thesis advisor (or advisee), postdoctoral mentor (or mentee), or coauthor of a paper, within the last 48 months. When such a competing interest arises, you may not serve as editor or reviewer.

A competing interest due to personal association also arises if you are asked to serve as editor or reviewer of a manuscript whose authors include a person with whom you have a family relationship, such as a spouse, domestic partner, or parent–child relationship. When such a competing interest arises, you may not serve as editor or reviewer.


While Dr. Lubchenco should not have edited C20, to be completely fair to her, the “competing interest due to a personal association” guideline does not appear to be much enforced by PNAS. I was able to quickly identify multiple violations of this guideline via a simple search — here are one, two, three examples — the third of which also involves Dr. Lubchenco as editor. If PNAS retracts papers that violate its competing interests guidelines, they will no doubt find a rather large set of papers.

There is more to the story.

On November 17, 2020 Dr. Lubchenco testified before the House Natural Resources Committee, in support of Congressional legislation to establish protected areas in marine ecosystems, relying on C20. She did not disclose that she had shepherded the paper through peer review, nor did she disclose that she was a collaborator in the research. The unavoidable impression that this sequence of events gives is the creation of “policy-based evidence” — that is, evidence that is created for the explicit purpose of supporting particular policy or political outcomes, like the passing of legislation.

The impression of “policy-based evidence” is further supported by the fact that the failures of the peer review process in this instance are not just procedural, they are substantive as well. It turns out that the science of C20 is also fatally flawed. In an excellent and comprehensive post, Max Mossler goes into detail on the errors of C20, how they were identified, and how they also call into question the validity of S21 — which has also received an incredible amount of media and policy attention. Here I’ll just report his excellent bottom line:

Regardless of any conflict of interest, the science in both Cabral et al. and Sala et al. is critically flawed, but being used to advocate for public policy. Both follow a recent trend of publishing predictions that use a limited set of assumptions (in a very uncertain world) to produce global maps that get published in high-profile journals and garner considerable media and political attention.

Computer models are essential tools for science and management, but the accuracy of their predictions depends on both the quality of the data and the assumptions they are based on. Often, a problem is so complex that several assumptions may be equally plausible; readers need to be made aware when different assumptions lead to vastly different outcomes.

The Cabral et al. and Sala et al. papers disregard uncertainty in favor of set values for their model parameters. They don’t account for the enormous uncertainty in these parameters and don’t provide strong evidence that their choice of values was correct. The assumptions and parameters produce big headlines, but are fundamentally unhelpful for the future of ocean governance and sustainability. We expect policy-makers and resource managers to make decisions based on the best available science. Inconsistent and unrealistic assumptions are not that.


And if all of that is not bad enough, it still gets worse. S21 reports (inaccurately) that its projections are based on the IPCC SRES A2 scenario — which, for anyone who knows anything about climate scenarios, would have been an incredibly odd choice, not least because that scenario is more than 20 years old and rarely (if ever) used in research today. It turns out (if you dig deep enough) that C20 and S21 are in fact not based on the IPCC SRES A2 scenario, but instead on the implausible RCP8.5. That the authors don’t know the difference between A2 and RCP8.5 is itself problematic. That RCP8.5 is being used to generate predictions for use in marine/fisheries policy is even more problematic.

So we have quite a mess here. Going forward, here are some recommendations:

  • Nature should immediately evaluate S21 for retraction, as it is based on C20, which is now retracted. It is difficult to see how S21 can stand unretracted.
  • PNAS properly retracted C20, but the journal should also do a comprehensive audit to assess the extent of other violations of its conflict of interest guidelines. A cursory look suggests that such violations are not uncommon.
  • Given Dr. Lubchenco’s significant violations of PNAS policies to publish flawed research, and then using that flawed research to advocate for policy, the White House should reconsider her leadership role in its Scientific Integrity Task Force. Otherwise, it would be fair to ask if scientific integrity guidelines are optional, depending on your politics.

This episode provides good news and bad news. The good news is that it underscores that science is indeed self-correcting, even if that process takes a while. In the long run, better science defeats bad science. The bad news is that in the short term, leadership and institutions failed. This episode of fishy science is not over yet — PNAS, Nature and the White House still have important roles to play in ensuring scientific integrity. Watch this space.

Winter is coming: Researchers uncover the surprising cause of the little ice age

 From sciencedaily.com.

The link to the paper is here.

-----------------------------------------------

New research from the University of Massachusetts Amherst provides a novel answer to one of the persistent questions in historical climatology, environmental history and the earth sciences: what caused the Little Ice Age? The answer, we now know, is a paradox: warming.

The Little Ice Age was one of the coldest periods of the past 10,000 years, a period of cooling that was particularly pronounced in the North Atlantic region. This cold spell, whose precise timeline scholars debate, but which seems to have set in around 600 years ago, was responsible for crop failures, famines and pandemics throughout Europe, resulting in misery and death for millions. To date, the mechanisms that led to this harsh climate state have remained inconclusive. However, a new paper published recently in Science Advances gives an up-to-date picture of the events that brought about the Little Ice Age. Surprisingly, the cooling appears to have been triggered by an unusually warm episode.

When lead author Francois Lapointe, postdoctoral researcher and lecturer in geosciences at UMass Amherst, and Raymond Bradley, distinguished professor in geosciences at UMass Amherst, began carefully examining their 3,000-year reconstruction of North Atlantic sea surface temperatures, results of which were published in the Proceedings of the National Academy of Sciences in 2020, they noticed something surprising: a sudden change from very warm conditions in the late 1300s to unprecedented cold conditions in the early 1400s, only 20 years later.

Using many detailed marine records, Lapointe and Bradley discovered that there was an abnormally strong northward transfer of warm water in the late 1300s which peaked around 1380. As a result, the waters south of Greenland and the Nordic Seas became much warmer than usual. "No one has recognized this before," notes Lapointe.

Normally, there is always a transfer of warm water from the tropics to the Arctic. It's a well-known process called the Atlantic Meridional Overturning Circulation (AMOC), which is like a planetary conveyor belt. Typically, warm water from the tropics flows north along the coast of Northern Europe, and when it reaches higher latitudes and meets colder Arctic waters, it loses heat and becomes denser, causing the water to sink to the bottom of the ocean. This deep water then flows south along the coast of North America and continues on to circulate around the world.

But in the late 1300s, AMOC strengthened significantly, which meant that far more warm water than usual was moving north, which in turn caused rapid Arctic ice loss. Over the course of a few decades in the late 1300s and 1400s, vast amounts of ice were flushed out into the North Atlantic, which not only cooled the North Atlantic waters but also diluted their saltiness, ultimately causing AMOC to collapse. It is this collapse that then triggered a substantial cooling.
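The cause-and-effect chain here (cooling makes surface water dense enough to sink; a pulse of freshwater makes it too light to keep sinking) can be illustrated with a simplified linear equation of state for seawater. This is a generic textbook approximation with rough coefficient values, not anything taken from Lapointe and Bradley's paper.

```python
# Simplified linear equation of state for seawater (rough textbook values):
# colder water is denser, fresher water is lighter.
RHO_0 = 1027.0          # reference density, kg/m^3
T_0, S_0 = 10.0, 35.0   # reference temperature (deg C) and salinity (psu)
ALPHA = 2e-4            # thermal expansion coefficient, 1/deg C (approximate)
BETA = 8e-4             # haline contraction coefficient, 1/psu (approximate)

def density(temp_c: float, salinity_psu: float) -> float:
    """Linearized seawater density in kg/m^3."""
    return RHO_0 * (1 - ALPHA * (temp_c - T_0) + BETA * (salinity_psu - S_0))

print(density(10.0, 35.0))  # warm, salty water arriving from the south
print(density(4.0, 35.0))   # after losing heat to the atmosphere: denser, so it sinks
print(density(4.0, 34.0))   # after dilution by melting ice: freshening offsets the
                            # density gain from cooling, so the sinking weakens
```

In this toy picture, enough freshwater from flushed-out ice can keep the surface layer too buoyant to sink, which is the mechanism the researchers invoke for the collapse of the overturning circulation.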

Fast-forward to our own time: between the 1960s and 1980s, we have also seen a rapid strengthening of AMOC, which has been linked with persistently high pressure in the atmosphere over Greenland. Lapointe and Bradley think the same atmospheric situation occurred just prior to the Little Ice Age -- but what could have set off that persistent high-pressure event in the 1380s?

The answer, Lapointe discovered, is to be found in trees. Once the researchers compared their findings to a new record of solar activity revealed by radiocarbon isotopes preserved in tree rings, they discovered that unusually high solar activity was recorded in the late 1300s. Such solar activity tends to lead to high atmospheric pressure over Greenland.

At the same time, fewer volcanic eruptions were happening on earth, which means that there was less ash in the air. A "cleaner" atmosphere meant that the planet was more responsive to changes in solar output. "Hence the effect of high solar activity on the atmospheric circulation in the North-Atlantic was particularly strong," said Lapointe.

Lapointe and Bradley have been wondering whether such an abrupt cooling event could happen again in our age of global climate change. They note that there is now much less Arctic sea ice due to global warming, so an event like that in the early 1400s, involving sea ice transport, is unlikely. "However, we do have to keep an eye on the build-up of freshwater in the Beaufort Sea (north of Alaska) which has increased by 40% in the past two decades. Its export to the subpolar North Atlantic could have a strong impact on oceanic circulation," said Lapointe. "Also, persistent periods of high pressure over Greenland in summer have been much more frequent over the past decade and are linked with record-breaking ice melt. Climate models do not capture these events reliably and so we may be underestimating future ice loss from the ice sheet, with more freshwater entering the North Atlantic, potentially leading to a weakening or collapse of the AMOC." The authors conclude that there is an urgent need to address these uncertainties.

Sunday, January 02, 2022

A critique of the American Psychological Association’s transition from a science based to a politically based institution

 Here is Christopher Ferguson at Quillette.com. Christopher Ferguson is a professor of psychology at Stetson University in Florida.

CF is on target. What CF describes is a common phenomenon these days. It could be insanity. It could be cowardice. It could be power politics. But in no case is it science.

--------------------------------------

I’ve been a member of the American Psychological Association (APA) for years, and a fellow for the past six or seven years. I sat on their Council of Representatives, which theoretically sets policy for the APA, for three years. I am just ending my term as president of the APA’s Society for Media and Technology, where I have met many wonderful colleagues. Yet, at the end of 2021, I decided to resign my membership in the APA. My concern is that the APA no longer functions as an organization dedicated to science and good clinical practice. As a professional guild, perhaps it never did, but I believe it is now advancing causes that are actively harmful and I can no longer be a part of it.

I originally became engaged with the APA in a futile effort to “fix from within.” Much of this focused on the APA’s deeply misleading policy statements in my own area of research: violence in video games. The APA maintains a policy statement linking such games to aggression, despite over 200 scholars asking them to avoid making such statements, a reanalysis of the meta-study on which the policy was based finding it to be deeply flawed, and the APA’s own Society for Media and Technology asking them to retract it. Other policy statements related to research areas I’m familiar with, such as spanking, appear to be similarly flawed, overstating the certainty of harmful effects.

In the clinical realm, the APA’s advice has similarly been questionable. A 2017 recommendation highlighted Cognitive Behavioral Therapy (CBT; in which I am myself primarily trained) as the treatment of choice for Post-Traumatic Stress Disorder. It remains in effect despite several meta-analyses subsequently finding that CBT has little benefit over other therapies. More controversial were practice guidelines for men and boys, which drew deeply from feminist theories, dwelled on topics of patriarchy, intersectionality, and privilege, and arguably disparaged men and families from traditional backgrounds. This guideline is actively harmful to the degree that it both misguides therapy in favor of an ideological worldview and likely discourages men and families from more traditional backgrounds from seeking therapy.

The ideological capture of the men and boys guideline in particular should have been a red flag of what was to follow: a complete capitulation to far-left ideology following the murder of George Floyd. That murder raised legitimate questions not only of criminal justice reform (of which I am a supporter) but also reignited simmering debates about race. Such conversations are understandably emotionally fraught and often ideological, with deep right-left divides on the topic. There’s a wide range of space between believing the US is still mired in Jim Crow and that it is a racial utopia, but it is often hard to guide conversation into that constructive middle ground, where nuanced and data-driven conversations can be difficult but productive. What we don’t need is our science organizations going all-in on one side of our polarized divide and stoking furor with hyperbolic statements. Unfortunately, that is exactly what the APA and other left-leaning organizations did.

In May 2020, the APA’s then-president (the position is largely honorary, rotating each year), Sandra L. Shullman, referred to the US experiencing a “racism pandemic.” The second word is basically a cliché, obviously borrowing the buzzword from the COVID-19 era, which had just hit the US two months earlier. Shullman, speaking officially for the APA, went on to say, “The deaths of innocent black people targeted specifically because of their race—often by police officers—are both deeply shocking and shockingly routine. If you’re black in America—and especially if you are a black male—it’s not safe to go birding in Central Park, to meet friends at a Philadelphia Starbucks, to pick up trash in front of your own home in Colorado or to go shopping almost anywhere.”

These are terrifying words. They’re also at best debatable, arguably simply untrue. According to the Washington Post’s database of police shootings, shootings of unarmed black citizens are rare. There were 18 in 2020, the year Shullman was writing, and only four as of the last week in 2021. The issue of policing and race is nuanced. As scholars such as John McWhorter and Wilfred Reilly have pointed out, more unarmed whites than blacks are killed by police every year (left out of much of this is how infrequently Asian citizens are shot compared to either whites or blacks). However, most news agencies ignore white victims of police violence, creating an availability heuristic, wherein the public assumes black victims of police violence are exponentially more numerous than they are, while white victims are underestimated. The APA should be aware of the availability heuristic; after all it’s a psychological concept, yet their language contributes to it.

Proportionally, black individuals are fatally shot by police more than whites (though, again, Asians less than either), but proportionally black individuals are also overrepresented in the perpetration of violent crime and in violence toward police. To clarify, I am convinced that the evidence suggests that class rather than race is actually the key variable we should be considering, whether we’re talking about perpetrators of crime, or victims of police brutality. Every victim of police brutality is one victim too many, whatever their ethnicity. But these are difficult, complex, and nuanced conversations to have, and we need steady hands to guide us.
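To make the counts-versus-rates distinction in the last two paragraphs concrete, here is a minimal arithmetic sketch. The numbers are entirely hypothetical placeholders, not real data; the only point is that a group can account for fewer incidents in absolute terms yet have a higher per-capita rate when its population is much smaller.

```python
# Hypothetical illustration of counts vs. per-capita rates (no real data).
def rate_per_million(incidents: int, population: int) -> float:
    return incidents / population * 1_000_000

group_a = {"incidents": 30, "population": 200_000_000}  # larger group, more incidents
group_b = {"incidents": 20, "population": 40_000_000}   # smaller group, fewer incidents

print(rate_per_million(**group_a))  # 0.15 incidents per million people
print(rate_per_million(**group_b))  # 0.50 per million: fewer incidents, higher rate
```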

Instead, the APA threw gasoline on the fire. The idea that black citizens can’t go outside without being shot by police is statistically untrue, but also inflames racial tensions and, ironically, creates anxiety in minority communities. Unfortunately, homicides and other violent crimes have soared in US cities since May 2020, often hitting low-income neighborhoods and including the deaths of multiple children of color, something the APA has been, to my knowledge, conspicuously silent on. My concern is that their rhetoric in race, by delegitimizing policing and promoting false narratives about race and policing, has made the APA unintentionally complicit in this phenomenon.

The APA has continued to double down. This year the APA released an apology for systemic racism, declared a mission to combat systemic racism in the US, and adopted a policy dedicated to combating health inequities, which it sees as the product of racism. All of these are filled with leftist jargon and assumptions drawn from progressive worldviews, and are short on clear evidence or even definitions. Put simply, these are statements of leftist ideology, not science, nor even good clinical practice.

As apologies go in our current Twitter-infused culture, the APA’s apology was promptly rejected by the Association of Black Psychologists (ABP). The ABP saw the APA apology as not going far enough, and as performative. I disagree with the ABP’s worldview of the modern US, but I do agree with them that the APA’s apology was probably performative (the two positions are not mutually exclusive). It fits well with my experience of the APA’s miscommunication of science, not to mention their legacy of changing their ethics code to allow psychologists to participate in harsh interrogations of detainees at Guantanamo Bay, something that only came to light six years ago. Several psychologists later sued the APA for, effectively, throwing them under the bus in the whole affair. But this situation seems an example of the Twitter-verse apology treadmill, wherein capitulation on one point simply drives anger-mongers to push the goalposts further along or simply chums the waters of outrage with more blood. “We demand your apology” almost inevitably shifts to “Your apology wasn’t good enough.”

More recently, the APA announced a list of “inclusive language,” adding to the language policing that has become common in left spaces from journalism to the American Medical Association. “Mentally ill” is replaced with the clunky “person living with a mental health condition” and “prostitute” with “person who engages in sex work.” We’ll no longer have the elderly or seniors (“older adults” or “persons 65 years and older”). Just to make the “person with” format confusing, “person with deafness” is out (“deaf person”), as is “person with blindness” (“blind person”). Advocating color-blindness is out, as are caucasians (“White” or “European” is preferred). We’re not to talk about birth sex or people being born a boy or girl (“assigned female/male at birth” is the language of choice now). There are no more poor people, just “people whose incomes are below the federal poverty threshold.” We’re not to use words like “pipeline” (“triggering” to Native Americans given controversies over fuel oil pipelines on Native lands), “spirit animal” (use “animal I would most like to be” instead, which isn’t really the same thing), or “tribe.” “Violent” language like “killing it” or “take a stab at it” is to be avoided. A lot of this is obvious safetyism, which, I worry, by treating people like they’re made of spun glass and incentivizing outrage and offense, will contribute to escalating mental health crises. But, as others have pointed out, it’s also elitist, as most people couldn’t hope to keep up with the ever-changing language rules of the academic elite.

In fairness, the APA is hardly unique in its ostensible capture by wokeness. The British Psychological Society, in a statement uncritically quoting controversial “anti-racism” figure Ibram Kendi and speaking of Covid said, “It arrived in a society beset with systemic racism, inequity and oppression of minority and marginalised groups…” In 2021, a UK government report by a commission consisting mainly of scholars of color concluded that the evidence for systemic racism in the UK was lacking. In response, the BPS doubled down saying “We are particularly concerned that the re-traumatising of Black, Asian and Minority Ethnic people through a denial of their lived experience, will have an adverse psychological impact.” Yet, lived experience (e.g., anecdote) both varies widely within groups and is generally a poor source of information. We should certainly listen to people’s views and experiences, and these can guide research, but they shouldn’t trump data.

The BPS has turned its accusations onto itself as well. BPS Chief Executive Sarb Bajwa mused, “Are we institutionally racist? I think my answer would be that, if it feels like we are, then we probably are.” These kinds of public confessions swept leftist institutions in 2020, often without any clarity about what the statements meant or evidence to support them. They took on something of a quasi-religious, revival-like furor. It’s worth noting that such statements don’t merely speak to historical racism, which would be fair to acknowledge, but explicitly state that some of society’s most progressive institutions remain institutionally racist to the present day.

In August 2020, the BPS publication the Psychologist, edited by Dr. Jon Sutton, published a letter by Dr. Kirsty Miller criticizing the BPS’s increased politicization and deviation from good scientific and clinical practice (the letter and exchange can be found on Dr. Miller’s website). The expected Twitter storm naturally ensued, during which no one came out looking the better for it, but Dr. Sutton decided to retract Dr. Miller’s letter, a decision that, certainly in my opinion, is political censorship, however it might otherwise be explained. The Psychologist subsequently published an issue that focused on systemic racism and presented only views in support of the concept. This is unfortunate, as I have always respected the Psychologist (and Dr. Sutton), particularly for its bravery in considering controversial topics and views. This is needed for any actual conversation on systemic racism. But like so many left institutions, rather than fostering a nuanced and complex conversation on a controversial topic, the Psychologist has eschewed this role in favor of promoting a single moralistic worldview and shaming those who disagree.

To be fair to the Psychologist, they did publish (to my knowledge) one subsequent critical letter by Dr. Lewis Mitchell who called for an evidence-based approach to these controversial questions. Dr. Sutton’s reply to Dr. Mitchell stated “…we have always been very open about our desire to see constructive, evidence-based, psychological conversation on these topics” but then pivoted to say “Of course we want scientific rigour. But at the same time, we are not seeking a debate over whether or not racism exists in our society. The evidence for that is all around us … And we will never invalidate personal experience by demanding ‘where’s your scientific evidence?’” This, of course, is a very strange argument to come from scientists and highlights the very anti-science nature of the current sociopolitical moment.

To be explicit, I worry that capitulation to the kind of wokeness that has permeated left-leaning institutions is akin to a kind of virus, and that it actually tokenizes and harms historically marginalized communities, increases polarization and racial discord, and obstructs data-driven progress on critical issues such as criminal justice reform and income inequality. What strikes me about all this is that these types of turmoil, whether in psychology, academia, journalism, or even role-playing games, are happening largely in elite, progressive spaces. Scholars such as Michael Lind and Batya Ungar-Sargon suggest that much of the current narrative on race (whether neoracist identitarianism from the Left or the xenophobia of the Right) is a proxy for class struggles, with elites in politics, business and academia using this narrative to divide working-class people of all ethnicities. One need only look at the APA’s decision, communicated via exchanges on a division leaders’ listserv in June 2020, to eliminate approximately 50 lower-level staff positions, but without reducing executive-level pay. Interestingly, comparing their executive salaries from 2019 tax documents to draft 2020 tax documents provided to me by the APA treasurer, APA executives received significant raises in the same calendar year they let multiple lower-level employees go. For instance, APA CEO Arthur Evans made $821,000 in total compensation in 2020.

I’d argue the 2020 moment isn’t really about race or social justice, but about a defensive elite narrative projecting ostensible morality when, in reality, consolidating power. That our psychological institutions, as well as those elsewhere in academia, journalism, and business, have participated in this is a shame on our field.

The mismanagement of COVID-19 therapeutics by Government, the Media, and Tech – Part 1

 A worthwhile interview with Dr. Peter McCullough, an expert in the COVID-19 saga.

Here is the link to the video.

Here is a link to one of his papers: "Multifaceted highly targeted sequential multidrug treatment of early ambulatory high-risk SARS-CoV-2 infection (COVID-19)"

Here are the Abstract and Summary.
-------------------------------------------
Abstract

The SARS-CoV-2 virus spreading across the world has led to surges of COVID-19 illness, hospitalizations, and death. The complex and multifaceted pathophysiology of life-threatening COVID-19 illness including viral mediated organ damage, cytokine storm, and thrombosis warrants early interventions to address all components of the devastating illness. In countries where therapeutic nihilism is prevalent, patients endure escalating symptoms and without early treatment can succumb to delayed in-hospital care and death. Prompt early initiation of sequenced multidrug therapy (SMDT) is a widely and currently available solution to stem the tide of hospitalizations and death. A multipronged therapeutic approach includes 1) adjuvant nutraceuticals, 2) combination intracellular anti-infective therapy, 3) inhaled/oral corticosteroids, 4) antiplatelet agents/anticoagulants, 5) supportive care including supplemental oxygen, monitoring, and telemedicine. Randomized trials of individual, novel oral therapies have not delivered tools for physicians to combat the pandemic in practice. No single therapeutic option thus far has been entirely effective and therefore a combination is required at this time. An urgent immediate pivot from single drug to SMDT regimens should be employed as a critical strategy to deal with the large numbers of acute COVID-19 patients with the aim of reducing the intensity and duration of symptoms and avoiding hospitalization and death.

Summary

The SARS-CoV-2 outbreak is a once in a hundred-year pandemic that has not been addressed by rapid establishment of infrastructure amenable to support the conduct of large, randomized trials in outpatients in the community setting. The early flu-like stage of viral replication provides a therapeutic window of tremendous opportunity to potentially reduce the risk of more severe sequelae in high risk patients. Precious time is squandered with a "wait and see" approach in which there is no anti-viral treatment as the condition worsens, possibly resulting in unnecessary hospitalization, morbidity, and death. Once infected, the only means of preventing a hospitalization in a high-risk patient is to apply treatment before arrival of symptoms that prompt paramedic calls or emergency room visits. Given the current failure of government support for randomized clinical trials evaluating widely available, generic, inexpensive therapeutics, and the lack of instructive outpatient treatment guidelines (U.S., Canada, U.K., Western EU, Australia, some South American Countries), clinicians must act according to clinical judgement and in shared decision making with fully informed patients. Early SMDT developed empirically based upon pathophysiology and evidence from randomized data and the treated natural history of COVID-19 has demonstrated safety and efficacy. In newly diagnosed, high-risk, symptomatic patients with COVID-19, SMDT has a reasonable chance of therapeutic gain with an acceptable benefit-to-risk profile. Until the pandemic closes with population-level herd immunity potentially augmented with vaccination, early ambulatory SMDT should be a standard practice in high risk and severely symptomatic acute COVID-19 patients beginning at the onset of illness.