Sunday, August 29, 2021

The Woke insanity continues

 From Jonathan Turley.

JT is on target.

The amusing aspect is that Brandeis is proposing new synonyms intended to communicate the same information, i.e., they mean the same thing. The new words and phrases therefore become, by definition, "oppressive" as well.

No question about it - the proponents of Brandeis's "trigger warning" warnings are nut cases.

----------------------------------

It is now common for universities to list offensive terms to be avoided by faculty and students, as we have previously discussed at schools like Michigan, James Madison, and Berkeley. Now, Brandeis has issued a list of “oppressive” words that include such expressions as “killing two birds with one stone” and “beating a dead horse.” However, the school did not issue a trigger warning because “trigger warning” is now on the list as . . . well . . . triggering.

We previously discussed Brandeis's concern over “trigger warning” warnings and a dean’s controversial declaration that “Yes, all White people are racists.” However, the new list contains further examples of oppressive language and the suggested substitutes. Some have balked at the suggested changes.

Notably, an Iowa State Professor recently countered those questioning the value of trigger warnings and insisted that they should in fact be expanded. Some have cited a Harvard study that undermined claims in support of trigger warnings.

In addition to “trigger warning,” other violent terminology is listed, including “killing it,” “whipped into shape,” and “take a shot at it.”

Rather than use expressions like “killing two birds with one stone,” the school’s Prevention, Advocacy and Resource Center (PARC) suggests “feeding two birds with one seed.”

“Culturally appropriative” terms include any references to “tribe” to mean one’s group or identification.

Some of the substitutes seem pretty subtle. For example, the “person first/identity first list” includes terms like “homeless person.” However, the suggested alternative is “person without housing.”

Rather than saying “mentally ill person,” you are asked to say “Person living with a mental health condition.”

Rather than saying “prostitute,” you must say “Person who engages in sex work.”

Rather than saying “slave,” you must say “Person who is/was enslaved.”

For “Identity-based” terms, you are asked to say “bananas” rather than “wild” or “crazy.”

Moreover, referring to “people of color” is deemed oppressive if you are primarily referring to a particular group.

Expressions like “no can do” and “long time no see” are deemed oppressive.

PARC is promising a more expansive list in the future.

Tuesday, August 24, 2021

As US Schools Prioritize Diversity Over Merit, China Is Becoming the World’s STEM Leader

Percy Deift, Svetlana Jitomirskaya, and Sergiu Klainerman explain how the Left is destroying US leadership in mathematics and science, at quillette.com.

It goes much further than mathematics and science.  The implications of the policies described here apply to all professions and trades.  A policy of equality of outcomes will hurt our economy, our standard of living, and our ability to compete.

----------------------------------------

All three of us are mathematicians who came to the United States as young immigrants, having been attracted by the unmatched quality and openness of American universities. We came, as many others before and after, with nothing more than a good education and a strong desire to succeed. As David Hilbert famously said, “Mathematics knows no races or geographic boundaries; for mathematics, the cultural world is one country.” Having built our careers in US academia, we are proud to call ourselves American mathematicians.

The United States has been dominant in the mathematical sciences since the mass exodus of European scientists in the 1930s. Because mathematics is the basis of science—as well as virtually all major technological advances, including scientific computing, climate modelling, artificial intelligence, cybersecurity, and robotics—US leadership in math has supplied our country with an enormous strategic advantage. But for various reasons, three of which we set out below, the United States is now at risk of losing that dominant position.

First, and most obvious, is the deplorable state of our K-12 math education system. Far too few American public-school children are prepared for careers in science, technology, engineering, and mathematics (STEM). This leaves us increasingly dependent on a constant inflow of foreign talent, especially from mainland China, Taiwan, South Korea, and India. In a 2015 survey conducted by the Council of Graduate Schools and the Graduate Record Examinations Board, about 55 percent of all participating graduate students in mathematics, computer sciences, and engineering at US schools were found to be foreign nationals. In 2017, the National Foundation for American Policy estimated that international students accounted for 81 percent of full-time graduate students in electrical engineering at U.S. universities; and 79 percent of full-time graduate students in computer science.

That report also concluded that many programs in these fields couldn’t even be maintained without international students. In our field, mathematics, we find that at most top departments in the United States, at least two-thirds of the faculty are foreign born. (And even among those faculty born in the United States, a large portion are first-generation Americans.) Similar patterns may be observed in other STEM disciplines.

The second reason for concern is that the nationwide effort to reduce racial disparities, however well-intentioned, has had the unfortunate effect of weakening the connection between merit and scholastic admission. It also has served (sometimes indirectly) to discriminate against certain groups—mainly Asian Americans. The social-justice rhetoric used to justify these diversity, equity, and inclusion (DEI) programs is often completely at odds with the reality one observes on campuses. The concept of fighting “white supremacy,” in particular, doesn’t apply to the math field, since American-born scholars of all races now collectively represent a small (and diminishing) minority of the country’s academic STEM specialists.

Third, other countries are now competing aggressively with the United States to recruit top talent, using the same policies that worked well for us in the past. Most notably, China, America’s main economic and strategic competitor, is in the midst of an extraordinary, mostly successful, effort to improve its universities and research institutions. As a result, it is now able to retain some of the best Chinese scientists and engineers, as well as attract elite recruits from the United States, Europe, and beyond.

In a 2018 report published by the Organization for Economic Cooperation and Development (OECD), China ranked first in mathematical proficiency among 15-year-olds, while the United States was in 25th place. And a recent large-scale study of adults’ cognitive abilities, conducted by the National Center for Education Statistics, found that many Americans lack the basic skills in math and reading required for successful participation in the economy. This poor performance can’t be explained by budgetary factors: When it comes to education spending per pupil, the United States ranks fifth among 37 developed OECD nations.

* * *

There are numerous underlying factors that help explain these failures—including some that, as mathematicians, we feel competent to address. One obvious problem lies in the way teachers are trained. The vast majority of K-12 math teachers in the United States are graduates of programs that teach little in the way of substantive mathematics beyond so-called math methods courses (which focus on such topics as “understanding the complexities of diverse, multiple-ability classrooms”). This has been true for some time. But the trend has become more noticeable in recent years, as curricula increasingly shift from actual mathematics knowledge to courses about social justice and identity politics.

At the same time, math majors—who can arrive in the classroom pre-equipped with substantive mathematics knowledge—must go through the process of teacher certification before they can teach math in most public schools, a costly and time-consuming prerequisite. The policy justification for this is that all teachers need pedagogical training to perform effectively. But to our knowledge, this claim isn’t supported by the experience of other advanced countries. Moreover, in those US schools where certification isn’t required, such as in many charter and private schools, math majors and PhDs are in great demand, and the quality of math instruction they provide is often superior.

Even if some pedagogical training is desirable, particularly for elementary-school teachers, it is easier for a math specialist to pick up teaching skills on the job than it is for a trained teacher to acquire fundamental math knowledge. Based on our own experience, the best high school teachers are typically those who have solid mathematics backgrounds and enjoy teaching math.

An even bigger problem, in our view, is that the educational establishment has an almost complete lock on the content taught in our schools, with little input from the university math community. This unusual feature of American policymaking has led to a constant stream of ill-advised and dumbed-down “reforms,” which have served to degrade the teaching of mathematics to such an extent that it has become difficult to distinguish a student who is capable from one who is not.

Those who find that last assertion difficult to accept should peruse the revised Mathematics Framework proposed by California’s Department of Education. If implemented, the California framework would do away with any tracking or differentiation of students up to the 11th grade. In order to achieve what the authors call “equity” in math education, the framework would effectively close the main pathway to calculus in high school to all students except those who take extra math outside school—which, in practice, means students from families that can afford enrichment programs (or those going to charter and private schools). California is just one state, of course. But as has been widely noted, when it comes to policymaking, what happens in California today often will come to other states tomorrow.

The framework proposed for California’s 10,588 public schools and their six-million-plus students promotes “data science” as a preferred pathway, touting it as the mathematics of the 21st century. While this might sound like a promising idea, the actual “data-science” pathway described in the framework minimizes algebraic training to such an extent that it leaves students completely unprepared for most STEM undergraduate degrees. Algebra is essential to modern mathematics; and there is hardly any application of mathematics (including real data science) that is not based to a large extent on either algebra or calculus (with the latter being impossible to explain or implement without the former).

The authors write that “a fundamental aim of this framework is to respond to issues of inequity in mathematics learning”; that “we reject ideas of natural gifts and talents [and the] cult of the genius”; and that “active efforts in mathematics teaching are required in order to counter the cultural forces that have led to and continue to perpetuate current inequities.” And yet the research they cite to justify these claims has been demonstrated to be shallow, misleadingly applied, vigorously disputed, or just plainly wrong. Even the specific model lessons offered in the proposed framework fail to withstand basic mathematical scrutiny, as they muddle basic logic, present problems that can’t be solved by techniques described as being available to students, or list solutions without discussing the need for a proof (thus developing a false understanding of what it means to “solve” a problem—a misconception that university educators such as ourselves must struggle to undo).

The low quality of public K-12 math education in the United States has affected all demographic groups. But it has had a particularly strong negative effect on non-immigrant blacks and Hispanics, as well as young women of all races. This has led to a disappointing level of representation for these groups in STEM disciplines, which in turn has provoked understandable concern. We applaud efforts to address this problem, insofar as they help remove remaining obstacles and prejudices, and encourage more women and underrepresented minorities to choose careers in mathematics and other STEM disciplines. Indeed, partly as a result of such steps, the representation of women in our profession has increased dramatically over the last 50 years.

But what started as a well-meaning and sometimes beneficial effort has, over time, transformed into a bureaucratic machine whose goal has gone well beyond fighting discrimination. The new goal is to eliminate disparities in representation by any means possible. This is why education officials in some school boards and cities—and even entire states, such as California and Virginia—are moving to scrap academic tracking and various K-12 gifted programs, which they deem “inequitable.” Operating on the same motivations, many universities are abandoning the use of standardized tests such as the SAT and GRE in admissions.

This trend, which reaches across many fields, is especially self-defeating in mathematics, because declining standards in K-12 math education are now feeding into a vicious cycle that threatens to affect all STEM disciplines. As already noted, low-quality K-12 public-school education produces students who exhibit sub-par math skills, with underprivileged minorities suffering the most. This in turn leads to large disparities in admissions at universities, graduate programs, faculty, and STEM industry positions. Those disparities are then, in turn, condemned as manifestations of systemic racism—which results in administrative measures aimed at lowering evaluation criteria. This lowering of standards leads to even worse outcomes and larger disparities, thus pushing the vicious cycle through another loop.

The short-term fix is a quota system. But when applied to any supposedly merit-based selection process, quotas are usually counterproductive. Various studies, which accord with our own experience in academia, show that placing talented students from underrepresented groups in math programs that are too advanced for their level of preparedness can lead to discouragement, and often even abandonment of the field. Typically, these students would be better served by slightly less competitive, more nurturing programs that accord with their objectively exhibited levels of performance.

Unfortunately, the trend is pointing in the opposite direction. In fact, at many of our leading academic and research institutions, including the National Academies of Sciences, the American Academy of Arts and Sciences, the National Science Foundation, and the National Institutes of Health, scientific excellence is being supplanted by diversity as the determining factor for eligibility in regard to prizes and other distinctions. And some universities, following the example of the University of California, are now implementing measures to evaluate candidates for faculty positions and promotions based not only on the quality of their research, teaching, and service, but also on their specifically articulated commitment to diversity metrics. Various institutions have even introduced pathways to tenure based on diversity activities alone. The potential damage such measures can bring to academic standards in STEM is immense. And the history of science is full of examples that show how performative adherence to a politically favored ideology, easily faked by opportunistic and mediocre scientists, can lead to the devaluation of entire academic fields.

* * *

Needless to say, China pursues none of the equity programs that are sweeping the United States. Quite the contrary: It is building on the kind of accelerated, explicitly merit-based programs, centered on gifted students, that are being repudiated by American educators. Having learned its lesson from the Cultural Revolution, when science and merit-based education were all but obliterated in favor of ideological indoctrination, China is pursuing a far-sighted, long-term strategy to create a world-leading corps of elite STEM experts. In some strategically important fields, such as quantum computing, the country is arguably already ahead of the United States.

As part of this effort, China is identifying and nurturing talented math students as early as middle school. At the university entrance level, China relies on a hierarchical, layered system based on a highly competitive, fairly administered, national exam. STEM disciplines are encouraged: According to the World Economic Forum, China has the highest number of STEM grads in the world—at least 4.7 million in 2016. (By comparison, the United States came in third at 569,000. And as noted previously, a large portion of these graduates are foreign nationals.) China also has vastly increased the quality of its top universities, with six now ranked among the best 100 in the world. Tsinghua and Peking (ranked 17th and 18th respectively) now narrowly outrank Columbia, Princeton, and Cornell. As visitors to these Chinese universities (including ourselves) can attest, the average math undergraduate is now performing at a much higher level than his or her counterpart at comparable US institutions.

One reason for this is the work of scientists such as Shing-Tung Yau, a prominent Harvard mathematician who has spent decades helping to build up research mathematics in China. A key feature of the selective and consequential undergraduate competitions he’s developed over the last 10 years is that students are encouraged to focus their studies precisely on the content they will need as research mathematicians. High placement in these competitions virtually guarantees a student a spot at a top graduate school, and the program thereby helps systematically attract talented people to mathematics.

More recently, another group of prominent mathematicians (including some based in the United States), acting with the help of the Alibaba technology conglomerate and the China Association for Science and Technology, have created a global undergraduate mathematics competition with similar features. High schoolers who excel in annual math olympiads also are fast-tracked into top university programs.

While China already produces almost twice as many STEM PhDs as the United States, its universities still lag their US counterparts with respect to the quality of their graduate education programs. This is why many talented Chinese scholars continue to enroll in US programs. But this talent flow will likely soon ebb, or even dry up completely, as Chinese universities are now actively attracting senior Chinese, US, and European scientists to their faculty. (And unlike their American institutional counterparts, they recruit on the merit principle, unhampered by ideologically dictated diversity mandates.) In some cases, we are seeing prominent mathematicians at good or even top US schools moving to Peking and Tsinghua Universities after long and successful US careers. Many of these scholars are Chinese, but some are not.

We do not wish to gloss over China’s status as an authoritarian country that exhibits little concern for personal freedoms. But acknowledging this fact only serves to emphasize the significance of the shift we are describing: The drawbacks of American education policies are so pronounced that US schools are now losing their ability to attract elite scholars despite the fact that the United States offers these academics a freer and more democratic environment.

Moreover, even America’s vaunted reputation as a welcoming land for immigrants has taken a hit thanks to the recent, highly-publicized wave of anti-Asian crimes—which, though small in scale, is scaring off some Chinese students and their parents. Of greater significance are the thinly disguised anti-Asian policies (masquerading as anti-racism mandates) that are implemented by top US schools as a means to exclude Asian students.

* * *

Reversing America’s slide in STEM education will require many policy changes, not all of which fall within our expertise as mathematicians and academics. But at the very least, we recommend that American education authorities prioritize the development of comprehensive STEM curricula, at both basic and advanced levels, and allow outstanding mathematicians and other scientists to assist public servants in their design. Highly successful precedents such as the BASIS Charter School Curriculum and the Math for America teacher-development program supply examples of how such curricula might be developed. This should be coupled with a nationwide effort to identify and develop students who exhibit exceptional math talent.

American policymakers must also address the misplaced priorities of the education schools that train teachers. At the very least, math majors should be allowed to teach without following a full slate of accreditation procedures. And people who teach middle and high-school math should themselves be required to receive rigorous instruction in that subject.

Schools in urban areas and inner-city neighborhoods should be improved by following the most promising models. Such programs demonstrate that children benefit if they are challenged by high standards and a nurturing environment. Ideally, schools should operate in a manner that allows them to avoid year-to-year dependence on the vagaries of local funding and bureaucratic mandates.

More broadly, American educators must return to a process of recruitment and promotion based on merit, at all levels of education and research—a step that will require a policy U-turn at the federal, state, and local levels (not to mention at universities, and at tech corporations that have sought to reinvent themselves as social-justice organizations). Instead of implementing divisive policies based on the premise of rooting out invisible forms of racism, or seeking to deconstruct the idea of merit in spurious ways, organizations should redirect their (by now substantial) DEI budgets toward more constructive goals, such as funding outreach programs, and even starting innovative new charter schools for underprivileged K-12 students. Elite private universities, in particular, are well positioned to direct portions of their huge endowments and vast professional expertise in this regard. By doing so, they could demonstrate that it’s possible to help minority students succeed without sacrificing excellence.

The proposals we are describing here may sound highly ambitious—not to mention being at cross-currents with today’s ideological climate. But we also believe there will soon be an opportunity for change, as the rapid rise of China in strategically important STEM fields may help shock the American policymaking community into action—much like the so-called Sputnik crisis of the late 1950s and early 1960s, when it was Russia’s soaring level of technical expertise that became a subject of public concern. Then, as now, the only path to global technological leadership was one based on a rigorous, merit-based approach to excellence in mathematics, science, and engineering.

Saturday, August 21, 2021

Commotion at the Firearms Technology Industry Services Branch of the Bureau of Alcohol, Tobacco, Firearms and Explosives

 John Crump at AmmoLand.com.

----------------------------------------------

An anonymous source within the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) Firearms Technology Industry Services Branch (FTISB) reports turmoil within the department over a promotion.

The head of the FTISB, Michael Curtis, is retiring from the Bureau, and Daniel Hoffman has been selected as his replacement by the Chief of the Firearms and Ammunition Technology Division (FATD), Earl Griffith. Hoffman has been a Firearms Examination Officer (FEO) for the past four years. Before coming to the FTISB, Hoffman did not have any experience within the firearms industry. This choice led many inside FTISB to wonder how he got the nod over other, more qualified candidates, and it has caused an uproar within the department.

Curtis and Hoffman have been two of the ATF’s crusaders against pistol-stabilizing devices. They have long held the opinion that the devices are a workaround for gun owners to legally violate the National Firearms Act (NFA) restrictions on short-barreled rifles (SBRs). Both have long argued that certain pistol braces were designed to be shouldering devices.

Not only does Hoffman have minimal industry experience, but he has also faced complaints filed by other ATF employees within the FTISB. Sources suggest that Hoffman has an unstable and explosive temper, and the complaints appear to stem at least in part from that reported temperament. Several employees have described Hoffman as a toxic personality.

This choice harkens back to the nomination of David Chipman as Biden’s pick to head up the ATF.

Chipman once accused black applicants of cheating on a test because he felt they could not possibly score as high as they did on the exam. Chipman faced multiple EEOC complaints about his racist and sexist remarks, and yet President Biden still thought he was the best option to be the Director of the ATF.

The pushback against Hoffman from ATF employees in revolt seems to be causing leadership within the ATF to second-guess their decision to tap him as Curtis’s replacement. AmmoLand News’s sources report that Griffith is on the verge of dropping Hoffman as his choice to head the FTISB. One thing is clear: Hoffman is a very unpopular choice within the chaotic department.

Why the ATF’s top brass chose Hoffman to replace Curtis is perplexing. Curtis holds a top leadership position inside the ATF, while Hoffman is a low-level employee with little experience and a history of employee complaints. The choice doesn’t make sense to people in the know, leading many to wonder exactly how Griffith concluded that Hoffman should replace Michael Curtis.

Gun Owners of America (GOA) has also taken notice of the strange proposed appointment of Hoffman to head the FTISB. The gun-rights group will be filing a Freedom of Information Act (FOIA) request for all of Hoffman’s examinations. The group hopes to shed light on why the ATF would select a potentially problematic employee to lead one of the most powerful departments within the ATF.

The ATF had not responded to AmmoLand News’s request for comment about the appointment at the time of publication. As of now, Hoffman is still Griffith’s choice to replace Curtis.

Wednesday, August 18, 2021

New Confirmation that Climate Models Overstate Atmospheric Warming

 Ross McKitrick at judithcurry.com

Here is the link.

Here are some excerpts.

---------------------------------------

Two new peer-reviewed papers from independent teams confirm that climate models overstate atmospheric warming and the problem has gotten worse over time, not better. The papers are Mitchell et al. (2020) “The vertical profile of recent tropical temperature trends: Persistent model biases in the context of internal variability” Environmental Research Letters, and McKitrick and Christy (2020) “Pervasive warming bias in CMIP6 tropospheric layers” Earth and Space Science. John and I didn’t know about the Mitchell team’s work until after their paper came out, and they likewise didn’t know about ours.

Mitchell et al. look at the surface, troposphere and stratosphere over the tropics (20N to 20S). John and I look at the tropical and global lower- and mid- troposphere. Both papers test large samples of the latest generation (“Coupled Model Intercomparison Project version 6” or CMIP6) climate models, i.e. the ones being used for the next IPCC report, and compare model outputs to post-1979 observations. John and I were able to examine 38 models while Mitchell et al. looked at 48 models. The sheer number makes one wonder why so many are needed, if the science is settled. Both papers looked at “hindcasts,” which are reconstructions of recent historical temperatures in response to observed greenhouse gas emissions and other changes (e.g. aerosols and solar forcing). Across the two papers it emerges that the models overshoot historical warming from the near-surface through the upper troposphere, in the tropics and globally.
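For readers who want to see the mechanics, below is a minimal sketch of this kind of trend comparison. It uses synthetic placeholder series rather than the papers' data, simple least-squares trends, and an illustrative 38-member "ensemble"; none of the numbers come from the studies themselves.

```python
# Minimal sketch (not the papers' code or data): fit a least-squares linear
# trend to each post-1979 annual temperature series and compare model trends
# to an observed trend. All series below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2015)                      # illustrative hindcast period

def decadal_trend(series, years):
    """OLS slope of a temperature anomaly series, in degrees per decade."""
    return np.polyfit(years, series, deg=1)[0] * 10.0

# Synthetic "observations": about 0.13 C/decade plus noise (placeholder values).
obs = 0.013 * (years - years[0]) + rng.normal(0, 0.1, years.size)

# Synthetic "model ensemble": members that warm faster than the observations.
model_runs = [0.025 * (years - years[0]) + rng.normal(0, 0.1, years.size)
              for _ in range(38)]

obs_trend = decadal_trend(obs, years)
model_trends = np.array([decadal_trend(run, years) for run in model_runs])

print(f"observed trend:         {obs_trend:+.3f} C/decade")
print(f"model ensemble mean:    {model_trends.mean():+.3f} C/decade")
print(f"models warmer than obs: {(model_trends > obs_trend).sum()} of {model_trends.size}")
```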
-----
Mitchell et al. 2020

Overall their findings are:

  • “we find considerable warming biases in the CMIP6 modeled trends, and we show that these biases are linked to biases in surface temperature (these models simulate an unrealistically large global warming).”
  • “we note here for the record that from 1998 to 2014, the CMIP5 models warm, on average 4 to 5 times faster than the observations, and in one model the warming is 10 times larger than the observations.”
  • “Throughout the depth of the troposphere, not a single model realization overlaps all the observational estimates. However, there is some overlap between the RICH observations and the lowermost modelled trend, which corresponds to the NorCPM1 model.”
  • “Focusing on the CMIP6 models, we have confirmed the original findings of Mitchell et al. (2013): first, the modeled tropospheric trends are biased warm throughout the troposphere (and notably in the upper troposphere, around 200 hPa) and, second, that these biases can be linked to biases in surface warming. As such, we see no improvement between the CMIP5 and the CMIP6 models.” (Mitchell et al. 2020)
-----
Concluding remarks

I get it that modeling the climate is incredibly difficult, and no one faults the scientific community for finding it a tough problem to solve. But we are all living with the consequences of climate modelers stubbornly using generation after generation of models that exhibit too much surface and tropospheric warming, in addition to running grossly exaggerated forcing scenarios (e.g. RCP8.5). Back in 2005 in the first report of the then-new US Climate Change Science Program, Karl et al. pointed to the exaggerated warming in the tropical troposphere as a “potentially serious inconsistency.” But rather than fixing it since then, modelers have made it worse. Mitchell et al. note that in addition to the wrong warming trends themselves, the biases have broader implications because “atmospheric circulation trends depend on latitudinal temperature gradients.” In other words when the models get the tropical troposphere wrong, it drives potential errors in many other features of the model atmosphere. Even if the original problem was confined to excess warming in the tropical mid-troposphere, it has now expanded into a more pervasive warm bias throughout the global troposphere.

If the discrepancies in the troposphere were evenly split across models between excess warming and cooling we could chalk it up to noise and uncertainty. But that is not the case: it’s all excess warming. CMIP5 models warmed too much over the sea surface and too much in the tropical troposphere. Now the CMIP6 models warm too much throughout the global lower- and mid-troposphere. That’s bias, not uncertainty, and until the modeling community finds a way to fix it, the economics and policy making communities are justified in assuming future warming projections are overstated, potentially by a great deal depending on the model.

IPCC’s climate change methodology is problematic

 Ross McKitrick provides climate change perspective at judithcurry.com.

Climate Change may not be "settled science".

---------------------------------------------

One day after the IPCC released the AR6 I published a paper in Climate Dynamics showing that their “Optimal Fingerprinting” methodology on which they have long relied for attributing climate change to greenhouse gases is seriously flawed and its results are unreliable and largely meaningless. Some of the errors would be obvious to anyone trained in regression analysis, and the fact that they went unnoticed for 20 years despite the method being so heavily used does not reflect well on climatology as an empirical discipline.

My paper is a critique of “Checking for model consistency in optimal fingerprinting” by Myles Allen and Simon Tett, which was published in Climate Dynamics in 1999 and to which I refer as AT99. Their attribution methodology was instantly embraced and promoted by the IPCC in the 2001 Third Assessment Report (coincident with their embrace and promotion of the Mann hockey stick). The IPCC promotion continues today: see AR6 Section 3.2.1. It has been used in dozens and possibly hundreds of studies over the years. Wherever you begin in the Optimal Fingerprinting literature (example), all paths lead back to AT99, often via Allen and Stott (2003). So its errors and deficiencies matter acutely.

The abstract of my paper reads as follows:

“Allen and Tett (1999, herein AT99) introduced a Generalized Least Squares (GLS) regression methodology for decomposing patterns of climate change for attribution purposes and proposed the “Residual Consistency Test” (RCT) to check the GLS specification. Their methodology has been widely used and highly influential ever since, in part because subsequent authors have relied upon their claim that their GLS model satisfies the conditions of the Gauss-Markov (GM) Theorem, thereby yielding unbiased and efficient estimators. But AT99 stated the GM Theorem incorrectly, omitting a critical condition altogether, their GLS method cannot satisfy the GM conditions, and their variance estimator is inconsistent by construction. Additionally, they did not formally state the null hypothesis of the RCT nor identify which of the GM conditions it tests, nor did they prove its distribution and critical values, rendering it uninformative as a specification test. The continuing influence of AT99 two decades later means these issues should be corrected. I identify 6 conditions needing to be shown for the AT99 method to be valid.”

The Allen and Tett paper had merit as an attempt to make operational some ideas emerging from an engineering (signal processing) paradigm for the purpose of analyzing climate data. The errors they made come from being experts in one thing but not another, and the review process in both climate journals and IPCC reports is notorious for not involving people with relevant statistical expertise (despite the reliance on statistical methods). If someone trained in econometrics had refereed their paper 20 years ago the problems would have immediately been spotted, the methodology would have been heavily modified or abandoned and a lot of papers since then would probably never have been published (or would have, but with different conclusions—I suspect most would have failed to report “attribution”).

Optimal Fingerprinting

AT99 made a number of contributions. They took note of previous proposals for estimating the greenhouse “signal” in observed climate data and showed that they were equivalent to a statistical technique called Generalized Least Squares (GLS). They then argued that, by construction, their GLS model satisfies the Gauss-Markov (GM) conditions, which according to an important theorem in statistics means it yields unbiased and efficient parameter estimates. (“Unbiased” means the expected value of an estimator equals the true value. “Efficient” means all the available sample information is used, so the estimator has the minimum variance possible.) If an estimator satisfies the GM conditions, it is said to be “BLUE”—the Best (minimum variance) Linear Unbiased Estimator; or the best option out of the entire class of estimators that can be expressed as a linear function of the dependent variable. AT99 claimed that their estimator satisfies the GM conditions and therefore is BLUE, a claim repeated and relied upon subsequently by other authors in the field. They also introduced a “Residual Consistency” (RC) test which they said could be used to assess the validity of the fingerprinting regression model.
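To make the terminology concrete, here is a minimal, self-contained sketch of a generalized least squares fit with a known error covariance, using statsmodels. The "signal" pattern, covariance, and coefficients are invented placeholders, not AT99's model or any climate data; the point is only to show what a GLS estimator looks like next to plain OLS.

```python
# Illustrative GLS sketch with an assumed (known) error covariance.
# Everything here is a placeholder, not a fingerprinting calculation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
signal = np.linspace(0, 1, n)            # hypothetical forced "signal" pattern
X = sm.add_constant(signal)

# Assumed error covariance with non-constant variance: GLS weights by it.
variances = np.linspace(0.5, 2.0, n)
Sigma = np.diag(variances)
errors = rng.normal(0, np.sqrt(variances))
y = 0.2 + 0.8 * signal + errors          # "observations" = signal scaled by 0.8

gls_fit = sm.GLS(y, X, sigma=Sigma).fit()
ols_fit = sm.OLS(y, X).fit()

print("GLS coefficient on signal:", round(gls_fit.params[1], 3),
      "+/-", round(gls_fit.bse[1], 3))
print("OLS coefficient on signal:", round(ols_fit.params[1], 3),
      "+/-", round(ols_fit.bse[1], 3))
```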

Unfortunately these claims are untrue. Their method is not a conventional GLS model. It does not, and cannot, satisfy the GM conditions and in particular it violates an important condition for unbiasedness. And rejection or non-rejection of the RC test tells us nothing about whether the results of an optimal fingerprinting regression are valid.

AT99 and the IPCC

AT99 was heavily promoted in the 2001 IPCC Third Assessment Report (TAR Chapter 12, Box 12.1, Section 12.4.3 and Appendix 12.1) and has been referenced in every IPCC Assessment Report since. TAR Appendix 12.1 was headlined “Optimal Detection is Regression” and began

The detection technique that has been used in most “optimal detection” studies performed to date has several equivalent representations (Hegerl and North, 1997; Zwiers, 1999). It has recently been recognised that it can be cast as a multiple regression problem with respect to generalised least squares (Allen and Tett, 1999; see also Hasselmann, 1993, 1997)

The growing level of confidence regarding attribution of climate change to GHG’s expressed by the IPCC and others over the past two decades rests principally on the many studies that employ the AT99 method, including the RC test. The methodology is still in wide use, albeit with a couple of minor changes that don’t address the flaws identified in my critique. (Total Least Squares or TLS, for instance, introduces new biases and problems which I analyze elsewhere; and regularization methods to obtain a matrix inverse do not fix the underlying theoretical flaws). There have been a small number of attribution papers using other methods, including ones which the TAR mentioned. “Temporal” or time series analyses have their own flaws which I will address separately (put briefly, regressing I(0) temperatures on I(1) forcings creates obvious problems of interpretation).

The Gauss-Markov (GM) Theorem

As with regression methods generally, everything in this discussion centres on the GM Theorem. There are two GM conditions that a regression model needs to satisfy to be BLUE. The first, called homoskedasticity, is that the error variances must be constant across the sample. The second, called conditional independence, is that the expected values of the error terms must be independent of the explanatory variables. If homoskedasticity fails, least squares coefficients will still be unbiased but their variance estimates will be biased. If conditional independence fails, least squares coefficients and their variances will be biased and inconsistent, and the regression model output is unreliable. (“Inconsistent” means the coefficient distribution does not converge on the right answer even as the sample size goes to infinity.)
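A small simulation makes the distinction concrete. The sketch below is my own illustration, not drawn from the paper: in the first case the errors are heteroskedastic but mean-independent of the regressor, and the slope estimate stays centered on the true value; in the second the errors are correlated with the regressor (conditional independence fails) and the slope is biased no matter how many replications are run.

```python
# Simulation sketch of the two GM conditions (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(2)
n, reps, true_beta = 500, 2000, 1.0

def ols_slope(x, e):
    y = true_beta * x + e
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

slopes_hetero, slopes_endog = [], []
for _ in range(reps):
    x = rng.normal(0, 1, n)
    # Case 1: heteroskedastic but mean-independent errors.
    slopes_hetero.append(ols_slope(x, rng.normal(0, 1 + np.abs(x))))
    # Case 2: errors correlated with x (conditional independence fails).
    slopes_endog.append(ols_slope(x, 0.5 * x + rng.normal(0, 1, n)))

print("mean slope, heteroskedastic errors:", round(np.mean(slopes_hetero), 3))  # stays near 1.0
print("mean slope, endogenous errors:     ", round(np.mean(slopes_endog), 3))   # biased, near 1.5
```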

I teach the GM theorem every year in introductory econometrics. (As an aside, that means I am aware of the ways I have oversimplified the presentation, but you can refer to the paper and its sources for the formal version). It comes up near the beginning of an introductory course in regression analysis. It is not an obscure or advanced concept, it is the foundation of regression modeling techniques. Much of econometrics consists of testing for and remedying violations of the GM conditions.

The AT99 Method

(It is not essential to understand this paragraph, but it helps for what follows.) Optimal Fingerprinting works by regressing observed climate data onto simulated analogues from climate models which are constructed to include or omit specific forcings. The regression coefficients thus provide the basis for causal inference regarding the forcing, and estimation of the magnitude of each factor’s influence. Authors prior to AT99 argued that failure of the homoskedasticity condition might thwart signal detection, so they proposed transforming the observations by premultiplying them by a matrix P which is constructed as the matrix root of the inverse of a “climate noise” matrix C, itself computed using the covariances from preindustrial control runs of climate models. But because C is not of full rank its inverse does not exist, so P can instead be computed using a Moore-Penrose pseudo inverse, selecting a rank which in practice is far smaller than the number of observations in the regression model itself.
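The sketch below illustrates, with invented numbers, why the rank truncation in this pre-whitening step matters: a "noise" covariance built from fewer control-run segments than observations is singular, so P can only be formed on a reduced-rank subspace. It is not a reproduction of any published fingerprinting calculation.

```python
# Sketch of the pre-whitening step with placeholder numbers: build a
# rank-deficient "climate noise" covariance C, take a reduced-rank
# pseudo-inverse, and form P as its matrix square root.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_segments, rank = 60, 20, 10   # fewer segments than observations, so C is singular

control_runs = rng.normal(size=(n_segments, n_obs))   # placeholder "control run" anomalies
C = np.cov(control_runs, rowvar=False)                 # n_obs x n_obs, rank <= n_segments - 1

# Reduced-rank pseudo-inverse via the leading eigenvectors of C.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1][:rank]
lam, V = eigvals[order], eigvecs[:, order]
P = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T              # P ~ C^(-1/2) on the retained subspace only

print("rank of C:", np.linalg.matrix_rank(C), "of", n_obs)
print("retained rank used for P:", rank)
```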

The Main Error in AT99

AT99 asserted that the signal detection regression model applying the P matrix weights is homoscedastic by construction, therefore it satisfies the GM conditions, therefore its estimates are unbiased and efficient (BLUE). Even if their model yields homoscedastic errors (which is not guaranteed) their statement is obviously incorrect: they left out the conditional independence assumption. Neither AT99 nor—as far as I have seen—anyone in the climate detection field has ever mentioned the conditional independence assumption nor discussed how to test it nor the consequences should it fail.

And fail it does—routinely in regression modeling; and when it fails the results can be spectacularly wrong, including wrong signs and meaningless magnitudes. But you won’t know that unless you test for specific violations. In the first version of my paper (written in summer 2019) I criticized the AT99 derivation and then ran a suite of AT99-style optimal fingerprinting regressions using 9 different climate models and showed they routinely fail standard conditional independence tests. And when I implemented some standard remedies, the greenhouse gas signal was no longer detectable. I sent that draft to Allen and Tett in late summer 2019 and asked for their comments, which they undertook to provide. But hearing none after several months I submitted it to the Journal of Climate, requesting Allen and Tett be asked to review it. Tett provided a constructive (signed) review, as did two other anonymous reviewers, one of whom was clearly an econometrician (another might have been Allen but it was anonymous so I don’t know). After several rounds the paper was rejected. Although Tett and the econometrician supported publication the other reviewer and the editor did not like my proposed alternative methodology. But none of the reviewers disputed my critique of AT99’s handling of the GM theorem. So I carved that part out and sent it in winter 2021 to Climate Dynamics, which accepted it after 3 rounds of review.

Other Problems

In my paper I list five assumptions which are necessary for the AT99 model to yield BLUE coefficients, not all of which AT99 stated. All 5 fail by construction. I also list 6 conditions that need to be proven for the AT99 method to be valid. In the absence of such proofs there is no basis for claiming the results of the AT99 method are unbiased or consistent, and the results of the AT99 method (including use of the RC test) should not be considered reliable as regards the effect of GHG’s on the climate.

One point I make is that the assumption that an estimator of C provides a valid estimate of the error covariances means the AT99 method cannot be used to test a null hypothesis that greenhouse gases have no effect on the climate. Why not? Because an elementary principle of hypothesis testing is that the distribution of a test statistic under the assumption that the null hypothesis is true cannot be conditional on the null hypothesis being false. The use of a climate model to generate the homoscedasticity weights requires the researcher to assume the weights are a true representation of climate processes and dynamics. The climate model embeds the assumption that greenhouse gases have a significant climate impact. Or, equivalently, that natural processes alone cannot generate a large class of observed events in the climate, whereas greenhouse gases can. It is therefore not possible to use the climate model-generated weights to construct a test of the assumption that natural processes alone could generate the class of observed events in the climate.

Another less-obvious problem is the assumption that use of the Moore-Penrose pseudo inverse has no implications for claiming the result satisfies the GM conditions. But the reduction of rank of the resulting covariance matrix estimator means it is biased and inconsistent and the GM conditions automatically fail. As I explain in the paper, there is a simple and well-known alternative to using P matrix weights—use of White’s (1980) heteroskedasticity-consistent covariance matrix estimator, which has long been known to yield consistent variance estimates. It was already 20 years old and in use everywhere (other than climatology apparently) by the time of AT99, yet they opted instead for a method that is much harder to use and yields biased and inconsistent results.
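For comparison, the alternative mentioned here is a one-line option in standard software. The sketch below fits OLS on synthetic heteroskedastic data and reports both classical and White's (1980) heteroskedasticity-consistent standard errors (statsmodels exposes the latter as cov_type="HC0"); the data are placeholders.

```python
# OLS with White's heteroskedasticity-consistent covariance, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
x = rng.normal(0, 1, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1 + np.abs(x))   # heteroskedastic errors
X = sm.add_constant(x)

naive = sm.OLS(y, X).fit()                 # classical (homoskedastic) standard errors
robust = sm.OLS(y, X).fit(cov_type="HC0")  # White's heteroskedasticity-consistent SEs

print("slope:", round(naive.params[1], 3))
print("classical SE:", round(naive.bse[1], 3), " White HC0 SE:", round(robust.bse[1], 3))
```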

The RC Test

AT99 claimed that a test statistic formed using the signal detection regression residuals and the C matrix from an independent climate model follows a centered chi-squared distribution, and if such a test score is small relative to the 95% chi-squared critical value, the model is validated. More specifically, the null hypothesis is not rejected.
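The following sketch shows only the general shape of such a residual chi-squared check, using made-up residuals and an assumed noise covariance; it does not reproduce AT99's actual statistic, weighting, or degrees of freedom.

```python
# Generic chi-squared residual check, with placeholder residuals and covariance.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
k = 12                                      # retained dimensions (placeholder)
Cnoise = np.diag(rng.uniform(0.5, 1.5, k))  # assumed "independent model" noise covariance
residuals = rng.multivariate_normal(np.zeros(k), Cnoise)

stat = residuals @ np.linalg.inv(Cnoise) @ residuals   # quadratic form r' C^-1 r
critical = chi2.ppf(0.95, df=k)
print(f"test statistic = {stat:.2f}, 95% critical value (df={k}) = {critical:.2f}")
print("reject" if stat > critical else "do not reject")
```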

But what is the null hypothesis? Astonishingly it was never written out mathematically in the paper. All AT99 provided was a vague group of statements about noise patterns, ending with a far-reaching claim that if the test doesn’t reject, “then we have no explicit reason to distrust uncertainty estimates based on our analysis.” As a result, researchers have treated the RC test as encompassing every possible specification error, including ones that have no rational connection to it, erroneously treating non-rejection as comprehensive validation of the signal detection regression model specification.

This is incomprehensible to me. If in 1999 someone had submitted a paper to even a low-rank economics journal proposing a specification test in the way that AT99 did, it would have been annihilated at review. They didn’t state the null hypothesis mathematically or list the assumptions necessary to prove its distribution (even asymptotically, let alone exactly), they provided no analysis of its power against alternatives nor did they state any alternative hypotheses in any form so readers have no idea what rejection or non-rejection implies. Specifically, they established no link between the RC test and the GM conditions. I provide in the paper a simple description of a case in which the AT99 model might be biased and inconsistent by construction, yet the RC test would never reject. And supposing that the RC test does reject, which GM condition therefore fails? Nothing in their paper explains that. It’s the only specification test used in the fingerprinting literature and it is utterly meaningless.

The Review Process

When I submitted my paper to CD I asked that Allen and Tett be given a chance to provide a reply which would be reviewed along with it. As far as I know this did not happen, instead my paper was reviewed in isolation. When I was notified of its acceptance in late July I sent them a copy with an offer to delay publication until they had a chance to prepare a response, if they wished to do so. I did not hear back from either of them so I proceeded to edit and approve the proofs. I then wrote them again, offering to delay further if they wanted to produce a reply. This time Tett wrote back with some supportive comments about my earlier paper and he encouraged me just to go ahead and publish my comment. I hope they will provide a response at some point, but in the meantime my critique has passed peer review and is unchallenged.

Guessing at Potential Objections

1. Yes but look at all the papers over the years that have successfully applied the AT99 method and detected a role for GHGs. Answer: the fact that a flawed methodology is used hundreds of times does not make the methodology reliable, it just means a lot of flawed results have been published. And the failure to spot the problems means that the people working in the signal detection/Optimal Fingerprinting literature aren’t well-trained in GLS methods. People have assumed, falsely, that the AT99 method yields “BLUE” – i.e. unbiased and efficient – estimates. Maybe some of the past results were correct. The problem is that the basis on which people said so is invalid, so no one knows.

2. Yes but people have used other methods that also detect a causal role for greenhouse gases. Answer: I know. But in past IPCC reports they have acknowledged those methods are weaker as regards proving causality, and they rely even more explicitly on the assumption that climate models are perfect. And the methods based on time series analysis have not adequately grappled with the problem of mismatched integration orders between forcings and observed temperatures. I have some new coauthored work on this in process.

3. Yes but this is just theoretical nitpicking, and I haven’t proven the previously-published results are false. Answer: What I have proven is that the basis for confidence in them is non-existent. AT99 correctly highlighted the importance of the GM theorem but messed up its application. In other work (which will appear in due course) I have found that common signal detection results, even in recent data sets, don’t survive remedying the failures of the GM conditions. If anyone thinks my arguments are mere nitpicking and believes the AT99 method is fundamentally sound, I have listed the six conditions needing to be proven to support such a claim. Good luck.

I am aware that AT99 was followed by Allen and Stott (2003) which proposed TLS for handling errors-in-variables. This doesn’t alleviate any of the problems I have raised herein. And in a separate paper I argue that TLS over-corrects, imparting an upward bias as well as causing severe inefficiency. I am presenting a paper at this year’s climate econometrics conference discussing these results.

Implications

The AR6 Summary paragraph A.1 upgrades IPCC confidence in attribution to “Unequivocal” and the press release boasts of “major advances in the science of attribution.” In reality, for the past 20 years, the climatology profession has been oblivious to the errors in AT99, and untroubled by the complete absence of specification testing in the subsequent fingerprinting literature. These problems mean there is no basis for treating past attribution results based on the AT99 method as robust or valid. The conclusions might by chance have been correct, or totally inaccurate; but without correcting the methodology and applying standard tests for failures of the GM conditions it is mere conjecture to say more than that.

Sunday, August 15, 2021

America is a failed experiment

 America is a failed experiment: so says Chanelle Wilson, an Assistant Professor of Education and Director of Africana Studies at Bryn Mawr College.

College education has become indoctrination all too often.

Freedom is no longer valued the way it once was.

The Constitution has been decimated, de facto.

Here is Jonathan Turley on Chanelle Wilson.

-----------------------------------------

Chanelle Wilson is an assistant professor of education and director of Africana Studies at the affluent all-women’s Bryn Mawr College. She is the author of a new book on teaching critical race theory and anti-race practices at universities. On a recent podcast, Wilson offered a glimpse into those teachings which include the “fact” that America is a failed experiment that has done nothing for any group other than white people.

Wilson was speaking on the “Refuse Fascism” podcast with host Samantha Goldman. Goldman was the organizer of the much covered “Handmaid’s Tale” protests during the Trump Administration. Goldman, who reportedly identifies as a communist, can be seen here organizing in the 2020 election.

Wilson’s extremist views have not been widely reported outside of conservative sites like College Fix. However, in the podcast, Wilson shares an unvarnished account of what she claims to be undeniable “facts” about this failed country and the goal of this work. She insists that the country was designed and has only worked for white people. She then added that it

“didn’t work for black people” and “Damn sure it didn’t work for indigenous people… It did not work for people of Mexican ancestry. It didn’t work for Asians, it didn’t work for Jewish people, it didn’t work for Japanese people. It didn’t work for Chinese people. So who is this country for? This country is only for white people.”

She added that people of color are told by white U.S. citizens: “Don’t ask for too much, be happy that you’re allowed to be here, don’t make a ruckus, don’t say anything, don’t have a brain, don’t learn, don’t do any of those things…that is the truth.”

That truth may come as a surprise to the many in these groups. It is bizarre to claim that Jewish people and Asian people, for example, did not find success in this country, but Wilson claims that as an undeniable fact. Likewise, many in the Latino and Black communities have found great success in this country despite the continuing struggle with poverty in all of our communities. There is no denying a wealth gap between racial groups, which continues to trouble many in our country. However, it is ridiculous to claim that groups have not found success in this country, which continues to draw millions to our shores as immigrants.

Wilson uses the claim of America as a “failed experiment” to call for people to “recognize that none of this works for your average ‘American citizen.’ It doesn’t. So why wouldn’t we try something new? This whole experiment is failed. Their experiment has failed.” That clearly appealed to Goldman, who declared, “I don’t think there can be redemption until there is no America.” It was a weird moment since Goldman was an organizer for Joe Biden in 2020–just six months ago.

All of this was part of an effort to get educators and others to buy and assign her new book “Building Courage, Confidence and Capacity in Learning and Teaching Through Student-Faculty Partnership.” The book seeks to further student and faculty classroom collaboration in advancing critical race studies.

Despite the false factual claims and extremist rhetoric, I would be the first to defend Wilson’s right to espouse such views (though I am not sure either she or Goldman would feel the same inclination to defend my right to speak). We have been discussing efforts to fire professors who voice dissenting views on various issues, including an effort to oust a leading economist from the University of Chicago as well as a leading linguistics professor at Harvard and a literature professor at Penn. Sites like Lawyers, Guns, and Money feature writers like Colorado Law Professor Paul Campos who call for the firing of those with opposing views (including myself). Such campaigns have targeted teachers and students who contest the evidence of systemic racism in the use of lethal force by police or offer other opposing views in current debates over the pandemic, reparations, electoral fraud, or other issues.

Support for such diversity of thought is essential for higher education. However, that does not mean that we should not call out inaccurate and extremist viewpoints. Liberal theorist William Galston, for example, has called out critical race theory as challenging the very foundations of our country:

“one thing is clear: Because the Declaration of Independence—the founding document of the American liberal order—is a product of Enlightenment rationalism, a doctrine that rejects the Enlightenment tacitly requires deconstructing the American order and rebuilding it on an entirely different foundation.”

With a gun to your head: The Larry Goldstein Incident

 

By Massad Ayoob at the American Handgunner.

The American Handgunner is worth subscribing to.  Massad Ayoob is a great source of information about guns and tactics.

--------------------------------------------

May 15, 2015. In a suburb of Jackson, Miss., Larry Goldstein, MD, is in his open garage, loading his pickup truck. A successful gynecologist, his sleepless years in residency and dedication to his long-standing practice have rewarded him with a large, expensive home. Unfortunately, criminals are drawn to signs of money.

 His personal sport for the last five years has been competitive shooting. He is on his way to a USPSA match. He has just put his gear bag in the back seat of the quad cab. In it are two CZ Shadow 9mm pistols, several magazines of 9mm, and enough ammo for the whole match. He hears a noise sounding like a squirrel on the eaves, and suddenly he is confronted by two strange men wearing bandannas over their faces.

 The nearest, a broad-chested guy about 5' 10", shoves a long barreled stainless-steel revolver in his face. Larry makes it for a .357 Magnum. He can see the noses of the live cartridges in the front of the cylinder. The man snarls, “You know what this is?” Larry replies as calmly as he can, “It’s a gun.” Predictably, the next words from the man are, “We want your money!”

 Larry’s gun safe is visible in the garage. The guy with the revolver spins him around, grabs him by the shirt, and forces him toward the safe with the gun’s muzzle at the back of his head. He orders him to open it. Larry’s shaky hands don’t get the combination right at first, and he tries to explain. “You’re lyin’! We’re gonna shoot you!” He finally manages to get the combination right. The intruders start grabbing stuff. There are at least two AR15s in the safe, including a .223, but the one they grab is a Smith & Wesson M&P chambered for .22 LR. They grab a handful of AR magazines and a few handguns and stuff them into a backpack.

 They march him into the house, his hands behind his head and the revolver still at the nape of his neck. Larry does not have a gun on his person. He has earned black belts earlier in his life in Hapkido and Tae Kwon Do. He knows enough to realize a disarming attempt on one of the men will leave him vulnerable to the other. He bides his time. They walk through the bedroom, past Larry’s sleeping wife. He cannot find his billfold — it will later turn up in a pair of pants he was wearing the day before — and the robbers satisfy themselves by pulling all the money from his wife’s purse. Without waking Mrs. Goldstein, they march Larry out of the house. Their plan is to make him drive them to an ATM and empty his account.

 At the pickup, the second man tries to load the AR15 and realizes he can’t fit a .223 magazine into a .22 LR. They make him get a magazine fitting the rifle. Both robbers get into the back seat, the revolver still aimed at the base of his skull, and order him to take them to the bank’s drive-up ATM.

 Going Mobile

 Dr. Goldstein experiences a “water, water everywhere and not a drop to drink” moment. Like many armed citizens (and off-duty cops) he has presumed staging guns reasonably close in the home or vehicle will be adequate. In the console is a Walther PPK .380. In the driver’s door pocket of his pickup are a Ruger LCP .380 and a GLOCK 19. All are loaded.

 He assesses his odds if he reaches for one as he drives. Both robbers are in the back seat, the smaller man (about 5' 7", 150 lbs.) has put a full magazine into the .22 caliber AR, and Larry has to presume him to be armed even though he hasn’t spotted a weapon of the suspect’s own yet.

 The other, directly behind Larry in the rear passenger seat, has the decidedly loaded revolver he’s kept pointed at Larry’s head. The bandits have the case between them containing two CZ 9mms and mags and ammo. At the wheel, he can’t see them both at the same time in the rear-view mirror. If he conspicuously turns around to look at them, it will tip them off and put them on alert.

 Either of them will be able to clearly see if he reaches for the Walther, so the console gun is out. He might be able to slip one of the pistols out of the door pocket with his non-dominant left hand but shooting backward over his shoulder will be awkward and difficult, and he’ll be unlikely to be able to neutralize both before one of them can kill him. The logical strategy still seems to be, “Bide your time.”

 At The ATM

 The robbers ask him how much money he has in his ATM card account. Larry answers truthfully, “About $12,000.” They yell at him, “You’re lying! You live in that big house! You’ve got to have more money!”

 They pull up to the ATM. The security camera will be able to identify only Larry. The masked men in the back seat are largely shielded by the truck’s tinted windows; perhaps they had this in mind when they chose to seat themselves where they did. Larry believes he can only withdraw a thousand dollars per day and tells them so. The refrain comes again, “You’re lyin’! We’re gonna shoot ya! We want it all!” Larry answers as calmly as he can, “We can’t get it all.” They tell him to try for $1,500.

 It takes Larry a while to punch in the numbers. The machine won’t give him $1,500. He tries for $500, gets it, then gets another $500. He tries a third time but hits the wrong buttons, and the machine only gives him $20. Apparently fixated on the stated amount, the robber with the revolver tells him, “Get $480!” He does. They’re satisfied. They tell him to drive.

 As the truck is rolling, the larger criminal tells him, “Okay, we’re gonna take your truck. Go to the woods behind your house … We’re going back to your house and get your wife.” He adds, “We’re gonna put you in the trunk.” Larry has already complied with their order to give them the opening code to his gate, and he knows he has had to leave the house unlocked.

 As a medical doctor Larry Goldstein has spent his career diagnosing. The diagnosis of this particular problem is excruciatingly clear. Drive to the woods. Go to the house and get your wife. We’re gonna put you in the trunk of a pickup truck that has no trunk.

 He realizes they’re going to murder him, go back to the house, and probably murder his unsuspecting wife.

 The stakes of the game have just gone up and Larry Goldstein knows there is only one card left to play.

 Steering with his right hand, he unobtrusively reaches down with his left, lifts the GLOCK 19 from the door pocket, and surreptitiously slips it under his left thigh.

 Last Resort

 They reach a spot in the woods behind Larry’s house. They order him to stop. He does so. They order him to get out of the car.

 As he opens the driver’s door, Larry lifts his left thigh enough to discreetly pick up the GLOCK with his right hand. As he alights on the ground, the man with the revolver opens the door behind Larry’s and prepares to step out, as his accomplice comes out of the right rear door.

 Larry Goldstein channels his five years of USPSA, sweeps the 9mm up rapidly into a two-handed stance, and opens fire.

 He’s shooting as fast as he can. He can see the gunman starting to fall backward, can see a window on the right side of the car shatter as one of his bullets passes through his antagonist and strikes the glass. The robber falls backward on the rear seat, his gun still in a hand that has fallen limply down.

 Larry turns toward the second threat. The other man is running away. Larry fires three shots at him, from about 30 yards. The masked man disappears from view.

 Larry turns his attention to the downed gunman. He sees the revolver is still in his hand, snatches it away, and puts it out of the man’s reach. He grabs the blood-soaked gunman and pulls his body out of the car. The experienced MD knows a dead man when he sees one. The gunman has been hit twice in the abdomen, twice in the chest and once in the head.

 The remaining thug is running in the direction of the Goldstein home. All Larry can think of is his wife’s safety. He jumps behind the wheel of his pickup and goes after him.

 In moments, Larry has eyes on him again. The perpetrator is getting into a tan SUV, apparently the getaway car, which he has parked near the church close to Larry’s home. He starts it and begins to drive away. Larry aims his G19 and fires three rounds at the vehicle. It disappears from his view. It’s not heading toward his house; he lets it go.

 They’ve taken his iPhone. Larry Goldstein drives his truck to the nearest house, knocks on the door, and asks the lady who answers the door to call police. She hands him a phone. He first calls his wife, telling her to lock the doors. Then he calls 9-1-1 and gives a brief description of what has happened.

 The first act of the deadly play is over. The second now begins.

 Immediate Aftermath

 The scene not being exactly downtown, it took police 20 minutes to arrive. When he saw them coming, Dr. Goldstein unloaded the GLOCK, set it in the truck and stepped away from it. Patrol officers and detectives alike were professional and understanding.

 A crowd had formed. Having called his wife to reassure her, Larry phoned a friend he was supposed to pick up to go to the match with him. The friend called a mutual friend, an attorney, to meet them at police headquarters. One of the officers drove him there — in the front seat of the patrol car, un-cuffed. With legal counsel by his side, Larry told detectives what had happened. At one point the chief of police arrived. “How are you doing?” he asked Larry. “Not very good,” the doctor replied. “Don’t worry, you’re going to be all right,” the chief said.

 The chief had told him no lie.

 Long Term Aftermath

 Larry was never arrested, never sued, and never had to pay a penny in legal fees. His lawyer friend refused to bill him. He got his guns back in about a week. The escaped suspect was captured within a few days. He had used his personal vehicle as a getaway car and had taken it to an auto body shop to repair the bullet holes and shattered window caused by Larry’s gunfire. “I thought he’d be charged with felony murder,” Larry told American Handgunner later, “but the charges were kidnapping, armed robbery, and home invasion.” Legal proceedings dragged on, as they often do. “In December 2018,” says Larry, “he was convicted on all counts. His sentence added up to about 80 years. He’ll be eligible for parole in 40.”

 Dr. Goldstein got a new truck out of the deal. His had become evidence, necessarily stored with the windows up in an impound lot in Mississippi heat. The rear cabin was soaked with blood. Blood is tissue. Tissue rots. The insurance adjuster opened the door, gagged at the stench, and blurted, “It’s totaled.”

 Needless to say, the incident left an emotional mark. “The next morning when I woke up, it really dawned on me what could have happened, and I lost it,” he remembers. “I was a basket case for a while. Every time I thought about the incident, it really upset me. Later, I went to the family burial plot, and was overwhelmed at how close I had come to joining them.”

 One of the first things he did when he got home was to put a .45 caliber GLOCK 30 where he could reach it immediately. Was the dead man a gang-banger, with buddies who would seek revenge? He didn’t know, but he had to consider the possibility and provide for it. The hypervigilance remained for quite a while, and never entirely went away, even though no reprisals materialized.

 “I went to a psychologist, and studied up on post-traumatic stress disorder,” Larry comments. “I lost appetite. I had trouble sleeping. I did have a few dreams related to the incident.” Before long, he and his wife sold the house and moved. Friends and family were extremely supportive. So, he remembers, were the police and the prosecutor’s office.

 A competitive shooter, Larry had never felt a need to take a defense oriented class. This changed. His 25-minute ordeal sent him on a long odyssey of training, all the way to instructorship; in fact, he and I met when he took my MAG-40 class at the superb Boondocks training facility in Mississippi. He has found sharing with others the lessons of what he went through to be therapeutic.

 Lessons

 The doctor’s short-term hypervigilance settled into simply … vigilance. Larry feels the biggest lesson he learned was the importance of being alert and aware and avoiding complacency. He now carries a gun on his person almost all of his waking hours and is seldom far from one. “I don’t step out to pick up my newspaper or take out the trash without a gun on,” he says adamantly.

 If he had tried to fight earlier than he did, when the odds against him were all but hopeless, he would probably have been murdered and very likely his wife would have been, too. Larry was wise to give them reasons to keep him alive (getting money from the ATM on a weekend), to lull them into complacency with his compliance, and yet be ready to do what had to be done when the moment came.

 He feels his competition experience definitely helped him win the fight with the men who were almost certainly going to murder him. When the time came to shoot, he performed on auto pilot: two-handed, eye level, hits sufficiently fast, accurate and voluminous to keep a deadly opponent from pulling the trigger of the revolver in his hand. Larry had been shooting USPSA for five years when the incident took place. He has continued competition to this day.

 He’s glad he reached for the GLOCK instead of the seven- or eight-shot .380s also within reach. The G19 contained a GLOCK 17 magazine, for a total of eighteen 9mm rounds including the one in the chamber. It still had ammo on board after his three volleys of gunfire. The cartridges were match rounds, mild 147-gr. round nose FMJ handloads. His defense guns have modern defensive ammo in them today.

 When Larry tells his story, one of the first questions he gets is “Didn’t you get in trouble for shooting at the fleeing felon?” The answer is, he didn’t, and this bears some explanation. The man he shot at had committed, not just a felony, but a “heinous felony against the person”: kidnapping. They had given him every reason to believe they intended to murder him. They’d explicitly stated they were next going to get his wife, who was sleeping in an unlocked house while the felon had the combination to the security gate. Larry was without communications, and no other reasonable means of capture seemed feasible. He could not identify the suspect — the only description he could give was a masked African-American man of average size — and if he was not stopped he was likely to remain at large indefinitely.

 Finally, the cornerstone of the United States Supreme Court’s decision in Tennessee v. Garner was that even police should use deadly force on fleeing felons only if their continued freedom constituted a clear and present danger to innocent human life. Larry had ample reason to consider this man armed and extremely dangerous. Remember, the thugs had told him they were going back to his house where they all knew Mrs. Goldstein was. While Garner was a civil case and involved police, it remains the defining statement on the mood of our highest court on the use of deadly force on fleeing felons. It is why I think, in this particular set of circumstances, Larry’s actions would have been defensible in court … and it’s probably why the investigating officers and the prosecutors had no problem with Larry’s final shots. Those last three shots, remember, were important factors in the ultimate capture of the surviving thug.

 A last important lesson is that it’s BS to think “I live in a nice neighborhood, so I don’t need to keep a gun at hand.” Au contraire: Larry lived in a fine home in a very nice neighborhood, and this was one reason he was targeted! We can’t overlook how many times they told him a guy with a house as big as his should have lots of money on hand. The nice neighborhoods are where the best stuff is to steal.

The truth about tear gas and stun grenades near Lafayette Park in 2020

Jonathan Turley at his blog.

Journalism is largely dead.  Tyranny is in view.

-------------------------------------------- 

We previously discussed the hypocrisy of the D.C. government and the media after D.C. Mayor Muriel Bowser admitted in court that it was the Metropolitan Police Department who used tear gas and stun grenades near Lafayette Park in 2020. D.C. counsel also insisted that such use was entirely appropriate and sought to dismiss the lawsuit by the Black Lives Matter movement. The media effectively buried the story despite flogging a false narrative against former Attorney General Bill Barr for over a year in non-stop coverage. Barr was even denounced by members of my own faculty. Now, reporters are suing the city for attacking the media. Yet, there is no outcry in the media or from the left against Bowser and her government.

The American Civil Liberties Union has filed a lawsuit on behalf of two photojournalists who claim to have been injured by chemical irritants and stun grenades used by Metropolitan Police Department officers during racial justice protests in August 2020.

One of the plaintiffs, freelance photojournalist Oyoma Asinor, was arrested and claims that the MPD failed to return his cell phone, camera and goggles for nearly a year.

In the prior litigation, the city waited for a year to reveal the truth that it used tear gas near the park. A year earlier, Bowser condemned the federal government for its clearing of the area and alleged use of tear gas. Much of the media lionized Bowser for her stance at the time. The media also ignored the city’s own history of such abuses. She received national acclaim for painting “Black Lives Matter” on the street next to the park and renaming it “Black Lives Matter Plaza.”

One year later, Bowser kept the “BLM plaza” but opposed the BLM protesters. Her administration insisted in court that the protesters were legitimately teargassed by the metropolitan police to enforce her curfew that night.

After the park clearing, the media uniformly denounced then-Attorney General Bill Barr for ordering the park to be cleared so that President Trump could hold his controversial photo op in front of the St. John’s Church. The accounts in virtually every news report were quickly contradicted, but few reporters acknowledged the later facts coming out of federal agencies. As I noted in my testimony to Congress on the protest, the clearing of the park raised serious legal questions, particularly the unjustified use of force that night.

However, the repeated claim that Barr ordered the clearing of the area for the photo op was never supported and quickly contradicted. The plan to clear the park was set long before there was any discussion of the photo op, and it was based on the threat posed to the White House compound. Barr said he was unaware of any planned photo op when he approved the plan and that the delay in implementing it was due to the late arrival of needed personnel and fencing. Nevertheless, legal experts like University of Texas professor and CNN contributor Steve Vladeck continued to claim that Barr ordered federal officers “to forcibly clear protestors in Lafayette Park to achieve a photo op for Trump.” (Vladeck later offered a bizarre rationalization for his peddling the false account).

The false account was debunked by the Inspector General report. The BLM lawsuit against Barr and the federal government was later dismissed — again with relatively little recognition by the reporters and activists who flogged the false story for a year.

The city is being sued for precisely what Barr and others were accused of in literally hundreds of major articles for a year. Academics and reporters declared the tactics to be an assault on democracy and press freedom. Now, there is largely the familiar sound of crickets from a press corps that increasingly acts like a de facto state media.

Friday, August 13, 2021

The IPCC AR6 Hockey Stick

 Stephen McIntyre argues that the IPCC AR6 Hockey Stick rests on inappropriate data analysis.

Here is the link.

Worth a read to see how arbitrary and, possibly, inappropriate the global warming analysis can be.

Here are some excerpts.

-------------------------------

Although climate scientists keep telling us that defects in their “hockey stick” proxy reconstructions don’t matter – that it doesn’t matter whether they use data upside down, that it doesn’t matter if they cherry pick individual series depending on whether they go up in the 20th century, that it doesn’t matter if they discard series that don’t go the “right” way (“hide the decline”), that it doesn’t matter if they used contaminated data or stripbark bristlecones, that such errors don’t matter because the hockey stick itself doesn’t matter – the IPCC remains addicted to hockey sticks: lo and behold, Figure 1a of its newly minted Summary for Policy-makers contains what else – a hockey stick diagram. If you thought Michael Mann’s hockey stick was bad, imagine a woke hockey stick by woke climate scientists. As the climate scientists say, it’s even worse than we thought.

Thursday, August 12, 2021

Dying to be Cool

Will Dabbs, MD, writes about an early experience in The American Handgunner.

---------------------------------------------

What exactly does it mean to be cool? Though difficult to define, you know it when you see it. Guns are cool. So was Steve McQueen. You get kind of a gestalt about such stuff.

 Some of us spend our entire lives striving mightily to be cool yet fail quite to get there. However, many’s the young man’s unscheduled trip across the river Styx ’twas precipitated by a poorly reasoned effort to be cool.

 The Perfect Day

 It was one of those torrid Mississippi summer afternoons when the sun burned like a furnace and the air was so humid you could rip off a chunk and gnaw it. School was out; I had not a care in the world.

 In my day you got your driver’s license at 15. I wouldn’t trust today’s 15-year-old males unsupervised with gum, much less an automobile. However, this was a different time.

 While I have indeed never been mistaken for cool, my dad did see to it I rolled in a cool car. A young man’s ride is so much more than transportation. It is style, personality, character and status all packaged up on four spinning wheels. My car was pure unfiltered awesome.

 The year was 1981 and the car was a 1970 Buick Skylark convertible. The sole ragtop in my small Mississippi Delta community, it was metallic blue and immensely, nay ludicrously, powerful. I would frequently go sit in the back seat and read science fiction tomes with the top down while parked in the driveway. As I said, being cool was more a journey than a destination with me.

 On this particular day I was sporting cheap, mirrored aviator shades while tearing down a preternaturally straight stretch of Lee Drive, so named for the esteemed General. Like all adolescent males I was young, bulletproof and immortal. Harm could never befall me.

 The Power Of Stupid

 Overcome by the moment, I pushed myself up such that I was sitting atop the headrest. A gangly, long-legged lad, I manipulated the accelerator with my right great toe and kept the wheel nominally managed with my fingertips. My face was fully in the slipstream above the windshield.

 Seatbelts were not the religious sacraments they are today, so mine were tucked down out of the way behind the seat so as not to interfere with my signature dynamic entry into the vehicle — vaulting over the door to land gracefully in the driver’s seat, ready to rock. During such a maneuver, one does not desire the painful inconvenience of seatbelt buckles. As a result, I perched atop my charging metallic blue steed, restrained not one whit.

 My nemesis lurked anonymously  within the tall Johnson grass that lined the rural road, happily munching his mid-afternoon snack. Whether driven by boredom, hunger, or love will never now be known, but he did for some reason then spontaneously take flight. Spreading his broad green wings, this massive 4" Delta grasshopper flexed his powerful legs and leapt into the ether.

 I perceived a scant flurry in the periphery of my vision and my entire world exploded. The gargantuan insect caught me squarely in the forehead and detonated like an antitank grenade, knocking me bodily back into the rear seat and leaving my legs draped limply astride the headrest. At this point my trusty Skylark was still making some 70 miles per hour, though now charging randomly sans pilot.

 I clawed violently back over the seat and dropped in behind the steering wheel again, seizing the appendage in an involuntary rictus. By some miracle throughout it all the car remained within the two white lines of its own accord. No doubt the vehicle was guided solely by my guardian angel, himself a both overworked and underappreciated spook.

 Denouement

 I carefully coasted to a stop on the side of the deserted road and took stock. My sunglasses were gone, never to be seen again. A not insubstantial gash tracked rakishly across my forehead, now most liberally adorned with splintered chunks of chitin and copious pureed pest. I wiped away the gore with an oily towel and puttered meekly back home.

 I crept stealthily into the house and retired to the bathroom to attend my wounds. My dad inquired concerning my injuries over dinner, and I not untruthfully explained I had been struck by a grasshopper while out driving with the top down. All involved thought it comical.

 The truth has remained suppressed to this very day, and now, my friends, I share it with you. 

Wednesday, August 11, 2021

Why The FDA Sucks

 Scott Alexander gets it mostly right at substack.com.

It's all about incentives.

----------------------------------------------

Lots of people have been writing about aducanumab, but this Atlantic article in particular bothers me.

Backing up: aducanumab, aka Aduhelm, is a new “Alzheimers drug” recently approved by the FDA. I use the scare quotes because it’s pretty unclear whether it actually treats Alzheimers. It definitely treats beta-amyloid plaques, and beta-amyloid plaques are kind of nasty-looking brain structures that seem to be related to Alzheimers somehow. But we’re not sure exactly how they’re related, they might not be related in a way where removing them treats Alzheimers, and the best studies don’t find that the drug helps patients feel better or remember things more. Aducanumab doesn’t meet normal FDA standards for approval, but the FDA approved it anyway under one of their many “fast track” programs for promising drugs. This has been pretty roundly criticized, because although aducanumab might or might not work, it definitely costs $50,000/year/patient. Even if it worked great, that would be a hard pill to swallow (no pun intended, Aduhelm is an IV infusion), but it’s especially galling since it might not work at all. Doctors will probably prescribe it despite its questionable value, and someone will end up paying the extraordinary price tag.

(Who? Nobody knows. The patient? Insurance companies? Taxpayers? Unrelated patients at the same hospital? Could be anyone! The whole point of the US health insurance system is to make sure nobody ever figures out who bears any particular cost, so that there's no constituency for keeping prices low. If you check your bank account one day and find it's down $50,000 for no reason, I guess you were the guy who ended up on the hook for this one. Sorry!)

Given that the FDA fast-track approved a sketchy drug which probably doesn’t work, it’s fair to wonder if their standards have gotten too lax - or at least if they should stop fast-tracking things. The Atlantic article dutifully makes this case via a somewhat labored global warming metaphor. The FDA’s eroding standards are like "the eroding coastlines and thawing icebergs associated with climate change". There are proposals to make drug approvals harder again, "just as there are proposals for encouraging reductions in carbon emissions", but "as with cap and trade policies for carbon emissions, aggressive approaches have failed in the face of powerful stakeholders". Some doctors are trying to fight back, but "just as switching to an electric car or turning your lights off won't cool a warming planet, a minority of idealistic doctors won't stem the flood of ineffective treatments". The message is clear (if a little heavy-handed): just as good thoughtful people want to end climate change and only greedy polluters oppose this, so good thoughtful people want to make the FDA stricter, and only greedy pharma companies could possibly complain.

While I acknowledge that aducanumab probably sucks, I think the Atlantic article and its global warming metaphor are totally off base. Nobody in the “FDA is too strict” camp has written a rebuttal yet, so I want to try my hand at this.

The FDA Is Still Much Too Strict

The Atlantic article says that “The FDA’s standards began to slide in the late 1980s and early ’90s” with the fast-track approval of AIDS drugs:

A new program in 1992 allowed for “accelerated approval” on the basis of surrogate markers, which are indirect measures of a drug’s benefit, assessed via laboratory or imaging tests, that stand in for more meaningful outcomes such as life expectancy. But the implementation of these accelerated processes was criticized by some scientists and patients, even at the time. In 1994, for example, The New York Times cited skeptics who worried that “no one can tell if the drugs work.” Eight months later, the AIDS activist organization ACT UP San Francisco called Anthony Fauci a “pill-pushing pimp” for supporting CD4 immune-cell counts and viral loads as surrogate markers. They were completely invalid, the activists wrote, and nothing more than “a marketing exec’s wet dream.”

The article acknowledges that the AIDS drugs actually worked out great - they in fact treated AIDS effectively and saved lots of lives. “But”, it concludes, “that level of success is not at all the norm.”

I agree. AIDS drugs are abnormally successful, saving tens of thousands of lives per year. If the FDA's expedited review process moved them forward by even a few years, it probably averted a hundred thousand AIDS deaths. True, not every drug they accelerate does this. But some do.

Here’s another good example: coronavirus vaccines. The FDA still has not fully approved any coronavirus vaccine. The only reason you’re allowed to get vaccinated at all is because of a fast-track provisional approval somewhat like the one used for aducanumab. Coronavirus vaccines have probably also averted a few hundred thousand deaths.

So without wanting to say this level of success is “the norm” in the sense that every single fast-tracked drug achieves it, it’s not exactly vanishingly rare. It’s just something that happens sometimes and doesn’t happen sometimes. So how often do you have to save hundreds of thousands of lives before it’s worth the risk of occasionally also permitting a dud medication that “offers false hope”? How is this even a question?

That’s a kind of hand-wavey verbal argument. But doctors, epidemiologists, and economists have tried to formally confirm it with cost-benefit analyses on the last few decades of FDA-approved drugs. How many lives would have been saved if good drugs had been released a few years earlier, versus how many lives would have been lost by missing dangerous side effects? I think the current state of the art is something like Isakov, Lo, and Montazerhodjat, which finds that there are a tiny few disease categories where the FDA might be slightly too aggressive, but that overall the FDA is still much too conservative.

And these kinds of analyses, while good, can only count the drugs we know about. The real cost is the thousands of life-saving medications that are stillborn because nobody wants to go through the literally-one-billion-dollars-per-drug FDA approval process.

Ranting About The FDA For A Bunch Of Paragraphs

The Atlantic article doesn’t really include a cost-benefit analysis. But it does mention a couple of examples of times when lax FDA decisions seemed bad, for example when they “approved saline breast implants despite safety concerns”. I feel like this should give me the right to describe a couple of my least favorite FDA decisions, so we can see whether they’re more or less convincing than the breast implant thing.

The countries that got through COVID the best (eg South Korea and Taiwan) controlled it through test-and-trace. This allowed them to scrape by with minimal lockdown and almost no deaths. But it only worked because they started testing and tracing really quickly - almost the moment they learned that the coronavirus existed. Could the US have done equally well?

I think yes. A bunch of laboratories, universities, and health care groups came up with COVID tests before the virus was even in the US, and were 100% ready to deploy them. But when the US declared that the coronavirus was a “public health emergency”, the FDA announced that the emergency was so grave that they were banning all coronavirus testing, so that nobody could take advantage of the emergency to peddle shoddy tests. Perhaps you might feel like this is exactly the opposite of what you should do during an emergency? This is a sure sign that you will never work for the FDA.

The FDA supposedly had some plan in place to get non-shoddy coronavirus tests. For a while, this plan was “send your samples to the CDC in Atlanta, we’ll allow it if and only if they do it directly in their headquarters”. But the CDC headquarters wasn’t set up for large-scale testing, and the turnaround time to send samples to Atlanta meant that people had days to go around spreading the virus before results got back. After this proved inadequate, the FDA allowed various other things. They told labs that they would offer emergency approval for their kits - but placed such onerous requirements on getting the approval that almost no labs could achieve it (for example, you needed to prove you’d tested it against many different coronavirus samples, but it was so early in the pandemic that most people didn’t have access to that many). Then they approved a CDC kit that the CDC could send to places other than their headquarters, but this kit contained a defective component and returned “positive” every time. The defective component was easy to replace, but if you used your own copy like a cowboy then the test wouldn’t be FDA-approved anymore and you could lose your license for administering it.

A group called the Association of Public Health Laboratories literally begged the FDA to be allowed to deploy the COVID tests they had sitting on the shelf ready for use. The head of the APHL went to the head of the FDA and begged him, in what they described as “an extraordinary and rare request”, to be allowed to test for the coronavirus. The FDA head just wrote back saying that “false diagnostic test results can lead to significant adverse public health consequences”.

So everyone sat on their defective FDA-approved coronavirus tests, and their excellent high-quality non-FDA approved coronavirus tests that they were banned from using, and didn’t test anyone for coronavirus. Meanwhile, American citizens who had recently visited Wuhan or other COVID hotspots started falling sick and asking their doctors or health departments whether they had COVID. Since the FDA had essentially banned testing, those departments told their citizens that they couldn’t help and they should just use their best judgment. Most of those people went out and interacted and spread the virus, and incidence started growing exponentially. By March 1, China was testing millions of people a week, South Korea had tested 65,000 people, and the USA had done a grand total of 459 coronavirus tests. The pandemic in these three countries went pretty much how you would expect based on those numbers.

There were so, so many chances to avert this. NYT did a great article on Dr. Helen Chu, a doctor in Seattle who was running a study on flu prevalence back in February 2020, when nobody thought the coronavirus was in the US. She realized that she could test her flu samples for coronavirus, did it, and sure enough discovered that COVID had reached the US. The FDA sprung into action, awarded her a medal for her initiative, and - haha, no, they shut her down because they hadn’t approved her lab for coronavirus testing. She was trying to hand them a test-and-trace program all ready to go on a silver platter, they shut her down, and we had no idea whether/how/where the coronavirus was spreading on the US West Coast for several more weeks.

Although the FDA did kill thousands of people by unnecessarily delaying COVID tests, at least it also killed thousands of people by unnecessarily delaying COVID vaccines. I’ll let you click on links for the details (1, 2, 3, 4, etc, etc, etc) except to remind you that they still have not officially granted full approval to a single COVID vaccine, and the only reason we can get these at all is through provisional approvals that they wouldn’t have granted without so much political pressure.

I worry that people are going to come away from this with some conclusion like “wow, the FDA seemed really unprepared to handle COVID.” No. It’s not that specific. Every single thing the FDA does is like this. Every single hour of every single day the FDA does things exactly this stupid and destructive, and the only reason you never hear about the others is because they’re about some disease with a name like Schmoe’s Syndrome and a few hundred cases nationwide instead of something big and media-worthy like coronavirus. I am a doctor and sometimes I have to deal with the Schmoe’s Syndromes of the world and every f@$king time there is some story about the FDA doing something exactly this awful and counterproductive.

A while back I learned about cholestasis in infant Short Bowel Syndrome, a rare condition with only a few hundred cases nationwide. Babies cannot digest food effectively, but you can save their lives by using an IV line to direct nutrients directly into their veins. But you need to use the right nutrient fluid. The FDA approved one version of the nutrient fluid, but it caused some problems, especially liver damage. Drawing on European research, some scientists suggested that a version with fish oil would cause less liver damage - but the fish oil version wasn’t FDA-approved. A bunch of babies kept getting liver damage, and everyone knew how to stop it, but if anyone did the FDA would take away their licenses and shut them down. Around 2010, Boston Children’s Hospital found some loophole that let them add fish oil to their nutrient fluid on site, and infants with short bowel syndrome at that one hospital stopped getting liver damage, and the FDA grudgingly agreed to permit it but banned them from distributing their formulation or letting it cross state lines - so for a while if you wanted your baby to get decent treatment for this condition you had to have them spend their infancy in one specific hospital in Massachusetts. Around 2015 the FDA said that if your doctor applied for a special exemption, they would let you import the fish-oil nutritional fluid from Europe, but you were only able to apply after your baby was getting liver damage, and the FDA might just say no. Finally in 2018 the FDA got around to approving the corrected nutritional fluid and now babies with short bowel syndrome do fine, after twenty years of easily preventable state-mandated damage and death.

And it’s not just this and coronavirus, I CANNOT STRESS ENOUGH HOW TYPICAL THIS IS OF EVERYTHING THE FDA DOES ALL THE TIME.

[edit: people have asked me for more details about the fish oil story - I’ve written it up at more length here]

…anyway, The Atlantic says the FDA needs to be stricter and wait longer to approve things, and I am against this.

But How Can The FDA Be Too Strict And Not Strict Enough At The Same Time?

Very easily! Lots of things are too strict and not strict enough at the same time! I wrote a whole article on this! It sounds like it should be paradoxical, but it isn’t!

Consider the police. I once had a psychotic patient threaten to kill a family member. I reported it to the police. They asked me where they could find the patient, I said I dunno, maybe at his house or something? I called them back a few hours later asking how things were going, and they said they had knocked on the patient’s door and he hadn’t answered, so they felt like they had discharged their duty in this matter and were going to close the case. I asked if maybe they could go back to the patient’s house and try again later, and they acted like I was asking them to hunt down Osama bin Laden in the caves of Tora Bora or something.

I think this is a pretty typical experience a lot of people have dealing with the police, especially in the Bay (unofficial motto: “San Francisco - Where Crime Is Legal”). A friend had a really scary stalker, and kept reporting him to the police, and the police’s answer, phrased only slightly uncharitably, was “Have you, as of now, already been murdered by this person? No? Then stop wasting our time.” My friend was left with the feeling that the police could have been a little stricter or more proactive.

On the other hand, you get stories where police think someone might be growing marijuana or whatever, they gather a SWAT team complete with surplus tanks from Iraq, they break down the person’s door, and they shoot everyone involved because “it looked like they might be reaching for a gun”. If anyone survives, the police stick them in prison for ten years for “resisting arrest” or something. Maybe these people are left with the feeling that police could stand to be a little less strict and less proactive.

So which is it? Are the police too strict, or not strict enough? I don’t think there’s a good answer to this question. I would rather say the police are bad at their job. Maybe not literally, because being a policeman is hard, and I don’t want to judge them until I’ve walked a mile in their jackboots. But something has gone wrong, something more fundamental than just they lean too hard in one direction or another, something that requires a solution more complicated than moving a Police Intensity Lever from LESS to MORE.

My own profession is little better, as I’ve discussed before. Many people get diagnosed with psychiatric diseases and pumped full of medication when they shouldn’t be. Other people don’t get diagnosed with psychiatric diseases or treated with medication even when they desperately need it. Moving the Psychiatry Lever from MORE to LESS or vice versa might accomplish something, but it’s clearly not the whole story.

The FDA has a very hard job, and handles it with a level of badness that makes police officers look like one of those omnicompetent fictional intelligence agencies by comparison. I mean, if anyone ever gives you control of the FDA Lever, you should definitely absolutely for the love of God push it as far toward LESS as it is possible for it to go - I think this is what all those cost-benefit analyses the epidemiologists and economists publish are telling you, and it’s also what my common sense and ethics tell me. But I have to admit that this isn’t costless. It’s going to let a lot of crappy drugs like aducanumab get through and give people false hope.

(a problem which, I can’t stress enough, is not as bad as causing hundreds of thousands of people to die of easily preventable causes. Please move the lever all the way to LESS. Even if it’s already there, see if maybe you can push it a few micrometers further.)

Is there some better solution?

Sympathy For The Devil

I want to stress that, despite my feelings about the FDA, I don’t think individual FDA bureaucrats, or even necessarily the FDA director, consistently make stupid mistakes. I think that given their mandate - approve drugs that definitely work, reject ones that are unsafe/ineffective, expect people to freak out and demand your head if any unsafe/ineffective drug gets through, nobody will care no matter how many lifesaving treatments you delay or stifle outright - they’re doing the best they can. There are a few cases, like aducanumab, where it seems like they move a little faster than that mandate would suggest, and a few other cases, like infant nutrient fluid, where they move a little slower. But basically they are fulfilling their mandate to the best of the ability of the very smart people who work there.

And it’s hard to even blame the people who set the FDA’s mandate. They’re also doing the best they can given what kind of country / what kind of people we are. If some politician ever stopped fighting the Global War On Terror, then eventually some Saudi with a fertilizer bomb would slip through and kill ~5 people. And then everyone would tar and feather the politician who dared relax our vigilance, and we would all restart the Global War On Terror twice as hard, and drone strike twice as many weddings. This is true even if the War on Terror itself has an arbitrarily high cost in people killed / money spent / freedoms lost. The FDA mandate is set the same way - we’re open to paying limitless costs, as long as it lets us avoid a very specific kind of scandal which the media will turn into 24-7 humiliation of whoever let it happen. If I were a politician operating under these constraints, I’m not sure I could do any better.

So the long-term solution is to become a different kind of country and different sorts of people - eg raise the sanity waterline. This will have nice side benefits like also ending the global war on terror. But until then, are there any small changes that would help around the edges?

Unbundle FDA Approval

The most plausible small change I can think of is to unbundle FDA approval.

Consider: everyone knows the evidence for aducanumab is poor. You know it. I know it. Scientists know it. Journalists know it. So why exactly are we expecting lots of people to spend $50,000/year on this drug?

The answer is: there are complicated laws around what insurance companies have to cover, and FDA approval is a big part of them. I don’t understand the exact legalities of this, but it seems like Medicare and Medicaid have to cover anything the FDA approves. The situation with private insurances is more complicated but still not great. My guess is that if a private insurance covers an Alzheimers patient, and a doctor says that aducanumab is “medically necessary” for that patient, and the insurance doesn’t cover it, and the patient’s Alzheimers gets worse, that patient can sue the insurance company for failing to provide standard of care. What makes it standard of care? Because the FDA approved it, and Medicaid and Medicare are giving it to their patients, of course!

(Why would doctors say this useless drug is “medically necessary”? Well, some large fraction of doctors are stupid and believe whatever drug companies tell them. Some other fraction - including me - are pushovers when a really sad-looking patient begs them for the one thing they believe will help. Once an insurance company agrees to cover a drug, neither the patient nor the doctor has any incentive to avoid it just because it costs $50,000. At this point not even the most optimistic person expects forbearance by doctors to be very helpful here.)

So now that the FDA has approved this stupid useless drug, lots of doctors will prescribe it, everybody will be forced to pay for it, and the US health system will become even more prohibitively expensive. Not to any specific recognizable party who can notice or object, of course. But in general.

So when I talk about unbundling FDA approval, I mean that instead of the FDA approving the following bundle of things…

  1. It’s legal for doctors to prescribe a drug.
  2. It is mandatory for insurances to cover a drug.
…the FDA can say one of those two things, but not the other.

Right now these decisions are so charged because, if something doesn’t have FDA approval, then even someone who desperately wants a medication, and has researched it very hard, and is being treated by the world’s top specialist in their condition, and is willing to pay for it with their own money - can’t get it. But if something does have FDA approval, then any moron can get it, just because they saw a TV ad saying it was the hot new thing, and the government/insurance/other patients/Yagmuk will be forced to cover the entire price.

There’s a third thing it might be helpful to unbundle, one we’re already secretly unbundling. When the FDA delayed COVID vaccine approval, or refused to approve various brands of COVID vaccine, or suspended the distribution of COVID vaccines for bad reasons, it always had the same excuse - what if there was a side effect? The problem isn’t that people might die - people were definitely dying from their decision to delay/ban vaccines. The problem was that people might stop trusting the FDA. They would say “the FDA allowed me to take this drug, but it was dangerous, screw them, I will never take an FDA-approved drug again in my life and only use homeopathy from now on.” The FDA and medical policymakers live in terror of this scenario. They feel like if they ever allow even one bad drug through, then in the eyes of the public all kinds of anti-vax hysteria and vaccines-cause-autism bullshit will be retroactively justified, and public health officials will never have any authority ever again. If you model all FDA/CDC/etc policy as an attempt to avert this outcome, your predictions will be right more often than not.

This is another thing I’m pretty sympathetic about - social trust is a valuable resource. But it also means that public policy will forever be held hostage to the whims of the stupidest person around. Every time someone sneezes, the FDA will ban whichever brand of COVID vaccine they got - because if they didn’t, then stupid people might believe the FDA didn’t take vaccine side effects seriously, and then those stupid people would stop getting vaccines and die. This policy has led to our current situation, where either everyone has to be miserable because of stupid people’s choices (eg everyone has to wear masks forever because a few people won’t get vaccinated) or we get a strong anti-freedom lobby because allowing anyone any freedom means that the rest of us have to suffer for their stupid choices.

So maybe a third thing we could unbundle is:

        3. The FDA is staking its entire reputation on this drug.

I think that unbundling is what the FDA is trying to do right now with COVID vaccines. They approve them for emergency use. If future evidence proves the vaccines safe, then good, we got them. If future evidence proves the vaccines unsafe, then the FDA can say “yeah, well, technically we never said they were safe, so this doesn’t mean we’re ever wrong”. If some moron says “You say I should get my MMR vaccine, but, you also said I should get my COVID vaccine, and later it turned out that COVID vaccines make your eyes fall out and go rolling around the room”, then the FDA can say “Yeah, but we only gave emergency provisional approval to the COVID vaccine, whereas we’ve given complete permanent approval to the MMR vaccine.”

Maybe it’s expecting too much of the American people, but I wish the FDA could lean into this strategy. Grant drugs one-star, two-star, etc approvals. Maybe one-star would mean it seems grossly safe, the rats we gave it to didn’t die, but we can’t say anything more than that. Two-star means it’s almost certainly safe, but we have no idea about effectiveness. Three-star means some weak and controversial evidence for effectiveness, this is probably where aducanumab is right now. Four-star means that scientific consensus is lined up behind the idea that this is effective, this is probably where the COVID vaccines are right now. Five star is the really extreme one where you’re boasting that Zeus himself could not challenge the effectiveness of this drug - the level of certainty around MMR vaccine not causing autism or something like that.

Then you could attach different legal rights and requirements to each of those. Maybe the world’s top specialists could start prescribing a drug once it has two-star approval, regular doctors could prescribe it with three-star, drug companies can’t start advertising it until it’s four-star, and insurance companies are mandated to cover it once it’s five-star.

People are really scared of this solution, because it introduces choice into this system. If you say that insurance companies are allowed to cover a certain drug, but not forced to do so, then different insurances will cover different drugs, and you’ll have the usual capitalism / free market thing. Patients will have to choose which insurance to get without necessarily knowing very much about medicine, and maybe companies will try to trick or exploit them, and maybe the patients will make the wrong choice.

This is the nightmare scenario that the existing US health system exists to avoid. I know you can think of lots of different ways that changing things could go wrong, and so can I. But I can’t stress enough how often the current system results in things going wrong that nobody thought of because the things are too stupid for anyone to even imagine they were possible.

Final Thoughts

In conclusion, and contra The Atlantic, the FDA approving aducanumab is not very much like global warming at all. It is more like global warming in an alternate universe, where the government sometimes approves pollutants, and then everyone is forced to emit millions of tons of them whether they want to or not. Sometimes the government orders people to build a coal plant in the middle of the desert where nobody lives, a coal plant that isn't even connected to anything and just burns lots of coal without producing any electricity. But also, elderly people frequently freeze to death because the government refuses to give them permission to heat their house in the middle of winter. There is lively debate over whether the government should build more useless coal plants or let more elderly people freeze to death, and anyone who thinks there should be a better way of doing things is condemned as some kind of fringe libertarian. I really cannot stress enough how accurate this metaphor is or how much everything in the medical system is like this.