Monday, January 31, 2022

Another example of why you should no longer support the ACLU

Here is Jonathan Turley on the latest example of the ACLU doing the opposite of what it is supposed to do.

Look into the New Civil Liberties Alliance as a better way to support freedom.

-----------------------------------------

“A Prologue to a Farce or a Tragedy or Perhaps Both”: ACLU Opposes Transparency Law on Educational Materials

The American Civil Liberties Union (ACLU) this week opposed a model law being introduced in over a dozen states. That is not itself uncommon. The ACLU historically opposed laws that denied free speech and other rights under the Constitution, a legacy that I have long cherished and supported. However, this is a transparency law that simply requires teachers and schools to post the educational materials used in classes online. It is meant to assist parents in tracking the education of their students and the priorities of their school systems. Yet, the ACLU has declared the law to be so threatening and chilling that it has officially opposed its enactment in any state.

For those of us who have long supported the ACLU, the organization has changed dramatically in the last ten years into a more political one. Its critics recently included former ACLU head Ira Glasser, who questioned whether the ACLU still maintains its defining commitment to free speech values.

In recent years, the ACLU also struggled with controversies like an ACLU staffer encouraging activists to “break” Sen. Kyrsten Sinema (D., Ariz.) and another staffer opposing the admission of Nicholas Sandmann into college. At points, it has become a parody of itself, as when it celebrated the legacy of Ruth Bader Ginsburg by editing her words as offensive.

The greatest concern for those with a long association with the ACLU has been its shift on constitutional rights. In addition to its eroding support for free speech values, it now opposes due process rights when they support the wrong people — a striking departure from the traditional apolitical stance of the group. It has particularly opposed the rights of one particular group: parents. On parental notification laws, the ACLU has brushed aside the rights of parents to be informed (let alone have a say) in medical decisions for their children, including abortions.

This latest position is particularly baffling. I understand that the impetus of this law was opposition to racially divisive materials, including Critical Race Theory (CRT) material. Putting aside the effort to dismiss objections by technically claiming that CRT is not taught outside of law schools, parents are objecting to material that focuses on teaching concepts of white privilege and supremacy that stigmatize and demonize identity groups.

Just this month, kids in Fairfax County public schools (where all of my children have attended or currently attend) were given a “privilege bingo” exercise. The exercise titled “identifying your privilege” had students pick boxes like “military kid,” “white,” “male,” “cisgender,” “Christian,” to establish identity and privilege.

Assistant Superintendent Douglas A. Tyson said that the exercise was designed for students to determine whether authors have “privilege that may or may not be present in the work” and then to reflect on their own biases based on their race as well as economic and educational status. (Notably, Fairfax later responded to the inclusion of the military family box but not the other privileged identities like being Cis or Christian or white).

The controversy over the bingo game came after a statewide race for governor that centered on the teaching of such issues of racial privilege and identity politics. Gov. Youngkin was elected in part on a pledge to oppose such material.

This brings us back to the model law. The laws passed in states like Pennsylvania are not CRT prohibitions but mandates to post teaching materials, syllabi, and scholastic achievement information online. It is a level of transparency that is common in college and graduate schools, including my own classes.

Greater transparency on public education (like other government programs) would seem a good thing. In the category of “perfecting democracy,” information is generally a good thing. That is why we have freedom of information acts on the federal and state levels. Yet, school districts and teachers have opposed such FOIA requests in court. As a result, parents face barriers in obtaining information needed not just in making decisions about their children’s education but also in making decisions as voters. School boards are elected by the voters, who have a right and a need for such information. For those who commonly decry attacks on democracy, this is an effort that facilitates the democratic process. Parents have a say in how their public schools are run, which is why these board positions are subject to elections.

Yet, the ACLU is opposing greater transparency, declaring “Curriculum transparency bills are just thinly veiled attempts at chilling teachers and students from learning and talking about race and gender in schools.”

My immediate reaction to that statement was to ask why the ACLU is now holding forth on such political and social issues. It is not claiming that these laws are unconstitutional — they are not. It is once again using the organization to support a political cause rather than a constitutional or civil right. I understand that the ACLU is not limited to constitutional questions, but it was meant to function as an apolitical defender of civil liberties.

James Madison is often quoted for his statement that “a popular government, without popular information, or the means of acquiring it, is but a prologue to a farce or a tragedy; or, perhaps both.” What is not widely known is that Madison made that statement in response to a letter from William Taylor Barry, a Kentuckian who wrote him about the effort to create primary and secondary educational programs in his state. Information remains the paramount value in public education as well as the transparency needed to secure it.

Saturday, January 29, 2022

The sad state of academic freedom or, if you prefer, sanity

Here is a link to the filing by Professor Jason Kilborn against the University of Illinois Law School, a demonstration of the school's abject submission to the Cancel Culture.

Mission creep – a dangerous chronic disease of Government

 Here is George Will in the Washington Post.

GW is on target.

The broader message is that the outlook for freedom continues to worsen.

All too often "righting wrongs" is itself wrong.

----------------------------------------

Today’s Federal Reserve illustrates this axiom: When a government entity cannot, or would rather not, adequately perform its primary function, or when it feels that its primary function is insufficiently grand, the agency will expand its mission, thereby distracting attention from its core inadequacy.

Next Thursday, the Senate Banking Committee will hold confirmation hearings for two presidential nominees to the Federal Reserve Board — Lisa Cook, to a seat on the Fed’s Board of Governors, and Sarah Bloom Raskin, to be vice chair of the Board for bank supervision and regulation. Both would ratify the current Fed’s penchant for mission creep — actually, mission gallop. The Senate should tell both to express their abundant political passions through more suitable institutions.

Cook is a Michigan State University professor whose peer-reviewed academic writings pertinent to monetary policy are, to be polite, thin. The White House noted that she “is on the Board of the Directors of the Federal Reserve Bank of Chicago.” She was put there two weeks before Biden proposed promoting her. But she strokes progressivism’s erogenous zones: She appears to favor racial reparations. And, evidently, defunding the police: When a University of Chicago economist criticized this idea in 2020, she termed the criticism “racial harassment” for which he should be fired as editor of the Journal of Political Economy and denied “access” to students.

Putting Cook on the Fed’s board would be a travesty akin to President George W. Bush’s 2005 nomination of Harriet Miers to the Supreme Court. Raskin, of the Duke University School of Law, has a different monomania: “climate risk” threatening the financial system.

Does she mean huge, abrupt and unpredicted weather events that will (herewith the flavor of her rhetoric) “flatten” the economy and “grind it to dust”? Or decades-long climate changes that she knows the system cannot adapt to?

As part of a “broader reimagining of the economy,” the professor favors starving traditional energy companies that are hungry for credit: She advocated making fossil fuel companies ineligible for participation in the lending overseen by the Fed under the 2020 Cares Act. She anticipates “climate risk data” guiding the Fed’s asset purchases. And she hopes for “reimagined” fiduciary duty rules. And capital allocation toward government-approved, non-carbon, non-fossil fuel investments. There are many definitions of socialism, but its essence always is government, meaning political, allocation of capital.

As the Fed in Washington wades waist-deep into politics, trickle-down politicization spreads into its regional banks. For example, Neel Kashkari, president of the Federal Reserve Bank of Minneapolis, is campaigning for an amendment to the Minnesota Constitution that would create a “fundamental right to a quality education” and declare it a “paramount duty of the state to ensure quality public schools.”

This is not partisan politics because it does not involve support for a particular party, but it is politics, for two reasons: It involves a policy agenda of public spending. And its patent motivation is to shift political power away from Minnesota’s legislature, toward courts that, by defining the word “quality,” would impose policies, thereby elbowing aside the legislature.

Kashkari’s agitating for the amendment violates the Minneapolis bank’s code of conduct, which says that “although an employee may participate or may become involved in issues of general public concern or debate, the employee’s association with the Bank must not be publicized in connection with any political activity.” The Minneapolis Fed’s website proclaims the bank a “partner” of those campaigning for the amendment, which would enable lawsuits through which courts can construe “quality” public education as something requiring more money than the legislature is appropriating.

A Minneapolis Fed spokesperson blithely says Minnesota’s educational disparities are “unacceptable” because they “impede” the Fed’s “mandate to achieve maximum employment.” There you have it: The Fed, having slipped the leash of its primary job — to preserve the currency as a store of value — now claims a roving commission to do whatever it wants.

The Federal Reserve is an admirable reservoir of talent: Its economists constitute one of the world’s finest economic “faculties.” Fed Chair Jerome H. Powell should be mortified that political activists like Raskin and Cook are being insinuated into the Fed’s operations. And that Kashkari, who was once California Republicans’ gubernatorial nominee, feels free to bend the Fed to his political activities.

When you ask for trouble, the world often obliges. Powell has invited trouble by sailing the Fed obedient to the winds of Washington fashion. Under him, the Fed is becoming the government’s Swiss Army knife — an all-purpose tool. But with too many purposes to do its primary job adequately.

So, who helped Blacks the most?

Here is Jason Riley in the Wall Street Journal.

JR is on target.

Trump hatred got us Biden incompetence. I prefer competence to decency if I have to make a choice. Which do you prefer - the most competent surgeon or the nicest?

---------------------------------------------

Joe Biden began his presidency with a promise to advance equity, which means favoring some races and ethnicities over others to shrink outcome disparities. Like many of his fellow liberal Democrats, Mr. Biden is tethered to the belief that black upward mobility won’t happen without coddling and special treatment from the government. Donald Trump’s record complicates such claims.

One of the most underreported stories of the Trump presidency is the extent to which black economic fortunes improved. The mainstream media presented Mr. Trump daily as a bigot whose policies would harm the interests of racial and ethnic minorities. Meantime, black economic advancement occurred to an extent unseen not only under Barack Obama but going back several generations—until the pandemic shutdowns brought progress to a halt.

Over the first three years of Mr. Trump’s presidency, blacks (and Hispanics) experienced record-low rates of unemployment and poverty, while wages for workers at the bottom of the income scale rose faster than they did for management. Whether that was the goal of the Trump administration or an unintended consequence is a debate I’ll leave to others. But there is no doubting that the financial situation of millions of working-class black Americans improved significantly under Mr. Trump’s policies.

Well into the Trump presidency, Mr. Obama continued to take credit for the strengthening economy. “By the time I left office, wages were rising, uninsurance rate was falling, poverty was falling, and that’s what I handed off to the next guy,” he told an audience in the fall of 2018. “So, when you hear all this talk about economic miracles right now, remember who started it.”

Throughout his presidential campaign, Mr. Biden likewise claimed credit for robust economic growth. “Trump inherited the longest economic expansion in history from the Obama-Biden administration,” Mr. Biden told Bloomberg News in June 2020, then added that Mr. Trump had “turned his back on the middle class” by focusing on tax cuts for corporations and the wealthy.

Mr. Obama’s and Mr. Biden’s arguments are obviously self-serving. But does the evidence back them up? It’s true that the unemployment rate fell 5.2 percentage points, from 9.9% to 4.7%, between the end of 2009 and the end of 2016. It’s also true that, over the same period, real median household income (expressed in 2020 dollars) grew by some $3,500, to $63,683 from $60,200. These trends continued between 2017 and 2019. Yet simply noting that unemployment already was falling, and incomes already rising, when Mr. Trump took office obscures the significance of what happened over the next three years. These trends not only continued but accelerated, and they did so contrary to widespread expectations at the time.

By the end of 2016, the consensus among officials at the Treasury, the Federal Reserve and the Congressional Budget Office was that the economy essentially had reached full employment and couldn’t grow faster. During Mr. Obama’s final year in office, 2016, economic growth slipped to 1.6% from 3.1% in 2015. The rate at which the economy was expanding declined by almost half over a single year.

Mr. Trump inherited a U.S. economy that was slowing down, and there was widespread concern about another recession. Lawrence Summers, who served as Treasury secretary under President Clinton and director of the National Economic Council under Mr. Obama, said that there was a 60% chance that the economy would dip into recession within a year. For 2017, 2018 and 2019, the Fed had projected that unemployment would hover between 4.4% and 4.9% and wouldn’t fall below 4.1%, and that economic growth would remain between 1.7% and 2.2%.

Instead job growth accelerated, unemployment kept falling, and economic growth improved. In early 2017, the new president set about implementing what he had promised during the campaign: lower taxes and lighter regulation. He nominated Kevin Hassett, who had published research showing how corporate taxes depress wages for manufacturing workers, to lead the Council of Economic Advisers. He urged Congress to reduce the tax rate on corporate profits, which at 35% was one of the highest in the developed world.

Along with the push for tax reform, the administration reduced regulations that it argued were weighing on economic growth. A Cato Institute analysis of regulatory activity in the first two years of the Obama and Trump administrations counted a total of 6,793 new rulemakings for Mr. Obama and 4,310, or 36% fewer, for Mr. Trump. More significantly, among major regulations that impose a cost of $100 million or more on the private sector, the tally was 176 for Mr. Obama and only 90, just over half as many, for Mr. Trump.

Congress passed the Tax Cuts and Jobs Act in December 2017. The top corporate rate fell to 21% from 35%, and companies were given an opportunity to repatriate cash held overseas at a tax rate of 15.5%. Taxes on wages and investment also fell. It was the most significant tax-code reform in 30 years, and the dividends were almost immediate. By the end of January 2018, more than 260 businesses—including major employers such as Walmart, FedEx, and 3M—had announced wage and salary increases, bonuses, and 401(k) match increases going to at least three million workers because of the new law.

Gross domestic product, which had grown only 1.6% in 2016, climbed 2.2% in 2017 and 2.9% in 2018. As remarkable was the change in gross private domestic investment, which is a measure of how much money businesses invest within their own country. It had declined by 1.3% in 2016, but grew by 4.8% in 2017 and 6% in 2018. Lower taxes and lighter regulations were intended to spur economic growth, and business responded accordingly.

Part of what made the Trump boom unique, however, is who benefited the most. The economy grew in ways that mostly benefited low-income and middle-class households, categories that cover a disproportionate number of blacks. In 2016 the percentage of blacks who hadn’t completed high school was nearly double that of whites—15% vs. 8%—and the percentage of adults with a bachelor’s degree was 35% for whites and only 21% for blacks.

These education gaps are reflected in work patterns. Blacks are overrepresented in the retail, healthcare and transportation industries, which provide tens of millions of working- and middle-class jobs. In 2019, 54% of black households earned less than $50,000 a year, versus 33% of white households. At the other end of the income distribution, slightly more than half of all white households (50.7%) earned at least $75,000, compared with less than a third (29.4%) of black households. What this means is that reductions in income inequality can translate into reductions in racial inequality, which is what the country experienced in the pre-pandemic Trump economy.

Between 2017 and 2019, median household incomes grew by 15.4% among blacks and only 11.5% among whites. The investment bank Goldman Sachs released a paper in March 2019 that showed pay for those at the lower end of the wage distribution rising at nearly double the rate of pay for those at the upper end. Average hourly earnings were growing at rates that hadn’t been seen in almost a decade, but what “has set this rise apart is that it’s the first time during the economic recovery that began in mid-2009 that the bottom half of earners are benefiting more than the top half—in fact, about twice as much,” CNBC reported.

Citing a graph included in Goldman’s analysis, CNBC added that the “trend began in 2018”—the first year that the corporate tax cuts were in effect—“and has continued into this year and could be signaling a stronger economy than many experts think.”

Most other media outlets ignored the Goldman findings, but a Journal editorial cited Bureau of Labor Statistics figures and reported that during the first 11 quarters of the Trump presidency, wages for workers at the bottom rose more than twice as fast as during Mr. Obama’s second term. Over the same period, less-educated workers, such as those with just a high-school diploma or only some college, saw their wages rise at triple the rate they did during Mr. Obama’s second term.

Giving Mr. Obama most of the credit for the better economic outcomes that occurred after he left office is a stretch. It’s also somewhat disingenuous. Mr. Trump’s critics are presenting a kind of heads-I-win-tails-you-lose analysis. They are eager to credit Mr. Obama for the economy’s pre-Covid performance under Mr. Trump, but who believes they would have blamed Mr. Obama if things had gone sideways as they predicted?

The story of black economic advancement in the Trump years deserves wider notice. Liberals contend that wealth redistribution and racial preferences are the best way to facilitate upward mobility, yet the previous administration’s focus on economic growth had a far more positive impact on the lives of millions of working-class Americans, a disproportionate number of whom were black. Racial inequality narrowed on Mr. Trump’s watch, and much of the media were too busy agitating against him to take note or give credit where due.

Saturday, January 22, 2022

More evidence that media journalism is largely dead

 Here is Jonathan Turley with another example of common media behavior that suggests that media journalism has been replaced with rage.

More of the "woke" and "cancel" culture.  It must be stopped before our society is lost.  Speak out against it.

------------------------------------------

“She Can Write Any #$@!% Thing She Wants”: Totenberg Slams NPR’s Own Ombudsman Over Debunked Gorsuch Story

Nina Totenberg slammed Kelly McBride, the ombudsman for National Public Radio (NPR), for concluding that she should rewrite her story accusing Neil Gorsuch of refusing to wear a mask to protect his colleague, Sonia Sotomayor. McBride did not suggest a correction but merely a “clarification.” Totenberg responded to The Daily Beast and declared that McBride “can write any goddamn thing she wants, whether or not I think it’s true.” Now, days after rare public denials by all three referenced justices, many in the media who denounced Gorsuch have followed suit. They also refuse to clarify or address their own attacks on the justice in light of the denials from the Court. Notably, Gorsuch was the subject of another false story connected to the same oral argument. Many also did not correct that reporting. (For full disclosure, I testified before the Senate in support of Gorsuch’s confirmation).

The philosopher Alexis de Tocqueville once said that “there is hardly a political question in the United States which does not sooner or later turn into a judicial one.” That is certainly the case with the Supreme Court this month. After striking down the Biden vaccine mandate for workplaces, the Court found itself embroiled in the raging question over masks in the workplace after the NPR story.

Nevertheless, Totenberg pounced at the chance to (again) pummel Gorsuch:

“Chief Justice John Roberts, understanding that, in some form asked the other justices to mask up. They all did. Except Gorsuch, who, as it happens, sits next to Sotomayor on the bench. His continued refusal since then has also meant that Sotomayor has not attended the justices’ weekly conference in person, joining instead by telephone.”

It did not matter that Totenberg had previously attacked Gorsuch. The media showed the same hair-trigger tendency with previously debunked stories.

Gorsuch did appear in the last argument without a mask. Ironically, if he had simply worn the commonly used cloth mask, there would have been no outcry even though the masks do not appear to block these variants and even CNN’s experts are calling the cloth masks “little more than facial decorations.”

It is also not clear that Sotomayor even knew whether anyone or everyone would wear masks at the argument. She had previously stated an intention to participate remotely. Given the lack of protection from most masks (including reused or contaminated N95 masks), Sotomayor likely felt the risk was not worth taking. Yet, Totenberg states as a fact that Gorsuch’s “continued refusal since then has … meant that Sotomayor has not attended the justices’ weekly conference in person, joining instead by telephone.”

None of this mattered as the media ran with the story of Gorsuch forcing Sotomayor to stay virtual and refusing to yield to Roberts’ alleged encouragement to wear a mask.

MSNBC’s Nicolle Wallace declared Gorsuch guilty of “anti-mask insanity.” Her colleague Joy Reid accused Gorsuch of virtually standing Sotomayor up in front of a Covid firing squad for his personal enjoyment. Reid declared that Gorsuch was “risking the life of your colleague” and was a “rotten co-worker,” “dangerous to be near in a pandemic,” and “tonight’s absolute worst.” Reid even declared on the air that Gorsuch “loves COVID — which makes him the perfect Republican.”

Rolling Stone ran with the story “Neil Gorsuch Stands Up for His Right to Endanger Sonia Sotomayor’s Health,” and added “the liberal Supreme Court justice is diabetic and didn’t want to sit next to justices who weren’t wearing masks. Her conservative colleague didn’t care.”

Former senator Claire McCaskill tweeted:

So glad I voted no on this jerk. What kind of guy does this? I could tell in my meeting with him that he thought he was better than everyone else, more important, smarter. Ugh. #Gorsuch

The Daily Kos declared:

“it is hard to imagine a bigger shit. But we should not be surprised…Most Americans will find his selfishness incredible, but it is typical of his kind. One trait common to every conservative is a sociopathic lack of empathy.”

Elie Mystal, who has written for Above the Law and the Nation, tweeted:

Confirmation of what we all already knew. Whatever you think about masks, Gorsuch, who sits next to Sotomayor at work, just decided to be a dick to a colleague.

Then came the denial of all three justices.

Chief Justice John Roberts also issued a statement that it was false, as claimed, that he asked any of his colleagues to wear masks on the bench. Indeed, previously the justices did not wear masks during arguments. Moreover, Gorsuch is routinely shown wearing a mask around the courthouse.

The joint statement of the two justices insists that Totenberg’s account is entirely false:

“Reporting that Justice Sotomayor asked Justice Gorsuch to wear a mask surprised us. It is false. While we may sometimes disagree about the law, we are warm colleagues and friends.”

Notably, these are three jurists who interpret the Constitution, statutes, treaties, and agreements for a living. All three read the Totenberg report and felt compelled to issue rare public statements to refute the story.

NPR’s ombudsman found the story in need of clarification, and her reading of the story was shared by everyone who heard the report (though Fox News’ Shannon Bream quickly and correctly challenged the report with her own sources denying the story). Listeners understood NPR as saying that Gorsuch refused to wear a mask after Roberts asked all of his colleagues to do so to protect Sotomayor. That interpretation was readily apparent from the ragefest on cable news and the Internet as media figures lined up to denounce Gorsuch as a type of viral homicidal maniac.

In response to the justices, Totenberg insisted that she never said that Gorsuch was directly asked by Roberts to wear a mask and did not say that he rebuffed a request from Sotomayor. However, Totenberg pushed the false narrative of the story as it went viral. Totenberg tweeted the following description of her story: “Gorsuch refuses to mask up to protect Sotomayor.”

Strangely, Totenberg seemed to argue that her much promoted piece was really not much news at all. Roberts may not have asked anyone to wear masks and Sotomayor’s remote participation may have had nothing to do with Gorsuch. Indeed, even if Gorsuch wore the common cloth mask, it would not, according to various studies, afford her real protection against the variant. The problem is how virtually everyone understood her story, as evidenced by the coverage.

NPR stood by the story even though its own ombudsman suggested that it should be clarified. Totenberg immediately ran with the NPR support and backhanded the ombudsman:

Totenberg: NPR stands by my reporting.

NPR reporter David Gura went even further and suggested that the justices might simply be lying and we should not take their account over that of Nina Totenberg. Gura tweeted “I [sic] surprised at how many Supreme Court correspondents I admire are passing along a statement from two justices that is at best false without any context whatsoever.”

Totenberg went on to say that, as a journalist, she did not even read the views of NPR’s own ombudsman review: “I haven’t even looked at it, and I don’t care to look at it because I report to the news division, she does not report to the news division.”

The NPR story is the latest example of rage politics and how the underlying truth is immaterial to the narrative.

I wrote earlier that it really does not matter that the story was false or misleading. As expected, the media simply moved on without admitting errors. It is a pattern that we have seen repeatedly. We have discussed the false reporting in controversies ranging from the Lafayette Park protests to the Nicholas Sandmann controversy to the Russian collusion scandal to cases like the Rittenhouse trial.

We are left with a zen-like “tree-falling-in-the-forest” paradox: it is not fake news if the news will not admit to faking it. The fact is that people like an ombudsman can “write any goddamn thing” they want, but if it is not reported, it matters little. Gorsuch “loves Covid” and wants to kill a liberal colleague . . . whether he does or not.

Friday, January 21, 2022

Why I stopped supporting the ACLU

I used to support the ACLU because I wanted to support legal action to defend civil liberties, i.e., freedom. But the people who ran the ACLU couldn't resist ignoring civil liberties they didn't like and using the ACLU's resources to further their political agenda. For example, they would not support the Second Amendment and they used their resources to back politicians they liked.

I now support the New Civil Liberties Alliance instead.

Organizations that cannot "stick to their own business" provide a mix of "services" that is determined by the people who control the organizations.  There is little likelihood that the mix is "optimal" for anyone.  In contrast, individuals can obtain an optimal mix of services (or a lot closer to it) by supporting a mix of organizations that each focus narrowly. Organizations that focus narrowly offer more "choice" than organizations that do not.  In that sense, the former are more valuable than the latter.

Here is an example that illustrates the problem - in this case made worse by the Government and the Courts.

Avraham Goldstein in the Wall Street Journal: "I'm Stuck With an Anti-Semitic Labor Union".

-----------------------------------------

As an observant Orthodox Jew born in the Soviet Union, I’m no stranger to resisting intimidation at significant cost. In 1971 my family petitioned authorities to leave the country after we suffered anti-Semitic harassment and physical assault. Our petition was denied and we were forced to live and work in that hostile environment for 15 years. Eventually, we settled in Israel, and I later established an academic career in New York. But the bigotry I fled has caught up with me.

I am a tenured professor of mathematics at the City University of New York. I joined the faculty union, the Professional Staff Congress, nearly 10 years ago. In June, union officials—who speak for me under state law—issued a resolution I, and many of my colleagues, view as anti-Semitic.

The resolution condemned “the continued subjection of Palestinians to the state-supported displacement, occupation, and use of lethal force by Israel” and required chapter-level discussion of possible union support for the anti-Israel boycott, divestment and sanctions movement. It equated the policies of Israel, of which I am a citizen and where I still have family, with apartheid. Many of my Jewish colleagues and I were outraged.

I had paid thousands of dollars in union dues for workplace representation, not for political statements or attacks on my beliefs and identity. I decided to resign my union membership and naively thought I could leave the union and its politics behind for good. I was wrong. Union officials refused my resignation and continued taking union dues out of my paycheck. But those weren’t the issues that led me and five of my colleagues to sue them.

Under New York law, even if I resign from the union, I will never be free to bargain or speak for myself when it comes to matters of my employment as a CUNY professor. I am forced to rely on a union that says anti-Semitic, hateful things about Israel to negotiate on my behalf.

In New York, public employees who aren’t union members are still forced to accept the union as their sole representative for collective bargaining. The Supreme Court’s 2018 ruling in Janus v. Afscme acknowledged this power imbalance—effectively a state-sanctioned monopoly—which also exists in most other states. But New York takes this gift to union officials a step further.

In Janus, the court ruled that public-employee unions can’t force nonmember employees to pay union fees as a condition of employment. Knowing that many unions would still act as nonmember employees’ exclusive representatives, the court underscored that union officials have a legal duty to represent all employees fairly.

But right before Janus was decided, New York union officials convinced lawmakers to amend New York’s public employment statute, the Taylor Law. This amendment explicitly reduces what unions must do for nonmember employees, effectively undermining the protections the court in Janus called a “necessary concomitant” to the power of exclusive representation. In New York’s eyes, nonmembers like me legally can be treated as an underclass, deserving of lesser services than our union-member colleagues.

When my family fled the Soviet Union, we expected never again to be treated as second-class citizens. Now I have a choice: Disrupt my life and damage my career again or rely on the constitutional protections that set America apart from most other countries on earth. I’m done running.

With the help of the Fairness Center, a nonprofit law firm, and attorneys from the National Right to Work Legal Defense Foundation, I joined my colleagues in filing a federal lawsuit in January to vindicate our First Amendment rights of free speech and association. New York law shouldn’t provide cover for unions at the cost of individual freedom. Nor should it countenance forcing Jews to associate with a union that doesn’t want them around.

Sunday, January 16, 2022

Stay the f**k away from me - a soliloquy from a college professor

 Here is a link to the professor's video.  It is hilarious - although too many people will take it too seriously and get upset.  To those people, I say - get a life.

Be sure to watch the video before reading Jonathan Turley's take below.

-----------------------------------------------------------

“Stay the f**k away from me”: Professor Placed On Leave After Calling Students “Vectors of Disease” and Promising Random Grading.

Professor Barry Mehler at Ferris State University in Michigan clearly does not want to return to in-person classes. Appearing in a space helmet, Mehler went full Howard Beale in a video in which he called his students “vectors of disease” and told them to “stay the f**k away from me.” While many have declared Mehler completely insane, his video may be as clever as a covid-phobic fox. Let me explain.

Mehler teaches the history of science and is the founder and director of the Institute for the Study of Academic Racism.

In the video below, Mehler lashes out at the requirement that he return to in-person classes despite the risk to his health as an older person. He is profane, insulting, and taunting.

He is also being clearly sarcastic and waggish at points. For example, he tells the students that he randomly assigns grades at the start of the course because he does not care who they are or what they do in this class: “None of you c**ksuckers are good enough to earn an A in my class. So I randomly assign grades before the first day of class.” However, he later explains how they can earn an A without coming to class if they do the other work.

He uses the pre-written speech (you can see the script when he shares the screen) to attack religion, Western Civilization, America’s legacy, and both the students and the university.

Mehler may set a record for the purely profane in his diatribe:

“I may have f***ed up my life flatter than hammered s***, but I stand before you today beholding to no human c*ksucker,” Mehler says. “I’m working in a paid f***ing union job and no limber-d*ck c*ksucker of an administrator is going to tell me how to teach my classes. Because I’m a f****** tenured professor. So, if you want to go complain to your dean, f*** you, go ahead. I’m retiring at the end of this year and I couldn’t give a flying f*** any longer.”

At one point he declares “[w]hen I look out at a classroom filled with fifty students, I see fifty selfish kids who don’t give a sh*t whether grandpa lives or dies. And if you won’t expose your grandpa to a possible infection with COVID, then stay the f*** away from me.”

It is Howard Beale with a doctorate.

So is this just madness? Perhaps, but I don’t think so. Three clues can be derived from the video. First, there is the fact that this was a pre-written “soliloquy.” It sounds like a spontaneous diatribe, but it is a calculated and intentionally worded address. It could be more Machiavellian than Bealean in that sense. While Mehler does call his students “vectors of disease,” he then shows how he took that language loosely from a movie as a teachable moment on plagiarism.

Second, Mehler reveals that he does not want to teach in person. To that end, he encourages students not to come to class and assures them that their grades will not be impacted. Indeed, he strongly suggests that he will look with disfavor on those who appear in this class.

Third, Mehler says that this is his last year before retiring and he has tenure (and union) protections. He encourages the students to complain to the university. Indeed, he almost begs them to do so. They did and the university expressed the predictable shock as it placed him on leave.

So what does that all mean? It could mean that Mehler was trying to get himself put on leave. (Hopefully, he can still return the $300 space helmet). Before the university can fire him, it must investigate him and follow grievance procedures. He will claim that this was an effort at being edgy and humorous. That process could well take the year, and Mehler would simply retire. In the meantime, he and his space helmet can stay at home.

Or he may be crazy.

Wednesday, January 12, 2022

A statistical problem that makes estimates of climate sensitivity to greenhouse gases problematic (unsettled)

 Here is "An Introductory-Level Explanation of my Critique of AT99" by Ross McKitrick.

The message is that the techniques for estimating climate sensitivity to greenhouse gases relied upon by the climate alarmist crowd - including many "climate scientists" - are flawed, hence the estimates cannot be taken at face value.

--------------------------------

1 INTRODUCTION
My article in Climate Dynamics shows that the AT99 method is theoretically flawed and gives unreliable results. A careful statement of the implications must note an elementary principle of logic. Remember that, according to logic, we can say “Suppose A implies B; then if A is true therefore B is true.” Example: all dogs have fur; a beagle is a dog; therefore a beagle has fur. But we cannot say “Suppose A implies B; A is not true therefore B is not true.” Example: all dogs have fur; a cat is not a dog, therefore a cat does not have fur. But we can say “Suppose A implies B; A is not true therefore we do not know if B is true.” Example: all dogs have fur; a dolphin is not a dog, therefore we do not know if a dolphin has fur.

In this example “A” is the statistical argument in AT99 which they invoked to prove “B”—the claim that their model yields unbiased and valid results. I showed that “A”, their statistical argument, is not true. So we have no basis to say that their model yields unbiased and valid results. In my article I go further and explain why there are reasons to believe the results will typically be invalid. I also list the conditions needed to prove their claims of validity. I don’t think it can be done, for reasons stated in the paper, but I leave open the possibility. Absent such proof, applications of their method over the past 20 years leave us uninformed about the influence of GHG’s on the climate. Here I will try to explain the main elements of the statistical argument.

2 REGRESSION
Most people are familiar with the idea of drawing a line of best fit through a scatter of data. This is called linear regression. Consider a sample of data showing, for example, wife’s age plotted against the husband’s age.

[Missing chart: scatter diagram with a clear lower-left to upper-right pattern of dots, i.e., an obvious positive correlation.]

Clearly the two are correlated: older men have older wives and vice versa. You can easily picture drawing a straight line of best fit.

The formula for a straight line is 𝑌 = 𝑎 + 𝑏𝑋. Here, Y and X are the names of the variables. In the above example Y stands for wife’s age and X stands for husband’s age. a and b are the coefficients to be estimated. b is the slope coefficient. When you draw the line of best fit you are selecting numerical values for a and b. We may be interested in knowing whether b is positive, which implies that an increase in X is associated with an increase in Y. In the above example it clearly is: any reasonable line through the sample would slope upwards. But in other cases it is not so obvious. For example:

[Missing chart: scatter diagram with no obvious positive or negative slope; the true correlation is unclear.]

Here a line of best fit would be nearly horizontal, but might slope up. For the purpose of picturing why statistical theory becomes important for interpreting regression analysis it is better to have in mind the above graph rather than the earlier one. We rarely have data sets where the relationship is as obvious as it is in the husband-wife example. We are more often trying to get subtle patterns out of much noisier data.

It can be particularly difficult to tell if slope lines are positive if we are working in multiple dimensions: for instance if we are fitting a line 𝑌 = 𝑎 + 𝑏𝑋 + 𝑐𝑊 + 𝑑𝑍 through a data set that also has variables W and Z and their coefficients c and d to contend with. Regardless of the model we need some way of testing if the true value of b is definitely positive or not. That requires a bit more theory.
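
To make the mechanics concrete, here is a minimal sketch in Python (using NumPy; the data and parameter values are synthetic and purely illustrative, not from any real survey) of fitting a line of best fit by ordinary least squares and reading off the slope. Adding further variables such as W and Z simply means adding more columns to the design matrix.

    import numpy as np

    # Synthetic, illustrative data: "husband's age" X and "wife's age" Y
    rng = np.random.default_rng(0)
    X = rng.uniform(25, 75, size=200)              # husband's age
    Y = 2.0 + 0.95 * X + rng.normal(0, 4, 200)     # wife's age: noisy but clearly related

    # Fit Y = a + b*X by ordinary least squares (the line of best fit)
    design = np.column_stack([np.ones_like(X), X])     # columns: intercept, X
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    a_hat, b_hat = coef
    print(f"a_hat = {a_hat:.2f}, b_hat = {b_hat:.2f}")  # b_hat near 0.95: upward slope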

Note that regression models can establish correlation, but correlation is not causation. Older men do not cause their wives to be older; it is just that people who marry tend to be of the same age group. If we found deaths by drowning to be correlated with ice cream consumption, it would not prove that eating ice cream causes drowning. It is more likely that both occur in warm weather, so the onset of summer causes both events to rise at the same time. Regression models can help support interpretations of causality if there are other grounds for making such a connection, but it must be done very cautiously and only after rigorously testing whether the model has omitted important explanatory variables.

3 SAMPLING AND VARIANCE
The first example above is a plot of a sample of data. It is clearly not the entire collection of husbands and wives in the world. A sample is a subset of a population. When we do statistical analysis we have to take account of the fact that we are working with a sample rather than the entire population (in principle, the larger the sample, the more representative it is for the entire population). The line of best fit through the sample can only ever yield an estimate of the true value of b. In conventional notation we denote an estimate of b with a ‘hat’, writing it 𝑏̂. Because it is an estimate, we can only really talk about a range of possible values. Regression yields a distribution of possible estimates, some more likely than others. If you fit a line through data using a simple program like Excel it might only report the central slope estimate 𝑏̂ but what the underlying theory yields is a distribution of possible values.

Most people are familiar with the idea of a ‘bell curve’ which summarizes data, like the distribution of grades in a class, where many values are clustered around the mean and the number of observations diminishes as you go further away from the mean. The wideness of a distribution is summarized by a number called the variance. If the variance is low the distribution is narrow and if it is high the distribution is wide:

[Missing chart: narrow and wide bell curves illustrating low and high variance.]

Regression analysis yields an estimate both of 𝑏̂ and its variance 𝑣(𝑏̂). A closely related concept is the standard error of 𝑏̂ which is the square root of 𝑣(𝑏̂) and can be denoted with a Greek sigma: 𝜎̂. Statistical theory tells us that, as long as the regression model satisfies a certain set of conditions, there is a 95% probability that the true (population) value of b is inside an interval bounded approximately by 2𝜎̂ above and below 𝑏̂. This is called the 95% Confidence Interval.

Given a sample of data on (in this case) X and Y, we can use regression methods to fit a line 𝑌 = 𝑎̂ + 𝑏̂𝑋 and if we are confident 𝑏̂ is above zero it implies that an increase in X leads to an increase in Y. “Confident” here means that 𝑏̂ is more than 2𝜎̂ greater than zero. If it isn’t we say that the coefficient is positive but not statistically significant.
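
As a concrete illustration of these quantities, the sketch below (Python with the statsmodels package; the data and the true slope are again made up for illustration) pulls the slope estimate, its standard error, and the 95% confidence interval out of a standard regression routine.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = rng.normal(size=300)
    Y = 0.5 + 0.2 * X + rng.normal(size=300)      # true slope b = 0.2, fairly noisy data

    fit = sm.OLS(Y, sm.add_constant(X)).fit()
    b_hat = fit.params[1]                         # slope estimate b-hat
    se = fit.bse[1]                               # its standard error (sigma-hat)
    lo, hi = fit.conf_int(alpha=0.05)[1]          # 95% confidence interval for b

    # "Statistically significant" means the interval excludes zero, i.e. b_hat
    # is more than roughly 2 standard errors away from zero.
    print(f"b_hat = {b_hat:.3f}, se = {se:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")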

4 BIAS, EFFICIENCY AND CONSISTENCY
The value of 𝑏̂ is obtained using a formula that takes in the sample data and pops out a number. There are many formulas that can be used. The most popular one is called Ordinary Least Squares or OLS. It is derived by supposing that the straight line allows us to predict the value of Y that corresponds with each value of X, but there will be an error in each such prediction, and we should choose the values of 𝑎̂ and 𝑏̂ that minimize the sum of the squared errors. OLS also yields an estimate of the variances of each coefficient.

Expected value is a concept in statistics that refers to a probability-weighted average of a random variable. The expected value of a random variable 𝑔 is denoted 𝐸(𝑔). OLS yields a distribution for 𝑏̂, which means it has an expected value. Statistical theory can be used to show that, as long as the regression model satisfies a certain set of conditions, 𝐸(𝑏̂) = 𝑏. In other words, the expected value is the true value. In this case we say the estimator is unbiased. It is also the case that the variance estimate is unbiased (again as long as the regression model satisfies a certain set of conditions).

Since there are many possible estimation formulas besides OLS, we need to think about why we would prefer OLS to the others. One reason is that, among all the options that yield unbiased estimates, OLS yields the smallest variance. So it makes the best use of the available data and gives us the smallest 95% Confidence Interval. We call this efficiency.

Some formulas give us estimated slope coefficients or variances that are biased when the sample size is small, but as the sample size gets larger the bias disappears and the variance goes to zero, so the distribution collapses onto the true value. This is called consistency. An inconsistent estimator has the undesirable property that as we get more and more data we have no assurance that our coefficient estimates get closer to the truth, even if they look like they are getting more precise because the variance is shrinking (though it does not go to zero).
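
A small simulation makes unbiasedness and consistency concrete: draw many samples from a model with a known true slope, and the OLS estimates average out to that true value while their spread shrinks as the sample grows. This is only an illustrative sketch (Python/NumPy, invented numbers), not anything from AT99.

    import numpy as np

    rng = np.random.default_rng(2)
    TRUE_B = 0.4

    def ols_slope(n):
        """Draw one sample of size n and return the OLS slope estimate."""
        x = rng.normal(size=n)
        y = 1.0 + TRUE_B * x + rng.normal(size=n)
        design = np.column_stack([np.ones(n), x])
        return np.linalg.lstsq(design, y, rcond=None)[0][1]

    for n in (30, 300, 3000):
        estimates = np.array([ols_slope(n) for _ in range(2000)])
        # The mean of the estimates sits near TRUE_B (unbiasedness); their spread
        # shrinks toward zero as n grows (consistency).
        print(f"n={n:5d}  mean={estimates.mean():.3f}  sd={estimates.std():.3f}")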

5 THE GAUSS-MARKOV CONDITIONS AND SPECIFICATION TESTING
I have several times referred to “a certain set of conditions” that a regression model needs to satisfy in order for OLS to yield unbiased, efficient and consistent estimates. These conditions are listed in any introductory econometrics textbook and they are called the Gauss-Markov (GM) conditions. Much of the field of econometrics (which is a branch of statistics focused on using regression analysis to build economic models) is focused on testing for failures of the GM conditions and proposing remedies when failures are detected.

Some failures of the GM conditions imply that 𝑏̂ will still be unbiased, but its variance estimate is biased. So we might get a decent estimate of the slope coefficient but our judgment of whether it is significant or not will be unreliable. Other failures of the GM conditions imply that both 𝑏̂ and 𝜎̂ are biased. In this case the analysis may be spurious and totally meaningless.

As an example of a bad research design, suppose we have data from hundreds of US cities over many years showing both the annual number of crimes in the city and the number of police officers on the streets, and we regress the annual number of crimes on the annual number of police officers to test if crime goes down when more police are deployed. There are several problems that would likely lead to multiple GM conditions failing. First, the sample consists of small and large cities together, so the range and dispersion of the data over the sample will vary, which can cause biased variance estimates. Second, there will be lag effects where a change in policing might lead to a change in crime only after a certain amount of time has passed, which can bias the coefficient and variance estimates. Third, while crime may depend on policing, policing levels may also depend on the amount of crime, so both variables are determined by each other: one is not clearly determined outside the model. This can severely bias the coefficients and lead to spurious conclusions (such as that more policing leads to higher crime levels). Finally, both crime and policing depend on factors not included in the model, and unless those outside factors are uncorrelated with the level of policing the coefficient and variance estimates will be biased.

It is therefore critical to test for failures of the GM conditions. There is a huge literature in econometrics on this topic, which is called specification testing. Students who learn regression analysis learn specification testing all the way along. If a regression model is used for economics research, the results would never be taken at face value without at least some elementary specification tests being reported.
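
As one example of what an elementary specification test looks like in practice, the sketch below (Python with statsmodels, run on deliberately misspecified synthetic data) applies the standard Breusch-Pagan check for non-constant error variance, one of the GM conditions. It is meant only to illustrate the kind of routine check being discussed here, not to reproduce anything in AT99.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(3)
    x = rng.uniform(1, 10, size=400)
    # The error variance grows with x, deliberately violating the
    # constant-variance (homoskedasticity) condition.
    y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)

    exog = sm.add_constant(x)
    fit = sm.OLS(y, exog).fit()

    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, exog)
    # A small p-value flags the violation: the naive OLS variance estimates,
    # and hence any significance judgments, would be unreliable here.
    print(f"Breusch-Pagan p-value: {lm_pvalue:.4f}")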

There is a class of data transformations that can be used to remedy violations of some GM conditions, and when they are applied we then say we are using Generalized Least Squares or GLS. Having applied a GLS transformation doesn’t mean we can assume the GM conditions automatically hold; they still have to be tested. In some cases a GLS transformation is still not enough and other modifications to the model are needed to achieve unbiasedness and consistency.
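
Using the same synthetic data as in the sketch above, a GLS-type remedy reweights the observations and then re-tests the conditions rather than assuming them. A rough sketch (Python/statsmodels; the weights are simply assumed known for the illustration):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    x = rng.uniform(1, 10, size=400)
    y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # error variance rises with x

    exog = sm.add_constant(x)
    # Weighted least squares is the simplest GLS transformation: weight each
    # observation by the inverse of its error variance (here assumed known).
    gls_fit = sm.WLS(y, exog, weights=1.0 / (0.3 * x) ** 2).fit()
    print(gls_fit.params)   # intercept and slope after the reweighting

    # The transformation alone does not guarantee that the GM conditions now
    # hold; the reweighted model still has to be checked before it is trusted.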

6 THE AT99 METHOD
Various authors prior to AT99 had proposed comparing observed climate measures to analogues simulated in climate models with and without GHG’s (which are called “response patterns”) to try to determine if including the effect of GHG’s significantly helps explain the observations, which would then support making an attribution of cause. They refer to their method as “fingerprinting” or “optimal fingerprinting.” Those authors had also argued that the analysis would need to be aided by rescaling the data according to local climatic variability: put more weight on areas where the climate is inherently more stable and less weight on areas where it is “noisier”. To do that required having an estimate of something called the “climate noise covariance matrix” or 𝐶𝑁, which measures the variability of the climate in each location and, for each pair of locations, how their climate conditions correlate with each other. Rather than using observed data to compute 𝐶𝑁, climatologists have long preferred to use climate models. While there were reasons for this choice, it created many problems (which I discuss in my paper). Once 𝐶𝑁 is obtained from a climate model, to compute the required regression weights one needs to do a bit of linear algebra: first compute the inverse of 𝐶𝑁 and then compute the matrix root of the inverse. This would yield a weighting matrix P that would help “extract” information more efficiently from the data set.

One problem the scientists ran into, however, is that climate models don’t have enough resolution to identify all the elements of the 𝐶𝑁 matrix independently. In mathematical terms we say it is “rank deficient”, and an implication is that the inverse of 𝐶𝑁 does not exist. So the scientists chose to use an approximation called a “pseudo-inverse” to compute the needed weights. This created further problems.
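
In linear-algebra terms, the weighting step amounts to something like the following sketch (Python/NumPy; the matrix is a tiny made-up example, not a climate-model covariance matrix), including the pseudo-inverse workaround needed when the matrix is rank deficient.

    import numpy as np

    # A tiny illustrative "climate noise covariance" matrix, deliberately
    # rank deficient (rank 2 in 3 dimensions), so an ordinary inverse fails.
    C_N = np.array([[1.0, 1.0, 0.0],
                    [1.0, 2.0, 1.0],
                    [0.0, 1.0, 1.0]])

    # Moore-Penrose pseudo-inverse in place of the non-existent inverse.
    C_pinv = np.linalg.pinv(C_N)

    # Matrix square root of the pseudo-inverse via the eigendecomposition;
    # P is the weighting matrix applied to the data before the regression.
    eigvals, eigvecs = np.linalg.eigh(C_pinv)
    P = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ eigvecs.T

    print(np.round(P, 3))   # up-weights low-noise directions, down-weights noisy ones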

7 THE AT99 ERROR
AT99 noted that applying a weighting scheme makes the fingerprinting model like a GLS regression. And, they argued, a GLS model satisfies the GM conditions. Therefore the results of this method will be unbiased and efficient. That slightly oversimplifies their argument, but not by much. And the main error is obvious. You can’t know if a model satisfies the GM conditions unless you test for specific violations. AT99 didn’t even state the GM conditions correctly, much less propose any tests for violations.

In fact they derailed the whole idea of specification testing by arguing that one only needs to test that the climate model noise covariance estimates are “reliable” (their term—which they did not define), and they proposed a test statistic which they called the “Residual Consistency” or RC test for that purpose. They didn’t offer any proof that the RC test does what they claimed it does. For example it has nothing to do with showing that the residuals are consistent estimates of the unknown error terms. In fact they didn’t even state precisely what it tests; they only said that if the formula they propose pops out a small number, the fingerprinting regression is valid. In my paper I explained that there can easily be cases where the RC test would yield a small number even in models that are known to be misspecified and unreliable.

And that, with only one slight modification, has been the method used by the climate science profession for 20 years. A large body of literature is based on this flawed methodology. No one noticed the errors in the AT99 discussion of the GM conditions, no one minded the absence of any derivation of the RC test, and none of the hundreds of applications of the AT99 method were subject to conventional specification testing. So we have no basis for accepting any claims that the results of the optimal fingerprinting literature are unbiased or consistent. In fact, as I argued in my paper, the AT99 method as set out in their paper automatically fails at least one GM condition and likely more. So the results have to be assumed to be unreliable.

The slight modification came in 2003 when Myles Allen and a different coauthor, Peter Stott, proposed shifting from GLS to another estimator called Total Least Squares or TLS. It still involves using an estimate of 𝐶𝑁 to rescale the data, but the slope coefficients are selected using a different formula. Their rationale for TLS was that the climate model-generated variables in the fingerprinting regression are themselves pretty ‘noisy’ and this can cause GLS to yield coefficient estimates that are biased downwards. This is true, but econometricians deal with this problem using a technique called Instrumental Variables or IV. We don’t use TLS (in fact almost no one outside of climatology uses it) because, among other things, if the regression model is misspecified, TLS over-corrects and imparts an upward bias to the results. It is also extremely inefficient compared to OLS. IV models can be shown to be consistent and unbiased. TLS models can’t, unless the researcher makes some restrictive assumptions about the variances in the data set which themselves can’t be tested; in other words, unless the modeler “assumes the problem away.”
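
For readers curious what TLS actually computes, the textbook version can be written as a small SVD calculation. The sketch below (Python/NumPy, one noisy regressor, invented numbers) shows only the generic estimator, not the specific implementation used in the fingerprinting papers, along with the downward OLS bias that motivated the switch.

    import numpy as np

    rng = np.random.default_rng(5)
    n, true_b = 500, 0.8
    x_true = rng.normal(size=n)
    x = x_true + rng.normal(scale=0.5, size=n)          # regressor observed with noise
    y = true_b * x_true + rng.normal(scale=0.5, size=n)

    xc, yc = x - x.mean(), y - y.mean()                 # center, so no intercept needed

    # OLS slope is biased toward zero when the regressor is noisy (attenuation).
    b_ols = (xc @ yc) / (xc @ xc)

    # Total Least Squares: the direction of the smallest singular value of the
    # stacked data matrix [x, y] defines the fitted line.
    _, _, vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
    v = vt[-1]
    b_tls = -v[0] / v[1]

    print(f"OLS slope {b_ols:.3f} (biased low), TLS slope {b_tls:.3f} (near {true_b})")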

8 IMPLICATIONS AND NEXT STEPS
The AT99 method fails the GM conditions. As a result, its usage (including the TLS variant) yields results which might by chance be right, but in general will be biased and inconsistent and therefore cannot be assumed to be reliable. Nothing in the method itself (including use of the RC test) allows scientists to claim more than that.

The AT99 framework has another important limitation which renders it unsuitable for testing the hypothesis that greenhouse gases cause changes in the climate. The method depends on the assumption that the model which generates the 𝐶𝑁 matrix and the response patterns is a true representation of the climate system. Such data cannot be the basis of a test that contradicts the assumed structure of the climate model. The reason has to do with how hypotheses are tested. Going back to the earlier example of estimating 𝑏̂ and its distribution, statistical theory allows us to construct a test score (which I’ll call t) using the data and the output of the regression analysis which will have a known distribution if the true value of b is zero. If the computed value of t lies way out in the tails of such a distribution then it is likely not consistent with the hypothesis that 𝑏 = 0. In other words, hypothesis testing says “If the true value of b is zero, then the statistic t will be close to the middle of its distribution. If it is not close to the middle, b is probably not zero.”

For this to work requires us to be able to derive the distribution of the test statistic under the hypothesis that the true value of b is zero. In the fingerprinting regression framework, suppose b represents the measure of the influence of GHGs on the climate. The optimal fingerprinting method obliges us to use data generated by climate models to estimate both b and its variance. But climate models are built under the assumption that GHGs have a large positive effect on the climate, or 𝑏 > 0. So we can’t use that data to estimate the distribution of a statistic under the assumption that 𝑏 = 0. Such a test will be spurious and unreliable.
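As a generic illustration of the mechanics (a sketch only, not the fingerprinting regression itself): the test statistic is the estimated slope divided by its standard error, and the p-value only means something because we can say how that statistic behaves when the true b is zero. If the data used to construct it were generated under the assumption that b > 0, the comparison becomes circular.

```python
# Generic illustration of a t-test on a regression slope (not the
# fingerprinting model): t = b_hat / se(b_hat), judged against the
# distribution b_hat would have if the true b were zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)
b_true = 0.0                      # the null hypothesis is actually true here
y = 1.0 + b_true * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
coef, res_ss, _, _ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res_ss[0] / (n - 2)
se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])

t = coef[1] / se_b
p = 2 * stats.t.sf(abs(t), df=n - 2)
print(f"t = {t:.2f}, p = {p:.3f}")   # with b_true = 0, t should sit near the middle
```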

Tuesday, January 11, 2022

Spreading Omicron May Be Safer

Here is a Wall Street Journal Opinion article by Vivek Ramaswamy and Apoorva Ramaswamy titled "Slow the Spread? Speeding It May Be Safer".

I suspect these guys are right.  If so, it provides another example of the incompetence, or worse, of Government and a whole bunch of other organizations and people.
-----------------------------------------
The Omicron variant is spreading across the globe, but so far the strain appears to be less deadly than its predecessors. That’s good news, but here’s a risk that policy makers in every country should appreciate: Policies designed to slow the spread of Omicron may end up creating a supervariant that is more infectious, more virulent and more resistant to vaccines. That would be a man-made disaster.
To minimize that risk, policy makers must tolerate the rapid spread of milder variants. This will require difficult trade-offs, but it will save lives in the long run. We should end mask mandates and social distancing in most settings not because they don’t slow the spread—the usual argument against such measures—but because they probably do.

To understand why, first consider an important scientific distinction, between antigenic drift and antigenic shift. Antigens are molecules—such as the spike protein of SARS-CoV-2—that an immune system detects as foreign. The host immune system then mounts a response.

“Antigenic drift” describes the process by which single-point mutations (small genetic errors) randomly occur during the viral replication process. The result is minor alterations to antigens such as the spike protein. If a point mutation makes the virus less likely to survive, that variant gradually dies off. But if the mutation confers an incremental survival advantage—say, the ability to spread more quickly from one cell to another—then that strain becomes more likely to spread through the population.

Antigenic drift is a gradual, varying process: A single-point mutation alters one peptide, or building block, of a larger protein. Hosts with immunity against a prior strain generally enjoy at least partial immunity against “drifted” variants. This is called “cross-protection.”

Each time an immune host is exposed to a slightly different antigenic variant, the host can tweak its immune response without becoming severely ill. And the more similar the new strain is to the last version the person fought off, the less risky that strain will be to the host.

By contrast, “antigenic shift” refers to a discontinuous quantum leap from one antigen (or set of antigens) to a very different antigen (or set of antigens). New viral strains—such as those that jump from one species to another—tend to emerge from antigenic shift. The biological causes of antigenic shift are often different from those of antigenic drift. For example, the physical swap of whole sections of the genome leads to more significant changes to viral genes than those caused by individual point mutations.

But there’s a sorites paradox: How many unique point mutations collectively constitute an antigenic shift, especially when human hosts are deprived of opportunities to update their immune response to “drifted” variants?

Vaccinated and naturally immune people can revamp their immune response to new viral strains created by antigenic drift. Yet social distancing and masking increase the risk of vaccine-resistant strains from antigenic shift by minimizing opportunities for the vaccinated and naturally immune to tailor their immune responses through periodic exposures to incrementally “drifted” variants.

This is a familiar notion in virology. Take the rise of severe shingles cases over the past decade, partly a result of the widespread use of the chickenpox vaccine. Shingles and chickenpox are caused by the same virus. Before widespread use of the chickenpox vaccine, parents regularly updated their own immunity by getting exposed to chickenpox from their children, or from other adults who were exposed by children. But now that most children are vaccinated against chickenpox and don’t contract it, older adults suffer from more severe cases of shingles.

The absolute risk of a more virulent strain of SARS-CoV-2 is low. That’s because viruses “care” more about propagating themselves than about killing the host: Most viruses evolve to become more infectious and less virulent. But this is only a rule of thumb, not a biological law. Like any trend, we should expect a distribution of outcomes around the modal one—and the more iterations you allow, the more likely you are to get an unlikely outcome. Enforcing social-distancing policies amid widespread vaccination makes the emergence of a vaccine-resistant superstrain more likely.

Why not prepare for this outcome simply by developing new vaccines against novel strains more quickly? Because even mRNA vaccines can’t be developed fast enough to outrun a vaccine-resistant supervariant. On Dec. 8, Pfizer committed to delivering its first batch of new vaccines that cover the Omicron variant within 100 days. Yet by mid-March, a significant percentage of the U.S. population will have already been infected with Omicron.

Meanwhile, mask mandates and social-distancing measures will have created fertile ground for new variants that evade vaccination even more effectively. Significant antigenic shifts may create new strains that are increasingly difficult to target with vaccines at all. There are no vaccines for many viruses, despite decades of effort to develop them.

Will relaxing restrictions come at the cost of more hospitalizations and deaths as the next variant starts to spread? Perhaps, but it would reduce the risk of a worst-case scenario and greater loss of life in the long run.

The most important step in fighting the Covid-19 pandemic was the distribution of vaccines. With this milestone now achieved, the global response should shift from preventing the spread to minimizing the probability of an antigenic shift. Whether SARS-CoV-2 was made in a lab is the subject of debate, but let’s make sure we don’t manufacture an even more dangerous strain of the virus with misguided policies.

Monday, January 10, 2022

The 60-Year-Old Scientific Screwup That Helped Covid Kill

 Here is a great article by Megan Molteni at wired.com.

Another case of experts being both wrong and obstinate - both in and out of Government.

---------------------------------------

EARLY ONE MORNING, Linsey Marr tiptoed to her dining room table, slipped on a headset, and fired up Zoom. On her computer screen, dozens of familiar faces began to appear. She also saw a few people she didn’t know, including Maria Van Kerkhove, the World Health Organization’s technical lead for Covid-19, and other expert advisers to the WHO. It was just past 1 pm Geneva time on April 3, 2020, but in Blacksburg, Virginia, where Marr lives with her husband and two children, dawn was just beginning to break.

Marr is an aerosol scientist at Virginia Tech and one of the few in the world who also studies infectious diseases. To her, the new coronavirus looked as if it could hang in the air, infecting anyone who breathed in enough of it. For people indoors, that posed a considerable risk. But the WHO didn’t seem to have caught on. Just days before, the organization had tweeted “FACT: #COVID19 is NOT airborne.” That’s why Marr was skipping her usual morning workout to join 35 other aerosol scientists. They were trying to warn the WHO it was making a big mistake.

Over Zoom, they laid out the case. They ticked through a growing list of superspreading events in restaurants, call centers, cruise ships, and a choir rehearsal, instances where people got sick even when they were across the room from a contagious person. The incidents contradicted the WHO’s main safety guidelines of keeping 3 to 6 feet of distance between people and frequent handwashing. If SARS-CoV-2 traveled only in large droplets that immediately fell to the ground, as the WHO was saying, then wouldn’t the distancing and the handwashing have prevented such outbreaks? Infectious air was the more likely culprit, they argued. But the WHO’s experts appeared to be unmoved. If they were going to call Covid-19 airborne, they wanted more direct evidence—proof, which could take months to gather, that the virus was abundant in the air. Meanwhile, thousands of people were falling ill every day.

On the video call, tensions rose. At one point, Lidia Morawska, a revered atmospheric physicist who had arranged the meeting, tried to explain how far infectious particles of different sizes could potentially travel. One of the WHO experts abruptly cut her off, telling her she was wrong, Marr recalls. His rudeness shocked her. “You just don’t argue with Lidia about physics,” she says.

Morawska had spent more than two decades advising a different branch of the WHO on the impacts of air pollution. When it came to flecks of soot and ash belched out by smokestacks and tailpipes, the organization readily accepted the physics she was describing—that particles of many sizes can hang aloft, travel far, and be inhaled. Now, though, the WHO’s advisers seemed to be saying those same laws didn’t apply to virus-laced respiratory particles. To them, the word airborne only applied to particles smaller than 5 microns. Trapped in their group-specific jargon, the two camps on Zoom literally couldn’t understand one another.

When the call ended, Marr sat back heavily, feeling an old frustration coiling tighter in her body. She itched to go for a run, to pound it out footfall by footfall into the pavement. “It felt like they had already made up their minds and they were just entertaining us,” she recalls. Marr was no stranger to being ignored by members of the medical establishment. Often seen as an epistemic trespasser, she was used to persevering through skepticism and outright rejection. This time, however, so much more than her ego was at stake. The beginning of a global pandemic was a terrible time to get into a fight over words. But she had an inkling that the verbal sparring was a symptom of a bigger problem—that outdated science was underpinning public health policy. She had to get through to them. But first, she had to crack the mystery of why their communication was failing so badly.

MARR SPENT THE first many years of her career studying air pollution, just as Morawska had. But her priorities began to change in the late 2000s, when Marr sent her oldest child off to day care. That winter, she noticed how waves of runny noses, chest colds, and flu swept through the classrooms, despite the staff’s rigorous disinfection routines. “Could these common infections actually be in the air?” she wondered. Marr picked up a few introductory medical textbooks to satisfy her curiosity.

According to the medical canon, nearly all respiratory infections transmit through coughs or sneezes: Whenever a sick person hacks, bacteria and viruses spray out like bullets from a gun, quickly falling and sticking to any surface within a blast radius of 3 to 6 feet. If these droplets alight on a nose or mouth (or on a hand that then touches the face), they can cause an infection. Only a few diseases were thought to break this droplet rule. Measles and tuberculosis transmit a different way; they’re described as “airborne.” Those pathogens travel inside aerosols, microscopic particles that can stay suspended for hours and travel longer distances. They can spread when contagious people simply breathe.

The distinction between droplet and airborne transmission has enormous consequences. To combat droplets, a leading precaution is to wash hands frequently with soap and water. To fight infectious aerosols, the air itself is the enemy. In hospitals, that means expensive isolation wards and N95 masks for all medical staff.

The books Marr flipped through drew the line between droplets and aerosols at 5 microns. A micron is a unit of measurement equal to one-millionth of a meter. By this definition, any infectious particle smaller than 5 microns in diameter is an aerosol; anything bigger is a droplet. The more she looked, the more she found that number. The WHO and the US Centers for Disease Control and Prevention also listed 5 microns as the fulcrum on which the droplet-aerosol dichotomy toggled.

There was just one literally tiny problem: “The physics of it is all wrong,” Marr says. That much seemed obvious to her from everything she knew about how things move through air. Reality is far messier, with particles much larger than 5 microns staying afloat and behaving like aerosols, depending on heat, humidity, and airspeed. “I’d see the wrong number over and over again, and I just found that disturbing,” she says. The error meant that the medical community had a distorted picture of how people might get sick.

Epidemiologists have long observed that most respiratory bugs require close contact to spread. Yet in that small space, a lot can happen. A sick person might cough droplets onto your face, emit small aerosols that you inhale, or shake your hand, which you then use to rub your nose. Any one of those mechanisms might transmit the virus. “Technically, it’s very hard to separate them and see which one is causing the infection,” Marr says. For long-distance infections, only the smallest particles could be to blame. Up close, though, particles of all sizes were in play. Yet, for decades, droplets were seen as the main culprit.

Marr decided to collect some data of her own. Installing air samplers in places such as day cares and airplanes, she frequently found the flu virus where the textbooks said it shouldn’t be—hiding in the air, most often in particles small enough to stay aloft for hours. And there was enough of it to make people sick.

In 2011, this should have been major news. Instead, the major medical journals rejected her manuscript. Even as she ran new experiments that added evidence to the idea that influenza was infecting people via aerosols, only one niche publisher, The Journal of the Royal Society Interface, was consistently receptive to her work. In the siloed world of academia, aerosols had always been the domain of engineers and physicists, and pathogens purely a medical concern; Marr was one of the rare people who tried to straddle the divide. “I was definitely fringe,” she says.

Thinking it might help her overcome this resistance, she’d try from time to time to figure out where the flawed 5-micron figure had come from. But she always got stuck. The medical textbooks simply stated it as fact, without a citation, as if it were pulled from the air itself. Eventually she got tired of trying, her research and life moved on, and the 5-micron mystery faded into the background. Until, that is, December 2019, when a paper crossed her desk from the lab of Yuguo Li.

An indoor-air researcher at the University of Hong Kong, Li had made a name for himself during the first SARS outbreak, in 2003. His investigation of an outbreak at the Amoy Gardens apartment complex provided the strongest evidence that a coronavirus could be airborne. But in the intervening decades, he’d also struggled to convince the public health community that their risk calculus was off. Eventually, he decided to work out the math. Li’s elegant simulations showed that when a person coughed or sneezed, the heavy droplets were too few and the targets—an open mouth, nostrils, eyes—too small to account for much infection. Li’s team had concluded, therefore, that the public health establishment had it backward and that most colds, flu, and other respiratory illnesses must spread through aerosols instead.

Their findings, they argued, exposed the fallacy of the 5-micron boundary. And they’d gone a step further, tracing the number back to a decades-old document the CDC had published for hospitals. Marr couldn’t help but feel a surge of excitement. A journal had asked her to review Li’s paper, and she didn’t mask her feelings as she sketched out her reply. On January 22, 2020, she wrote, “This work is hugely important in challenging the existing dogma about how infectious disease is transmitted in droplets and aerosols.”

Even as she composed her note, the implications of Li’s work were far from theoretical. Hours later, Chinese government officials cut off any travel in and out of the city of Wuhan, in a desperate attempt to contain an as-yet-unnamed respiratory disease burning through the 11-million-person megalopolis. As the pandemic shut down country after country, the WHO and the CDC told people to wash their hands, scrub surfaces, and maintain social distance. They didn’t say anything about masks or the dangers of being indoors.

A FEW DAYS after the April Zoom meeting with the WHO, Marr got an email from another aerosol scientist who had been on the call, an atmospheric chemist at the University of Colorado Boulder named Jose-Luis Jimenez. He’d become fixated on the WHO recommendation that people stay 3 to 6 feet apart from one another. As far as he could tell, that social distancing guideline seemed to be based on a few studies from the 1930s and ’40s. But the authors of those experiments actually argued for the possibility of airborne transmission, which by definition would involve distances over 6 feet. None of it seemed to add up.

Marr told him about her concerns with the 5-micron boundary and suggested that their two issues might be linked. If the 6-foot guideline was built off of an incorrect definition of droplets, the 5-micron error wasn’t just some arcane detail. It seemed to sit at the heart of the WHO’s and the CDC’s flawed guidance. Finding its origin suddenly became a priority. But to hunt it down, Marr, Jimenez, and their collaborators needed help. They needed a historian.

Luckily, Marr knew one, a Virginia Tech scholar named Tom Ewing who specialized in the history of tuberculosis and influenza. They talked. He suggested they bring on board a graduate student he happened to know who was good at this particular form of forensics. The team agreed. “This will be very interesting,” Marr wrote in an email to Jimenez on April 13. “I think we’re going to find a house of cards.”

The graduate student in question was Katie Randall. Covid had just dealt her dissertation a big blow—she could no longer conduct in-person research, so she’d promised her adviser she would devote the spring to sorting out her dissertation and nothing else. But then an email from Ewing arrived in her inbox describing Marr’s quest and the clues her team had so far unearthed, which were “layered like an archaeology site, with shards that might make up a pot,” he wrote. That did it. She was in.

Randall had studied citation tracking, a type of scholastic detective work where the clues aren’t blood sprays and stray fibers but buried references to long-ago studies, reports, and other records. She started digging where Li and the others had left off—with various WHO and CDC papers. But she didn’t find any more clues than they had. Dead end.

She tried another tack. Everyone agreed that tuberculosis was airborne. So she plugged “5 microns” and “tuberculosis” into a search of the CDC’s archives. She scrolled and scrolled until she reached the earliest document on tuberculosis prevention that mentioned aerosol size. It cited an out-of-print book written by a Harvard engineer named William Firth Wells. Published in 1955, it was called Airborne Contagion and Air Hygiene. A lead!

In the Before Times, she would have acquired the book through interlibrary loan. With the pandemic shutting down universities, that was no longer an option. On the wilds of the open internet, Randall tracked down a first edition from a rare book seller for $500—a hefty expense for a side project with essentially no funding. But then one of the university’s librarians came through and located a digital copy in Michigan. Randall began to dig in.

In the words of Wells’ manuscript, she found a man at the end of his career, rushing to contextualize more than 23 years of research. She started reading his early work, including one of the studies Jimenez had mentioned. In 1934, Wells and his wife, Mildred Weeks Wells, a physician, analyzed air samples and plotted a curve showing how the opposing forces of gravity and evaporation acted on respiratory particles. The couple’s calculations made it possible to predict the time it would take a particle of a given size to travel from someone’s mouth to the ground. According to them, particles bigger than 100 microns sank within seconds. Smaller particles stayed in the air. Randall paused at the curve they’d drawn. To her, it seemed to foreshadow the idea of a droplet-aerosol dichotomy, but one that should have pivoted around 100 microns, not 5.
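The flavor of the Wellses’ result can be reproduced with a back-of-the-envelope calculation. Here is a rough sketch (not their actual model, and it ignores the evaporation term their curve accounted for) using Stokes’ law for the terminal fall speed of a droplet released at roughly mouth height.

```python
# Rough Stokes-law estimate of how long respiratory droplets of different
# sizes take to fall about 1.5 m (ignores evaporation, which Wells did model).
g = 9.81            # gravitational acceleration, m/s^2
mu = 1.8e-5         # dynamic viscosity of air, Pa*s
rho = 1000.0        # droplet density, kg/m^3 (roughly water)
height = 1.5        # m, roughly mouth height

for diameter_um in (100, 20, 5, 1):
    r = diameter_um * 1e-6 / 2
    v = 2 * rho * g * r**2 / (9 * mu)       # Stokes terminal velocity
    print(f"{diameter_um:>3} um: falls {height} m in ~{height / v:,.0f} s")
```

Run this way, a 100-micron droplet reaches the floor in a few seconds while a 5-micron particle takes on the order of half an hour, which is why the location of the dividing line matters so much.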

The book was long, more than 400 pages, and Randall was still on the hook for her dissertation. She was also helping her restless 6-year-old daughter navigate remote kindergarten, now that Covid had closed her school. So it was often not until late at night, after everyone had gone to bed, that she could return to it, taking detailed notes about each day’s progress.

One night she read about experiments Wells did in the 1940s in which he installed air-disinfecting ultraviolet lights inside schools. In the classrooms with UV lamps installed, fewer kids came down with the measles. He concluded that the measles virus must have been in the air. Randall was struck by this. She knew that measles didn’t get recognized as an airborne disease until decades later. What had happened?

Part of medical rhetoric is understanding why certain ideas take hold and others don’t. So as spring turned to summer, Randall started to investigate how Wells’ contemporaries perceived him. That’s how she found the writings of Alexander Langmuir, the influential chief epidemiologist of the newly established CDC. Like his peers, Langmuir had been brought up in the Gospel of Personal Cleanliness, an obsession that made handwashing the bedrock of US public health policy. He seemed to view Wells’ ideas about airborne transmission as retrograde, seeing in them a slide back toward an ancient, irrational terror of bad air—the “miasma theory” that had prevailed for centuries. Langmuir dismissed them as little more than “interesting theoretical points.”

But at the same time, Langmuir was growing increasingly preoccupied by the threat of biological warfare. He worried about enemies carpeting US cities in airborne pathogens. In March 1951, just months after the start of the Korean War, Langmuir published a report in which he simultaneously disparaged Wells’ belief in airborne infection and credited his work as being foundational to understanding the physics of airborne infection.

How curious, Randall thought. She kept reading.

In the report, Langmuir cited a few studies from the 1940s looking at the health hazards of working in mines and factories, which showed the mucus of the nose and throat to be exceptionally good at filtering out particles bigger than 5 microns. The smaller ones, however, could slip deep into the lungs and cause irreversible damage. If someone wanted to turn a rare and nasty pathogen into a potent agent of mass infection, Langmuir wrote, the thing to do would be to formulate it into a liquid that could be aerosolized into particles smaller than 5 microns, small enough to bypass the body’s main defenses. Curious indeed. Randall made a note.

When she returned to Wells’ book a few days later, she noticed he too had written about those industrial hygiene studies. They had inspired Wells to investigate what role particle size played in the likelihood of natural respiratory infections. He designed a study using tuberculosis-causing bacteria. The bug was hardy and could be aerosolized, and if it landed in the lungs, it grew into a small lesion. He exposed rabbits to similar doses of the bacteria, pumped into their chambers either as a fine (smaller than 5 microns) or coarse (bigger than 5 microns) mist. The animals that got the fine treatment fell ill, and upon autopsy it was clear their lungs bulged with lesions. The bunnies that received the coarse blast appeared no worse for the wear.

For days, Randall worked like this—going back and forth between Wells and Langmuir, moving forward and backward in time. As she got into Langmuir’s later writings, she observed a shift in his tone. In articles he wrote up until the 1980s, toward the end of his career, he admitted he had been wrong about airborne infection. It was possible.

A big part of what changed Langmuir’s mind was one of Wells’ final studies. Working at a VA hospital in Baltimore, Wells and his collaborators had pumped exhaust air from a tuberculosis ward into the cages of about 150 guinea pigs on the building’s top floor. Month after month, a few guinea pigs came down with tuberculosis. Still, public health authorities were skeptical. They complained that the experiment lacked controls. So Wells’ team added another 150 animals, but this time they included UV lights to kill any germs in the air. Those guinea pigs stayed healthy. That was it, the first incontrovertible evidence that a human disease—tuberculosis—could be airborne, and not even the public health big hats could ignore it.

The groundbreaking results were published in 1962. Wells died in September of the following year. A month later, Langmuir mentioned the late engineer in a speech to public health workers. It was Wells, he said, that they had to thank for illuminating their inadequate response to a growing epidemic of tuberculosis. He emphasized that the problematic particles—the ones they had to worry about—were smaller than 5 microns.

Inside Randall’s head, something snapped into place. She shot forward in time, to that first tuberculosis guidance document where she had started her investigation. She had learned from it that tuberculosis is a curious critter; it can only invade a subset of human cells in the deepest reaches of the lungs. Most bugs are more promiscuous. They can embed in particles of any size and infect cells all along the respiratory tract.


What must have happened, she thought, was that after Wells died, scientists inside the CDC conflated his observations. They plucked the size of the particle that transmits tuberculosis out of context, making 5 microns stand in for a general definition of airborne spread. Wells’ 100-micron threshold got left behind. “You can see that the idea of what is respirable, what stays airborne, and what is infectious are all being flattened into this 5-micron phenomenon,” Randall says. Over time, through blind repetition, the error sank deeper into the medical canon. The CDC did not respond to multiple requests for comment.

In June, she Zoomed into a meeting with the rest of the team to share what she had found. Marr almost couldn’t believe someone had cracked it. “It was like, ‘Oh my gosh, this is where the 5 microns came from?!’” After all these years, she finally had an answer. But getting to the bottom of the 5-micron myth was only the first step. Dislodging it from decades of public health doctrine would mean convincing two of the world’s most powerful health authorities not only that they were wrong but that the error was incredibly—and urgently—consequential.

WHILE RANDALL WAS digging through the past, her collaborators were planning a campaign. In July, Marr and Jimenez went public, signing their names to an open letter addressed to public health authorities, including the WHO. Along with 237 other scientists and physicians, they warned that without stronger recommendations for masking and ventilation, airborne spread of SARS-CoV-2 would undermine even the most vigorous testing, tracing, and social distancing efforts.

The news made headlines. And it provoked a strong backlash. Prominent public health personalities rushed to defend the WHO. Twitter fights ensued. Saskia Popescu, an infection-prevention epidemiologist who is now a biodefense professor at George Mason University, was willing to buy the idea that people were getting Covid by breathing in aerosols, but only at close range. That’s not airborne in the way public health people use the word. “It’s a very weighted term that changes how we approach things,” she says. “It’s not something you can toss around haphazardly.”

Days later, the WHO released an updated scientific brief, acknowledging that aerosols couldn’t be ruled out, especially in poorly ventilated places. But it stuck to the 3- to 6-foot rule, advising people to wear masks indoors only if they couldn’t keep that distance. Jimenez was incensed. “It is misinformation, and it is making it difficult for ppl to protect themselves,” he tweeted about the update. “E.g. 50+ reports of schools, offices forbidding portable HEPA units because of @CDCgov and @WHO downplaying aerosols.”

While Jimenez and others sparred on social media, Marr worked behind the scenes to raise awareness of the misunderstandings around aerosols. She started talking to Kimberly Prather, an atmospheric chemist at UC San Diego, who had the ear of prominent public health leaders within the CDC and on the White House Covid Task Force. In July, the two women sent slides to Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. One of them showed the trajectory of a 5-micron particle released from the height of the average person’s mouth. It went farther than 6 feet—hundreds of feet farther. A few weeks later, speaking to an audience at Harvard Medical School, Fauci admitted that the 5-micron distinction was wrong—and had been for years. “Bottom line is, there is much more aerosol than we thought,” he said. (Fauci declined to be interviewed for this story.)

Still, the droplet dogma reigned. In early October, Marr and a group of scientists and doctors published a letter in Science urging everyone to get on the same page about how infectious particles move, starting with ditching the 5-micron threshold. Only then could they provide clear and effective advice to the public. That same day, the CDC updated its guidance to acknowledge that SARS-CoV-2 can spread through long-lingering aerosols. But it didn’t emphasize them.

That winter, the WHO also began to talk more publicly about aerosols. On December 1, the organization finally recommended that everyone always wear a mask indoors wherever Covid-19 is spreading. In an interview, the WHO’s Maria Van Kerkhove said that the change reflects the organization’s commitment to evolving its guidance when the scientific evidence compels a change. She maintains that the WHO has paid attention to airborne transmission from the beginning—first in hospitals, then at places such as bars and restaurants. “The reason we’re promoting ventilation is that this virus can be airborne,” Van Kerkhove says. But because that term has a specific meaning in the medical community, she admits to avoiding it—and emphasizing instead the types of settings that pose the biggest risks. Does she think that decision has harmed the public health response, or cost lives? No, she says. “People know what they need to do to protect themselves.”

Yet she admits it may be time to rethink the old droplet-airborne dichotomy. According to Van Kerkhove, the WHO plans to formally review its definitions for describing disease transmission in 2021.

For Yuguo Li, whose work had so inspired Marr, these moves have given him a sliver of hope. “Tragedy always teaches us something,” he says. The lesson he thinks people are finally starting to learn is that airborne transmission is both more complicated and less scary than once believed. SARS-CoV-2, like many respiratory diseases, is airborne, but not wildly so. It isn’t like measles, which is so contagious it infects 90 percent of susceptible people exposed to someone with the virus. And the evidence hasn’t shown that the coronavirus often infects people over long distances. Or in well-ventilated spaces. The virus spreads most effectively in the immediate vicinity of a contagious person, which is to say that most of the time it looks an awful lot like a textbook droplet-based pathogen.

For most respiratory diseases, not knowing which route caused an infection has not been catastrophic. But the cost has not been zero. Influenza infects millions each year, killing between 300,000 and 650,000 globally. And epidemiologists are predicting the next few years will bring particularly deadly flu seasons. Li hopes that acknowledging this history—and how it hindered an effective global response to Covid-19—will allow good ventilation to emerge as a central pillar of public health policy, a development that would not just hasten the end of this pandemic but beat back future ones.

To get a glimpse into that future, you need only peek into the classrooms where Li teaches or the Crossfit gym where Marr jumps boxes and slams medicine balls. In the earliest days of the pandemic, Li convinced the administrators at the University of Hong Kong to spend most of its Covid-19 budget on upgrading the ventilation in buildings and buses rather than on things such as mass Covid testing of students. Marr reviewed blueprints and HVAC schematics with the owner of her gym, calculating the ventilation rates and consulting on a redesign that moved workout stations outside and near doors that were kept permanently open. To date, no one has caught Covid at the gym. Li’s university, a school of 30,000 students, has recorded a total of 23 Covid-19 cases. Of course Marr’s gym is small, and the university benefited from the fact that Asian countries, scarred by the 2003 SARS epidemic, were quick to recognize aerosol transmission. But Marr's and Li’s swift actions could well have improved their odds. Ultimately, that’s what public health guidelines do: They tilt people and places closer to safety.

ON FRIDAY, APRIL 30, the WHO quietly updated a page on its website. In a section on how the coronavirus gets transmitted, the text now states that the virus can spread via aerosols as well as larger droplets. As Zeynep Tufekci noted in The New York Times, perhaps the biggest news of the pandemic passed with no news conference, no big declaration. If you weren’t paying attention, it was easy to miss.

But Marr was paying attention. She couldn’t help but note the timing. She, Li, and two other aerosol scientists had just published an editorial in The BMJ, a top medical journal, entitled “Covid-19 Has Redefined Airborne Transmission.” For once, she hadn’t had to beg; the journal’s editors came to her. And her team had finally posted their paper on the origins of the 5-micron error to a public preprint server.

In early May, the CDC made similar changes to its Covid-19 guidance, now placing the inhalation of aerosols at the top of its list of how the disease spreads. Again though, no news conference, no press release. But Marr, of course, noticed. That evening, she got in her car to pick up her daughter from gymnastics. She was alone with her thoughts for the first time all day. As she waited at a red light, she suddenly burst into tears. Not sobbing, but unable to stop the hot stream of tears pouring down her face. Tears of exhaustion, and relief, but also triumph. Finally, she thought, they’re getting it right, because of what we’ve done.

The light turned. She wiped the tears away. Someday it would all sink in, but not today. Now, there were kids to pick up and dinner to eat. Something approaching normal life awaited.