Tuesday, December 19, 2023

Another nail in the Climate Alarmists’ coffin?

 Ross McKitrick at Judith Curry's blog.

------------------------------------

Climate attribution method overstates “fingerprints” of external forcing

I have a new paper in the peer-reviewed journal Environmetrics discussing biases in the “optimal fingerprinting” method which climate scientists use to attribute climatic changes to greenhouse gas emissions. This is the third in my series of papers on flaws in standard fingerprinting methods: blog posts on the first two are here and here.

Climatologists use a statistical technique called Total Least Squares (TLS), also called orthogonal regression, in their fingerprinting models to fix a problem in ordinary regression methods that can lead to the influence of external forcings being understated. My new paper argues that in typical fingerprinting settings TLS overcorrects and imparts large upward biases, thus overstating the impact of GHG forcing.

While the topic touches on climatology, for the most part the details involve regression methods which is what empirical economists like me are trained to do. I teach regression in my econometrics courses and I have studied and used it all my career. I mention this because if anyone objects that I’m not a “climate scientist” my response is: you’re right, I’m an economist which is why I’m qualified to talk about this.

I have previously shown that when the optimal fingerprinting regression is misspecified by leaving out explanatory variables that should be in it, TLS is biased upwards (other authors have also proven this theoretically). In that study I noted that when anthropogenic and natural forcings (ANTH and NAT) are negatively correlated the positive TLS bias increases. My new paper focuses just on this issue since, in practice, climate model-generated ANTH and NAT forcing series are negatively correlated. I show that in this case, even if no explanatory variables have been omitted from the regression, TLS estimates of forcing coefficients are usually too large. Among other things, since TLS-estimated coefficients are plugged into carbon budget models, this will result in a carbon budget being biased too small.

Background

In 1999 climatologists Myles Allen and Simon Tett published a paper in Climate Dynamics in which they proposed a Generalized Least Squares or GLS regression model for detecting the effects of forcings on climate. The IPCC immediately embraced the Allen & Tett method and in the 2001 3rd Assessment Report hailed it as the way to show a causal link between greenhouse forcing and observed climate change. It’s been relied upon ever since by the “fingerprinting” community and the IPCC. In 2021 I published a Comment in Climate Dynamics showing that the Allen & Tett method has theoretical flaws and that the arguments supporting its claim to be a valid method were false. I provided a non-technical explainer through the Global Warming Policy Foundation website. Myles Allen made a brief reply, to which I responded and then economist Richard Tol provided further comments. The exchange is at the GWPF website. My comment was published by Climate Dynamics in summer 2021, has been accessed over 21,000 times and its Altmetric score remains in the top 1% of all scientific articles published since that date. Two and a half years later Allen and Tett have yet to submit a reply.

Note: I just saw that a paper by Chinese statisticians Hanyue Chen et al. partially responding to my critique was published by Climate Dynamics. This is weird. In fall 2021 Chen et al submitted the paper to Climate Dynamics and I was asked to provide one of the referee reports, which I did. The paper was rejected. Now it’s been published even though the handling editor confirmed it was rejected. I’ve queried Climate Dynamics to find out what’s going on and they are investigating.

One of the arguments against my critique was that the Allen and Tett paper had been superseded by Allen and Stott 2001. While that paper incorporated the same incorrect theory from Allen and Tett 1999, its refinement was to replace the GLS regression step with TLS as a solution to the problem that the climate model-generated ANTH and NAT “signals” are noisy estimates of the unobservable true signals. In a regression model if your explanatory variables have random errors in them, GLS yields coefficient estimates that tend to be biased low.

This problem is well-known in econometrics. Long before Allen and Stott 2001, econometricians had shown that a method called Instrumental Variables (IV) could remedy it and yield unbiased and consistent coefficient estimates. Allen and Stott didn’t mention IV; instead they proposed TLS and the entire climatology field simply followed their lead. But does TLS solve the problem?

No one has been able to prove that it does except under very restrictive assumptions and you can’t be sure if they hold or not. If they don’t hold, then TLS generates unreliable results, which is why researchers in other fields don’t like it. The problem is that TLS requires more information than the data set contains. This requires the researcher to make arbitrary assumptions to reduce the number of parameters needing to be estimated. The most common assumption is that the error variances are the same on the dependent and explanatory variables alike.

The typical application involves regressing a dependent “Y” variable on a bunch of explanatory “X” variables, and in the errors-in-variables case we assume the latter are unavailable. Instead we observe “W’s”, which are noisy approximations to the X’s. Suppose we assume the variances of the errors on the X’s are all the same and equal to S times the variance of the errors on the Y variable. If that assumption happens to be true with S=1, and we also assume S=1, TLS can in some circumstances yield unbiased coefficients. But in general we don’t know whether S=1, and if it isn’t, TLS can go completely astray.
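To fix ideas, here is the textbook single-regressor version of that setup, in notation of my own choosing rather than the paper’s (the fingerprinting problem has several regressors, so treat this only as orientation):

```latex
y_t = \beta x_t + \varepsilon_t, \qquad
w_t = x_t + u_t, \qquad
\operatorname{Var}(u_t) = S \,\operatorname{Var}(\varepsilon_t),
\qquad \delta \equiv \frac{\operatorname{Var}(\varepsilon_t)}{\operatorname{Var}(u_t)} = \frac{1}{S}.

\hat{\beta}_{\mathrm{TLS}}
  = \frac{\left(s_{yy} - \delta\, s_{ww}\right)
        + \sqrt{\left(s_{yy} - \delta\, s_{ww}\right)^{2} + 4\,\delta\, s_{wy}^{2}}}
         {2\, s_{wy}}
```

Here s_ww, s_yy and s_wy are the sample variances and covariance of the observed w and y, and the second line is the standard Deming (errors-in-both-variables) slope for an assumed error-variance ratio δ. Setting δ = 1, i.e. assuming S = 1, gives ordinary orthogonal/TLS regression; if the true S differs from the assumed one, the correction built into the formula is the wrong size.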

In the limited literature discussing properties of TLS estimators it is usually assumed that the explanatory variables are uncorrelated. As part of my work on the fingerprinting method I obtained a set of model-generated climate signals from CMIP5 models and I noticed that the ANTH and NAT signals are always negatively correlated (the average correlation coefficient is -0.6). I also noticed that the signals don’t have the same variances (which is a separate issue from the error terms not having the same variances).

The experiment

In my new paper I set up an artificial fingerprinting experiment in which I know the correct answer in advance and I can vary several parameters which affect the outcome: the error variance ratio S; the correlation between the W’s; and the relative variances of the X’s. I ran repeated experiments based in turn on the assumption that the true value of beta (the coefficient connecting GHG’s to observed climate change) is 0 or 1. Then I measured the biases that arise when using TLS and GLS (GLS in this case is equivalent to OLS, or ordinary least squares).
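To make the mechanics concrete, here is a minimal Monte Carlo sketch in the same spirit, with my own illustrative parameter choices and a generic SVD-based TLS routine; it is not the paper’s code or its exact experimental design. Two “signals” are drawn with a negative correlation and unequal variances, observed with noise whose variance relative to the y-error is S, and the coefficient on the first signal is then estimated by OLS and by classical TLS, which implicitly behaves as if S = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_draw(beta, rho=-0.6, S=1.0, n=200, sd_x=(1.0, 0.5)):
    # True (unobserved) signals with correlation rho and unequal variances.
    cov = np.array([[sd_x[0] ** 2, rho * sd_x[0] * sd_x[1]],
                    [rho * sd_x[0] * sd_x[1], sd_x[1] ** 2]])
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = X @ beta + rng.normal(0.0, 1.0, n)              # error variance on y = 1
    W = X + rng.normal(0.0, np.sqrt(S), size=X.shape)   # error variance on signals = S
    # OLS of y on the noisy signals (no intercept; everything is mean zero here).
    b_ols = np.linalg.lstsq(W, y, rcond=None)[0]
    # Classical TLS: smallest right singular vector of [W y]. It implicitly treats
    # every column of [W y] as having the same error variance, i.e. it assumes S = 1.
    v = np.linalg.svd(np.column_stack([W, y]))[2][-1]
    b_tls = -v[:2] / v[2]
    return b_ols[0], b_tls[0]

beta = np.array([1.0, 1.0])
for S in (0.25, 1.0, 4.0):
    draws = np.array([one_draw(beta, S=S) for _ in range(2000)])
    print(f"S = {S}: mean OLS b1 = {draws[:, 0].mean():.3f}, "
          f"mean TLS b1 = {draws[:, 1].mean():.3f} (true value 1.0)")
```

Varying S, rho and the relative signal variances in a script like this is enough to see how sensitive the two estimators are to the assumed error-variance ratio.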

These graphs show the coefficient biases using OLS when the experiment is run on simulated X’s with average relative variances (see the paper for versions where the relative variances are lower or higher).

[Figure: OLS coefficient biases]

The left panel is the case when the true value of beta = 0 (which implies no influence of GHGs on climate) and the right is the case when true beta=1 (which implies the GHG influence is “detected” and the climate models are consistent with observations). The lines aren’t the same length because not all parameter combinations are theoretically possible. The horizontal axis measures the correlation between the observed signals, which in the data I’ve seen is always less than -0.2. The vertical axis measures the bias in the fingerprinting coefficient estimate. The colour coding refers to the assumed value of S. Blue is S=0, which is the situation in which the X’s are measured without error so OLS is unbiased, which is why the blue line tracks the horizontal (zero bias) axis. From black to grey corresponds to S rising from 0 to just under 1, and red corresponds to S=1. Yellow and green correspond to S >1.

As you can see, if true beta=0, OLS is unbiased; but if beta = 1 or any other positive value, OLS is biased downward as expected. However, the bias goes to zero as S goes to 0. In practice, you can shrink S by using averages of multiple ensemble runs.

Here are the biases for TLS in the same experiments:

[Figure: TLS coefficient biases]

There are some notable differences. First, the biases are usually large and positive, and they don’t necessarily go away even if S=0 (or S=1). If the true value of beta =1, then there are cases in which the TLS coefficient is unbiased. But how would you know if you are in that situation? You’d need to know what S is, and what the true value of beta is. But of course you don’t (if you did, you wouldn’t need to run the regression!)

What this means is that if an optimal fingerprinting regression yields a large positive coefficient on the ANTH signal this might mean GHG’s affect the climate, or it might mean that they don’t (the true value of beta=0) and TLS is simply biased. The researcher cannot tell which is the case just by looking at the regression results. In the paper I explain some diagnostics that help indicate if TLS can be used, but ultimately relying on TLS requires assuming you are in a situation in which TLS is reliable.

The results are particularly interesting when the true value of beta=0. A fingerprinting, or “signal detection” test starts by assuming beta=0 then constructing a t-statistic using the estimated coefficients. OLS and GLS are fine for this since if beta=0 the coefficient estimates are unbiased. But if beta=0 a t-statistic constructed using the TLS coefficient can be severely biased. The only cases in which TLS is reliably unbiased occur when beta is not zero. But you can’t run a test of beta=0 that depends on the assumption that beta is not zero. Any such test is spurious and meaningless.

Which means that the past 20 years’ worth of “signal detection” claims are likely meaningless unless steps were taken in the original articles to prove the suitability of TLS or to verify its results with another unbiased estimator.

I was unsuccessful in getting this paper published in the two climate science journals to which I submitted it. In both cases the point on which the paper was rejected was a (climatologist) referee insisting S is known in fingerprinting applications and always equals 1/√n, where n is the number of runs in an ensemble mean. But S only takes that value if, for each ensemble member, S is assumed to equal 1. One reviewer conceded the possibility that S might be unknown but pointed out that it’s long been known TLS is unreliable in that case and I haven’t provided a solution to the problem.

In my submission to Environmetrics I provided the referee comments that had led to its rejection in climate journals and explained how I expanded the text to state why it is not appropriate to assume S=1. I also asked that at least one reviewer be a statistician, and as it turned out both were. One of them, after noting that statisticians and econometricians don’t like TLS, added:

“it seems to me that the target audience of the paper are practitioners using TLS quite acritically for climatological applications. How large is this community and how influential are conclusions drawn on the basis of TLS, say in the scientific debate concerning attribution?”

In my reply I did my best to explain its influence on the climatology field. I didn’t add, but could have, that 20 years’ worth of applications of TLS are ultimately what brought 100,000 bigwigs to Dubai for COP28 to demand the phaseout of the world’s best energy sources based on estimates of the role of anthropogenic forcings on the climate that are likely heavily overstated. Based on the political impact and economic consequences of its application, TLS is one of the most influential statistical methodologies in the world, despite experts viewing it as highly unreliable compared to readily available alternatives like IV.

Another reviewer said:

“TLS seems to generate always poor performances compared to the OLS. Nonetheless, TLS seems to be the ‘standard’ in fingerprint applications… why is the TLS so popular in physics-related applications?”

Good question! My guess is because it keeps generating answers that climatologists like and they have no incentive to come to terms with its weaknesses. But you don’t have to step far outside climatology to find genuine bewilderment that people use it instead of IV.

Conclusion

For more than 20 years climate scientists—virtually alone among scientific disciplines—have used TLS to estimate anthropogenic GHG signal coefficients despite its tendency to be unreliable unless some strong assumptions hold that in practice are unlikely to be true. Under conditions which easily arise in optimal fingerprinting, TLS yields estimates with large positive biases. Thus any study that has used TLS for optimal fingerprinting without verifying that it is appropriate in the specific data context has likely overstated the result.

In my paper I discuss how a researcher might go about trying to figure out whether TLS is justified in a specific application, but it’s not always possible. In many cases it would be better to use OLS even though it’s known to be biased downward. The problem is that TLS typically has even bigger biases in the opposite direction and there is no sure way of knowing how bad they are. These biases carry over to the topic of “carbon budgets” which are now being cited by courts in climate litigation including here in Canada. TLS-derived signal coefficients yield systematically underestimated carbon budgets.

The IV estimation method has been known at least since the 1960s to be asymptotically unbiased in the errors-in-variables case, yet climatologists don’t use it. So the predictable next question is why haven’t I done a fingerprinting regression using IV methods? I have, but it will be a while before I get the results written up and in the meantime the technique is widely known so anyone who wants to can try it and see what happens.
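For readers who want the flavour of the IV fix, here is a toy one-signal sketch of the classic errors-in-variables instrument (my own construction for orientation, not a fingerprinting regression): when two independent noisy measurements of the same latent signal are available, say two independently generated ensemble means, the second can instrument for the first because it is correlated with the true signal but not with the first measurement’s error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100_000, 1.0
x  = rng.normal(0.0, 1.0, n)               # latent true signal
y  = beta * x + rng.normal(0.0, 1.0, n)    # observed response
w1 = x + rng.normal(0.0, 1.0, n)           # noisy measurement used as regressor
w2 = x + rng.normal(0.0, 1.0, n)           # independent noisy measurement = instrument

b_ols = (w1 @ y) / (w1 @ w1)   # attenuated toward zero by the measurement error
b_iv  = (w2 @ y) / (w2 @ w1)   # simple IV estimator; consistent for beta
print(f"OLS: {b_ols:.3f}   IV: {b_iv:.3f}   (true beta = {beta})")
```

In this toy setup the OLS estimate converges to about half the true value (the attenuation factor is 1/(1+1)), while the IV estimate converges to the true value.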

Sunday, December 17, 2023

Predicting aortic aneurysm with 98% accuracy

 Here is the link.

Here are some excerpts.

----------------------------------------

Northwestern University researchers have developed the first physics-based metric to predict whether or not a person might someday suffer an aortic aneurysm, a deadly condition that often causes no symptoms until it ruptures.

In the new study, the researchers forecasted abnormal aortic growth by measuring subtle "fluttering" in a patient's blood vessel. As blood flows through the aorta, it can cause the vessel wall to flutter, similar to how a banner ripples in the breeze. While stable flow predicts normal, natural growth, unstable flutter is highly predictive of future abnormal growth and potential rupture, the researchers found.

Called the "flutter instability parameter" (FIP), the new metric predicted future aneurysm with 98% accuracy on average three years after the FIP was first measured. To calculate a personalized FIP, patients only need a single 4D flow magnetic resonance imaging (MRI) scan.

Using the clinically measurable, predictive metric, physicians could prescribe medications to high-risk patients to intervene and potentially prevent the aorta from swelling to a dangerous size.

The research was published this week (Dec. 11) in the journal Nature Biomedical Engineering.

"Aortic aneurysms are colloquially referred to as 'silent killers' because they often go undetected until catastrophic dissection or rupture occurs," said Northwestern's Neelesh A. Patankar, senior author of the study. "The fundamental physics driving aneurysms has been unknown. As a result, there is no clinically approved protocol to predict them. Now, we have demonstrated the efficacy of a physics-based metric that helps predict future growth. This could be transformational in predicting cardiac pathologies."

An expert on fluid dynamics, Patankar is a professor of mechanical engineering at Northwestern's McCormick School of Engineering. He co-led the study with Dr. Tom Zhao, who specializes in first principles biomechanics.

Growing danger

An aortic aneurysm occurs when the aorta (the largest artery in the human body) swells to greater than 1.5 times its original size. As it grows, the aorta's wall weakens. Eventually, the wall becomes so weak that it can no longer withstand the pressure of blood flowing through it, causing the aorta to rupture. Although rare, an aortic rupture is usually unpredictable and almost always fatal.

Several prominent people have died from aortic aneurysm, including Grant Wahl, a sports journalist who died suddenly one year ago at the 2022 FIFA World Cup. Other celebrity deaths include John Ritter, Lucille Ball and Albert Einstein.

"Most people don't realize they have an aneurysm unless it is accidentally detected when they receive a scan for an unrelated issue," Patankar said. "If physicians detect it, they can suggest lifestyle changes or prescribe medication to lower blood pressure, heart rate and cholesterol. If it goes undetected, it can rupture, which is an immediate catastrophic event."

"If it ruptures when the person is outside of a hospital, the death rate is close to 100%," Zhao added. "The blood supply to the body stops, so critical organs like the brain can no longer function."
Removing the guesswork

For current standard of care, physicians estimate chance of rupture based on risk factors (such as age or smoking history) and the size of the aorta. To monitor a growing aorta, physicians track it with regular imaging scans. If the aorta starts to grow too quickly or become too large, then a patient often will undergo a surgical graft to reinforce the vessel wall, an invasive procedure that carries its own risks.

"Our collective lack of understanding makes it hard to monitor aneurysm progression," Zhao said. "Doctors need to regularly track the size of an aneurysm by imaging its location every one to five years depending on how fasts it grew previously and whether the patient has any associated diseases. Over this 'wait and see' period, an aneurysm can fatally burst."

To remove the guesswork from predicting future aneurysms, Patankar, Zhao and their collaborators sought to capture the fundamental physics underlying the problem. In extensive mathematical work and analyses, they discovered that problems arise when the fluttering vessel wall transitions from stable to unstable. This instability either causes or signals an aneurysm.

"Fluttering is a mechanical signature of future growth," Patankar said.

John Cochrane on the theory of regulation

 Here is the link to his blog.

JC is on target.

Freedom is waning fast.

Here are some excerpts.

--------------------------------------------

What's the basic story of economic regulation?

Econ 101 courses repeat the benevolent dictator theory of regulation: There is a "market failure," natural monopoly, externality, or asymmetric information. Benevolent regulators craft optimal restrictions to restore market order. In political life "consumer protection" is often cited, though it doesn't fit that economic structure.

Then "Chicago school" scholars such as George Stigler looked at how regulations actually operated. They found "regulatory capture." Businesses get cozy with regulators, and bit by bit regulations end up largely keeping competition down and prices up to benefit existing businesses.

We are, I think, seeing round three, and an opportunity for a fundamentally new basic view of how regulation operates today.

The latest news item to prod this thought is FCC Commissioner Brendan Carr's scathing dissent on the FCC's decision to cancel an $885 million contract to Starlink. Via twitter/X:



Quoting from the dissent itself (my emphasis):

Last year, after Elon Musk acquired Twitter and used it to voice his own political and ideological views without a filter, President Biden gave federal agencies a greenlight to go after him. During a press conference at the White House, President Biden stood at a podium adorned with the official seal of the President of the United States, and expressed his view that Elon Musk “is worth being looked at.”1 When pressed by a reporter to explain how the government would look into Elon Musk, President Biden remarked: “There’s a lot of ways.”2 There certainly are. The Department of Justice, the Federal Aviation Administration, the Federal Trade Commission, the National Labor Relations Board, the U.S. Attorney for the Southern District of New York, and the U.S. Fish and Wildlife Service have all initiated investigations into Elon Musk or his businesses.

Today, the Federal Communications Commission adds itself to the growing list of administrative agencies that are taking action against Elon Musk’s businesses. I am not the first to notice a pattern here. Two months ago, The Wall Street Journal editorial board wrote that “the volume of government investigations into his businesses makes us wonder if the Biden Administration is targeting him for regulatory harassment.”3 After all, the editorial board added, Elon Musk has become “Progressive Enemy No. 1.” Today’s decision certainly fits the Biden Administration’s pattern of regulatory harassment. Indeed, the Commission’s decision today to revoke a 2020 award of $885 million to Elon Musk’s Starlink—an award that Starlink secured after agreeing to provide high-speed Internet service to over 640,000 rural homes and businesses across 35 states—is a decision that cannot be explained by any objective application of law, facts, or policy.

When the Biden administration launches an "all of government" initiative, they mean all of government.

A tweeter queries



Show me the man, and I'll find the crime. Three felonies a day.

In the same vein, what I found most interesting in the twitter files and the scathing Missouri v. Biden decision was the question: just how did the government force tech companies to censor the government's political opponents? "Nice business you have there. It would be a shame if the alphabet soup agencies had to look into it."

This doesn't fit either the econ 101, benevolent nanny, or regulatory capture view. Fundamentally, regulators have captured the industry, not the other way around. They hold arbitrary discretionary power to impose huge costs or just shut down companies. They use this power to elicit political support from the companies. There is a bit of old Chicago school capture in the deal. Companies get protected markets. But the regulators now don't just want a few three martini lunches and a cozy revolving door to "consultant" jobs. They demand political support. The regulators are more political ideologues than gently corruptible insiders.

Sometimes regulators seem to attack businesses just for fun, like suing a moving company for age discrimination. But maybe here too they are showing everyone what they can do, or scoring some ideological points so people get the message.

The increasing arbitrariness of regulation is part of the process. I find myself nostalgic for the good old days of the Administrative Procedure Act, public comment, cost-benefit analysis, and formal rule making. Now regulators just write letters or take legal action, which even if unsuccessful can bankrupt a company. Using administrative courts, the regulators are prosecutor, judge, jury, and executioner all rolled into one.

Unrelated. $885 million / 640,000 ≈ $1,383. The federal government apparently thinks it's worthwhile for taxpayers to pay $1,383 to give rural households access to satellite internet. If anyone asked, "would you rather $x in cash or a starlink account?" (which, I think, they also have to pay for) I wonder if x would be much more than $50.

Perspective on the Israeli – Hamas situation

 


Tuesday, December 12, 2023

Fast Shooting

Dave Anderson at the American Handgunner.

Here is the link.

I Do Care

 Gustavo L. Franklin, MD, PhD at JAMA Neurology

I always strive to give attention to the patients I work with and seek to be kind and polite, even on the most difficult days for me. Like everyone, there are good days and bad days. I try to balance this with my family, and I always try not to get too involved, despite doing my best in my profession. However, today, in the last few minutes of a routine consultation, the husband of a patient with Alzheimer disease said to me that nobody cares. He said he tries not to care too, but he does care.

It startled me for a moment before he continued to say that each morning, he wakes up and doesn’t see his wife; he sees a child with a frightened look, a mere shadow of the person she once was, the one he loved, the one who raised their children. He gets up, speaks to her, but he is barely heard. Sometimes, he is not even remembered as she confuses him with someone else. When he feeds her, she dislikes the things she used to like. When he tells a joke, she no longer laughs. He shares all this with people, but nobody seems to care. When she does something wrong, he guides her; when she does something right, he praises her, but she seems not to care. He tells his family, and they say it’s the disease, there’s nothing to be done. It seems to him they don’t care either. He tells the physicians, and they say there’s no cure. He guesses they don’t care. The days go by, and they don’t get easier; they get harder. Much harder. He tries not to care, but he does care. He says she gave so much of herself to their family; she was the heart and soul. Today, the family doesn’t see her, doesn’t hear her, doesn’t feel her. Every night before going to sleep, he would kiss her and ask if she preferred the bathroom light on. He did that for 45 years, even knowing the answer, he asked every night. Tonight, when he goes to bed, whether he leaves it on or turns it off, it hardly matters. Sometimes he has moments of happiness when she remembers something or speaks as she used to, as if she were still his love. He confesses, it takes so little now to bring him joy. Other days, he says, he surrenders to despair. He says he tries not to care, but he does care. The other day, someone said to him that she will soon be gone. He spent a long two minutes trying to understand if that would be good or bad. Then they continued, saying that God knows what He’s doing. He says he tried to think of God, but he thinks He doesn’t care either....

When that gentle old man with a white beard finished pouring out his heart, I saw the significance of the physician facing the patient. The helplessness of patients and relatives in the face of insufficient medicine. Comfort may not be enough. From the height of the white coat’s sanctum and behind the marble of the desk that separated us, I could only remember Hippocrates’ phrase: “To cure when possible; to alleviate when necessary; to console always.” Then, I interrupted him: I care.

He cried. I cried.

Once again, I was reminded that we treat people, not diseases.

I confess. Sometimes, I try not to care.

But I do care.

Tuesday, December 05, 2023

FDA warns of rare but serious drug reaction to the antiseizure medicines levetiracetam (Keppra, Keppra XR, Elepsia XR, Spritam) and clobazam (Onfi, Sympazan)

FDA warns of rare but serious drug reaction to the antiseizure medicines levetiracetam (Keppra, Keppra XR, Elepsia XR, Spritam) and clobazam (Onfi, Sympazan) | FDA

The U.S. Food and Drug Administration (FDA) is warning that the antiseizure medicines levetiracetam (Keppra, Keppra XR, Elepsia XR, Spritam) and clobazam (Onfi, Sympazan) can cause a rare but serious reaction that can be life-threatening if not diagnosed and treated quickly. This reaction is called Drug Reaction with Eosinophilia and Systemic Symptoms (DRESS). It may start as a rash but can quickly progress, resulting in injury to internal organs, the need for hospitalization, and even death. As a result, we are requiring warnings about this risk to be added to the prescribing information and patient Medication Guides for these medicines.

This hypersensitivity reaction to these medicines is serious but rare. DRESS can include fever, rash, swollen lymph nodes, or injury to organs including the liver, kidneys, lungs, heart, or pancreas.

Monday, December 04, 2023

The Guardians of Democracy

 From Jonathan Turley.

JT is on target.

Democracy is under attack - most effectively by those who are claiming they are trying to protect it.

The intellectual dishonesty and just plain dishonesty in pursuit of power is astonishing. That so many voters are on board with it is even more astonishing (not really - what do you expect from the educational system).

Both Democrats and Republicans fight for power. But, currently, it is the Democrats who appear to be better at it and more willing to damage our freedom to obtain it - hence they are more dangerous.

-----------------------------------

The Guardians of Democracy: Democrats Move to Protect Democracy from Itself

Below is my column in the Hill on efforts to bar or limit voting in the primary and general presidential elections. What is so striking is how these distinctly anti-democratic actions are being taken in the name of democracy.

Here is the column:

Across news sites, Democrats are warning of the imminent death of democracy. Hillary Clinton has warned that a Trump victory would be the end of democracy. MSNBC’s Rachel Maddow is warning of “executions.” Even actors like Robert DeNiro are predicting that this may be our very last democratic election.

Yet these harbingers of tyranny are increasingly pursuing the very course that will make their predictions come true. The Democratic Party is actively seeking to deny voters choices in this election, supposedly to save democracy.

Henry Ford once promised customers any color so long as it is black. Democrats are adopting the same approach to the election: You can have any candidate on the ballot, as long as it’s Joe Biden.

This week, the Executive Committee of the Florida Democratic Party told voters that they would not be allowed to vote against Biden. Even though he has opponents in the primary, the party leadership has ordered that only Biden will appear on the primary ballot.

And if you want to register your discontent with Biden with a write-in vote, forget about it. Under Florida law, if the party approves only one name, there will be no primary ballots at all. The party just called the election for Biden before a single vote has been cast.

This is not unprecedented. It happened with Barack Obama in 2012 and, on the Republican side, with George W. Bush in 2004. It was wrong then, and it is wrong now.

As Democratic presidential candidate Rep. Dean Phillips (D-Minn.) noted, “Americans would expect the absence of democracy in Tehran, not Tallahassee. Our mission as Democrats is to defeat authoritarians, not become them.”

In Iran, the mullahs routinely bar opposition candidates from ballots as “Guardians” of the ballots.

There is good reason for the Biden White House to want the election called before it is held. A CNN poll found that two out of three Democrats believe that the party should nominate someone else. A Wall Street Journal poll found that 73 percent of voters say Biden is “too old to run for president.”

The party leadership is solving that problem by depriving Democratic voters of a choice.

In other states, Democratic politicians and lawyers are pursuing a different strategy: “You can have any candidate, as long as it isn’t Trump.”

They are seeking to bar Trump from ballots under a novel theory about the 14th Amendment. In states from Colorado to Michigan, Democratic operatives are arguing that Trump must be taken off the ballots because he gave “aid and comfort” to an “insurrection or rebellion.” Other Democrats have called for more than 120 other Republicans to be stripped from the ballots under the same claim tied to the Jan. 6 Capitol riot.

This effort is being supported by academics such as Laurence Tribe, who previously called for Trump to be charged with the attempted murder of former Vice President Mike Pence.

In a recent filing supporting this effort, figures as prominent as media lawyer Floyd Abrams and Berkeley Dean Erwin Chemerinsky have told the Colorado Supreme Court that preventing voters from being able to cast their votes for Trump is just a way of “fostering democracy.” So long as courts believe that a candidate’s speech is “capable of triggering disqualification,” that speech is unprotected in their view.

I have long criticized this theory as legally and historically unfounded. It is also an extremely dangerous theory that would allow majorities in different states to ban opposing candidates in tit-for-tat actions.

So far, these efforts around the country have met with defeat in court after court, but the effort continues, and with the support of many in the media.

Some national polls show Trump as the most popular candidate for the 2024 election, while a few show Biden slightly ahead. Yet, despite 74 million voters supporting Trump in the last election, these Democrats are insisting that voters should not be allowed to vote for him, in the name of democracy.

In fairness to Democratic partisans like Clinton and Maddow, they could well be right. The 2024 election could well prove the end to democracy — if these efforts succeeded in purging ballots of opposing candidates.

It is all part of an electoral variation on the Vietnam War claim that it is sometimes necessary to destroy a village in order to save it.

Democrats claim to be right and to have the best of motivations, which is why they feel justified in saving democracy by denying it to the voters. After all, it is all about motivation where any means are justified. They are trying to save democracy by limiting it.

Thus, it is an assault on democracy for Republican lawyers to challenge elections based on alleged problems with voting machines, but it is protecting democracy for former Clinton general counsel (and founder of the “Democracy Docket”) Marc Elias to claim that a machine could flip the results in favor of the GOP.

In Tehran, a popular joke emerged after the “Guardian Council” approved only one candidate, Chief Justice Ebrahim Raisi, to appear on a ballot. Democracy, the joke went, was safe, because the Guardians would allow Raisi to run against six other spellings of his own name.

The American election guardians in Florida did one better. They have arranged for there to be no ballot at all. Who needs the pretense of a primary when you can simply dictate the result?

Yet, rest assured, you may be able to cast a vote for an approved slate of candidates of healthy choices. Consider it a type of “Big Gulp” election, where you are protected against your own bad choices like a sugary drink at 7-11.

Actor Seth Rogen has pledged to “vote for whoever is the Democrat. That’s all I need to know.” If these efforts are successful, many voters could be left with that single liberating choice.