Saturday, September 30, 2023

Who Should - and Can - Get Lecanemab, the New Alzheimer Disease Drug?

 From JAMA.

Here is the link.

Here are some excerpts.

----------------------------------

Aducanumab. Crenezumab. Donanemab. Gantenerumab. Lecanemab. Solanezumab.

Those multisyllable drug names may not trip off the tongue of even the most articulate, but they’re all treatments developed to slow down a disease that eventually robs people of the simplest of words.

With varying degrees of success, these monoclonal antibodies, or mAbs for short, bind to and remove a protein from the brain called amyloid-β, a hallmark of Alzheimer disease (AD). Researchers have focused on amyloid-β for 3 decades, ever since 2 UK scientists proposed that deposition of the protein in the brain was “the central event in the aetiology of Alzheimer’s disease”—what has come to be known as the amyloid cascade hypothesis. Although these mAbs clear amyloid-β, for the most part they have failed to slow the progression of cognitive decline in people with AD.

In July, however, lecanemab (marketed as Leqembi) became the first anti–amyloid-β mAb ever to receive traditional approval from the US Food and Drug Administration (FDA). Earlier this year, the FDA had granted lecanemab accelerated approval. That regulatory pathway is reserved for drugs to treat serious conditions for which there is an unmet medical need. Such drugs must affect a surrogate end point—in this case brain amyloid levels—that is “reasonably likely” to benefit patients clinically.

Lecanemab earned traditional approval—winning it broader Medicare coverage—because a confirmatory study required as a condition of accelerated approval verified its clinical benefit, according to the FDA.

Less than 2 weeks after lecanemab’s traditional approval, researchers reported in JAMA that donanemab significantly slowed clinical progression in patients with early Alzheimer disease after nearly 18 months of follow-up, and manufacturer Eli Lilly announced it had filed for traditional approval of the drug and expected a decision from the FDA by year’s end.

Medicare is expected to pay for the bulk of treatment with anti–amyloid-β mAbs in the US. For drugs in that class that receive accelerated approval, the agency will cover costs only for beneficiaries in randomized trials. Traditional approval means Medicare will pay for treatment outside clinical trials as long as clinicians submit information about their patients to a registry designed to collect information about real-world use.

Still, lecanemab treatment is expensive, with an annual wholesale acquisition cost of $26,500 for average-weight patients for the drug alone; for some patients, drug co-pays could amount to roughly $14.50 per day. It’s also cumbersome, requiring every-other-week infusions plus multiple magnetic resonance imaging (MRI) scans, and carries the risk of a life-threatening event. And it’s far from a cure, with some skeptics questioning whether patients and their families will notice any treatment benefit. All these factors could limit its uptake.

-----

Mayo Clinic neurologist David Knopman, MD, was one of 3 members of the FDA advisory committee that reviewed aducanumab who resigned after the agency approved it. (He had recused himself from that particular meeting because he had served as site principal investigator for a trial of the drug.)

Knopman and his colleagues are preparing to administer lecanemab, but he’s not sure how much patients stand to gain from it.

In a phase 3 trial, “it delayed progression at 18 months by about 5 months,” Knopman noted in an interview. “Is that clinically meaningful? I don’t know. What really counts is where you are at 36 months. Is it still a 5-month delay? That’s trivial.” On the other hand, he said, if the difference between treated patients and untreated patients continues to increase as time passes, “it’s a win.”

Questions about long-term clinical benefit aren’t the only ones that make him anxious, Knopman said. Post hoc subgroup analyses of clinical trial data, reported in a supplement, found that lecanemab didn’t seem to provide much benefit to patients younger than 65 years or to women, he explained.

Women are twice as likely as men to develop Alzheimer disease, and the authors of a recent Viewpoint in JAMA Neurology expressed disappointment that phase 3 trials of lecanemab and aducanumab did not expand sex-disaggregated analyses in the main reporting of results. The supplement for the lecanemab trial publication “revealed noteworthy sex differences,” the Viewpoint authors wrote. Although the trial found that, overall, lecanemab delayed progression by 27%, the difference between the treated and placebo groups was 43% in men and only 12% in women. Similar discrepancies were seen in the aducanumab trials, the Viewpoint authors pointed out.

Thursday, September 28, 2023

Elected Office as a Family Business

 From the Government Accountability Institute.

Here is the link.

Here is the first item.

---------------------------------------------------

The Jim Clyburn Endorsements

Political endorsements are used during elections to transfer the political popularity of one politician to another. In high-stakes elections involving the presidency, regional endorsements are often sought by candidates to increase name recognition and convey acceptance in a state ahead of a primary election.

On June 2, 2008, South Carolina Rep. James Clyburn, the third-ranking Democrat in the US House of Representatives, announced he would endorse Barack Obama for president. Obama won the South Carolina primary, defeated Hillary Clinton to win the Democratic nomination, and then defeated Republican Senator John McCain to win the presidency.

Less than one year after the Clyburn endorsement, on April 29, 2009, the Wall Street Journal reported that President Obama had nominated Mignon Clyburn, the daughter of Jim Clyburn, to an open seat on the powerful Federal Communications Commission (FCC).

At the time the job paid approximately $150,000 a year, but more importantly, it provided Mignon Clyburn access to some of the most powerful industry representatives during a transformative time in telecommunications.

Clyburn was reappointed by President Obama for another term in 2013, before finally leaving the FCC in 2018. However, eight months after leaving the FCC, Mignon was hired by the telecommunications company T-Mobile to offer advice about its merger with Sprint. The FCC eventually approved the merger, which was completed in 2020.

During her time on the FCC, 2009-2018, political contributions to her father’s campaign accounts from the communications industry increased relative to other industries. Another Clyburn presidential endorsement came ahead of the 2020 presidential primary and was also followed by another presidential appointment involving the Clyburn family.

On February 26, 2020, Representative Clyburn endorsed Joe Biden ahead of the South Carolina primary. Biden won the South Carolina primary, the Democratic nomination, and the presidency.

On November 16, 2020, Mignon Clyburn was appointed to President Biden’s FCC Review Team, which was tasked with assisting the transition for the new administration.

This appointment was followed by the introduction of communications-related legislation by her father. On March 11, 2021, it was announced that US Senator Amy Klobuchar (D-MN), co-chair of the Senate Broadband Caucus, and House Majority Whip James E. Clyburn (D-SC) introduced comprehensive broadband infrastructure legislation to expand access to affordable high-speed internet for all Americans. The legislation would invest over $94 billion to build high-speed broadband infrastructure in underserved communities. According to reports, the legislation would benefit one of Mignon Clyburn’s former clients, T-Mobile.

Which came first – the chicken (CO2) or the egg (temperature change)? Looks like it might be the egg

 From Judith Curry's blog.

Here is the link.

Here is the text.

---------------------------------------

On the chicken-and-egg problem of CO2 and temperature.

Bare facts vs. mechanism

A car is travelling at 80 km/h, and a ray of light is travelling parallel to the car, in the same direction. Its speed relative to the Earth is 300,000 km/s. What is its speed relative to the car? Today we know that the answer “300,000 km/s minus 80 km/h” is wrong. But in 1887, people thought that it was self-evident and undisputable—after all, it’s basic logic and simple arithmetic. At that time, physicists Michelson and Morley had devised a method with sufficient accuracy to measure the small differences in the speed of light, and in an effort to discover details about its movement, they conducted one of the most famous experiments in the history of science. The results were baffling. The speed of light was constant in all directions—the direction of the Earth’s movement, the opposite direction, and the perpendicular direction. There was no explanation for that—it defied all logic.

However, we have to look at the bare facts, regardless of how impossible they seem. Michelson and Morley did not feel compelled to provide an alternative theory of light, or of anything. They concluded that their results “refute Fresnel’s explanation of aberration” and that Lorentz’s theory “also fails.” Had they written “we have no idea what’s going on” it would have been the same. Making their negative results public opened the road to further research. It was a long road, and it took almost twenty years of work by distinguished scientists before arriving at the theory of relativity.

It goes without saying that this is hardly the first or the last mystery in the history of science. One that is still unsolved is the changing mass of the International Prototype of the Kilogram. Until a few years ago, the kilogram was defined as the mass of a platinum-iridium object stored in the International Bureau of Weights and Measures in Paris. It has been found that its mass changes over time by something like 0.000005% per century, and no-one knows why exactly. That no-one knows the mechanism does not alter the fact that the mass does change.

How a clear case of causality can become a noisy mess

Imagine a beach being hit by small waves. Once in a while, a series of noticeably larger waves arrive. There’s a port 10 km further, and ships are departing from it. We might notice that the departures of the ships are correlated to the instances of larger waves, and suspect that there could be a causal relationship.

In reality, in this case we understand the mechanism through which the ships cause the waves; but if we assume we don’t, here is how we might try to investigate: we might draw a chart like the following, where the horizontal axis is time, the orange line shows ship departures (the vertical axis showing the size of the ship) and the blue line shows sea level. If every departure was reliably followed by a temporary increase in wave height, we could conclude that the departures of the ships potentially cause the increase in wave height, especially if we noticed that the size of the ship is correlated to the size of the increase in wave height.

[Figure: ship departures (orange, height indicating ship size) and sea level (blue) plotted against time]

We say “potentially” because we can never be certain about causation. It could be that the departures and the waves both have a common cause. Even if someone was shot in the head, we can’t be certain it was the bullet that killed him—he might have suffered a stroke just before the bullet entered his brain (Agatha Christie’s Poirot has resolved several mysteries of similar type). So we can hardly be 100% certain that X causes Y. One thing is clear, however: the waves do not cause the ships to depart. The reason is that first the ship departs and later the waves hit the beach. The effect cannot precede the cause.

Even in this simple case where there’s an impulse (the departing ship) followed by a response, things can quickly get complicated. Ships could be going in many different directions, and the response would not always appear in an equal time interval after the impulse. For some impulses the response could be totally absent (e.g. for ships that depart in a direction away from the beach). The interval between departures could be smaller than the time it takes for the response to arrive, and the intertwining of impulses and responses could be confusing. Sometimes responses might appear out of the blue, without impulse (for example, there could be arriving ships that cause that, which we might not have taken into account). It might not be as easy to distinguish the wave response from the other waves if the sea is rough. Add all these factors together, and the blue line could be a big noisy mess.
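
To make the toy example concrete, here is a minimal simulation sketch (my own illustration, not from the post; the series length, the 30-step delay, and the noise level are arbitrary assumptions). It generates random ship departures, adds a delayed wave response buried in background chop, and checks at which lag the two series correlate best; the peak at a positive lag reflects the temporal precedence of the departures.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000                                                  # time steps
    departures = np.zeros(n)
    dep_times = rng.choice(n - 100, size=40, replace=False)
    departures[dep_times] = rng.uniform(1.0, 5.0, size=40)    # impulse height ~ ship size

    lag_true = 30                                             # waves reach the beach 30 steps later
    sea_level = 0.5 * rng.normal(size=n)                      # background chop (the "noisy mess")
    sea_level[lag_true:] += departures[:-lag_true]            # delayed, size-dependent response

    # Lagged cross-correlation: correlate departures[t] with sea_level[t + k].
    def xcorr_at(a, b, k):
        m = len(a)
        return np.corrcoef(a[max(0, -k): m - max(0, k)],
                           b[max(0, k): m - max(0, -k)])[0, 1]

    lags = range(-60, 61)
    best = max(lags, key=lambda k: xcorr_at(departures, sea_level, k))
    print(best)   # ~30: the response follows the impulse, never the other way round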

And in a real world example, like in the question of whether CO₂ concentration affects the temperature, both lines can be a big noisy mess.

Investigating potential causes

So here is the question: given two processes, how can we determine if one is a potential cause of the other? We deal with this question in two papers we published last year in the Proceedings of the Royal Society A (PRSA): Revisiting causality using stochastics: 1. Theory (preprint); 2. Applications (preprint). We reviewed existing theories of causation, notably probabilistic theories, and found that all of them have considerable limitations.

For example, Granger’s theory and the associated statistical test have long been known to identify correlation (useful for making predictions), not causation, despite the popular term “Granger causality”. What is more, they ignore the fact that processes exhibit dependence in time. Hence, formally testing hypotheses in geophysics with such tests can be inaccurate by orders of magnitude because of that dependence.
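
As an illustration of the kind of test being criticized here (a standard Granger test as commonly run, not the authors’ method), the sketch below applies the statsmodels implementation to two independent random walks; the series and the lag choice are arbitrary assumptions. Because the series are strongly autocorrelated (integrated), the nominal p-values cannot be taken at face value, which is the dependence-induced inaccuracy referred to above.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    x = np.cumsum(rng.normal(size=400))      # independent random walk
    y = np.cumsum(rng.normal(size=400))      # another, unrelated random walk

    # Column order matters: the test asks whether the 2nd column helps predict the 1st.
    data = np.column_stack([y, x])
    results = grangercausalitytests(data, maxlag=4, verbose=False)
    for lag, res in results.items():
        fstat, pval = res[0]["ssr_ftest"][:2]
        print(f"lag {lag}: F = {fstat:.2f}, p = {pval:.3f}")
    # With integrated series like these, small p-values can appear even though x and y
    # are unrelated; differencing the series first (np.diff) is one common remedy.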

As another example, Pearl’s theories make use of causal graphs, in which the possible direction of causation is assumed to be known a priori. This implies that we already have a way of identifying causes. Moreover, insofar as those theories assume, in their use of the chain rule for conditional probabilities, that the causality links in the causal graphs are of Markovian type, their application to complex systems is problematic.

Another misconception in some earlier studies is the aspiration that by using a statistical concept other than the correlation coefficient (e.g. a measure of information) we can detect genuine causality.

Having identified the weaknesses in existing theories and methodologies, we proceeded to develop a new method to study the question whether process X is a potential cause of process Y, or the other way round. This has several key characteristics which distinguish it from existing methods.

  • Our framework is for open systems (in particular, geophysical systems), in which:
    • External influences cannot be controlled or excluded.
    • Only a single realization is possible—repeatability of a geophysical process is infeasible.
  • Our framework is not formulated on the basis of events, but of stochastic processes. In these:
    • Time runs continuously. It is not a sequence of discrete time instances.
    • There is dependence in time.
  • It is understood that only necessary conditions of causality can be investigated using stochastics (or other computational tools and theories)—not sufficient ones. The usefulness of this, less ambitious, objective of seeking necessary conditions lies in their ability:
    • To falsify an assumed causality.
    • To add statistical evidence, in an inductive context, for potential causality and its direction.

The only “hard” requirement kept from previous studies is the temporal precedence of the cause over the effect. Sometimes causation goes both ways; for example, hens lay eggs and eggs hatch into hens (it was Plutarch who first used the metaphor of the hen and the egg for this problem). Conveniently, we call such systems “potentially hen-or-egg causal”. Our method identifies these as well and determines in such cases which of the two directions is dominant.

To deal with dependence in time, often manifested in high autocorrelation of the processes, we proposed differencing the time series, which substantially decreases the autocorrelation. In other words, instead of investigating the processes X and Y and finding spurious results (as has been the case in several earlier studies), we study their changes in time, ΔX and ΔY.
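
A minimal sketch of the differencing idea (my own toy version, not the published algorithm; the synthetic series, the 6-step delay, and the coefficients are placeholders standing in for real data): differencing removes most of the autocorrelation, and the sign of the lag at which the differenced series correlate best indicates the potential direction of causality.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 720                                     # e.g. 60 years of monthly values (synthetic)
    temp = np.cumsum(rng.normal(size=n))        # placeholder "temperature" series
    d_temp = np.diff(temp)                      # ΔT
    d_co2 = np.zeros(n - 1)
    d_co2[6:] = 0.4 * d_temp[:-6]               # ΔCO2 responds to ΔT six steps later (by construction)
    d_co2 += 0.2 * rng.normal(size=n - 1)
    co2 = 350.0 + np.concatenate(([0.0], np.cumsum(d_co2)))   # placeholder "CO2" series

    def lag1_autocorr(x):
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    print(lag1_autocorr(temp), lag1_autocorr(d_temp))   # close to 1 vs. close to 0
    print(lag1_autocorr(co2), lag1_autocorr(d_co2))     # differencing slashes the autocorrelation

    def xcorr_at(a, b, k):                      # correlation of a[t] with b[t + k]
        m = len(a)
        return np.corrcoef(a[max(0, -k): m - max(0, k)],
                           b[max(0, k): m - max(0, -k)])[0, 1]

    best = max(range(-24, 25), key=lambda k: abs(xcorr_at(d_temp, d_co2, k)))
    print(best)   # a positive lag: ΔT leads ΔCO2 in this synthetic setup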

A final prominent characteristic of our method is its simplicity. It uses the data per se, rather than involved transformations thereof such as the cross- and auto-correlation functions or their Fourier transforms (the power spectra and cross-spectra). The results are thus more reliable and easier to interpret.

Atmospheric temperature and CO₂ concentration

In our PRSA papers we implemented our method in several case studies, such as rainfall-runoff and El Niño-temperature. One of the case studies was CO₂ concentration and temperature, and this one gave strong indications that temperature is potentially the cause and CO₂ the effect, while the opposite causality direction can be excluded as violating the necessary condition of time precedence.

However, the scope of these two papers was to formulate a general methodology for the detection of causality rather than to study a specific system in detail, and the case studies were brief. With regard to the relationship between temperature and CO₂ concentration, we hadn’t gone into details as to the effect of seasonality and time scale, or the exploration of many sources of data. So in our latest paper, published a week ago in Sci (“On hens, eggs, temperatures and CO2: Causal links in Earth’s atmosphere”), we studied the issue in detail. We used CO₂ data from Mauna Loa and from the South Pole, and temperature data from various sources (our published results are for the NCAR/NCEP reanalysis, but in the previous papers we used satellite data too). We used both historical data and the outputs of climatic models. We examined time scales ranging from months to decades.

The results are clear: changes in CO₂ concentration cannot be a cause of temperature changes. On the contrary, temperature change is a potential cause of CO₂ change on all time scales. As we conclude in the paper, “All evidence resulting from the analyses of the longest available modern time series of atmospheric concentration of [CO₂] at Mauna Loa, Hawaii, along with that of globally averaged T, suggests a unidirectional, potentially causal link with T as the cause and [CO₂] as the effect. This direction of causality holds for the entire period covered by the observations (more than 60 years).”

The math is a bit too complicated to present here. However, all three papers have been reviewed extensively by referees and editors (note that four editors were involved in the last paper, as seen on its front page). The results in the earlier papers were criticized, formally in a commentary in the same journal and informally in blogs and social media. Some concerns expressed by critics, such as those about the lengths of the time series and the effects of seasonality and timescale, are dealt with in this new paper. No one, however, has developed any critique of the methodology.

In addition, the following graphic (taken from the graphical abstract of the paper and inserted here as a quiz) aims to make things even clearer. In this we plot the time series on the annual scale to avoid too many points. Hopefully even the annual scale of this graph (in contrast to the monthly scale we used in our detailed results) suffices to suggest that there is very little doubt as to the potential causality direction.

[Figure: the paper’s graphical abstract, with the temperature and CO₂ time series plotted at the annual scale]

Do climate models faithfully represent the causality direction found in the real-world data? This question is also investigated in our new paper. The reply is clearly negative: the models suggest a causality direction opposite to the one found when the real measurements are used. Thus, our methodology defines a type of data analysis that, regardless of the claims we infer about the detection of causality per se, assesses modelling performance by comparing observational data with model results. In this, it contributes to the study of an epistemological problem and, in particular, it casts doubt on the widespread claims that “in silico experimentation” with climate models is the only option we have and that this can be justified by the (insufficiently validated) assumption of an “increasing realism of climate system models”.

One might think that the potential causality direction we determined is counterintuitive in light of the well-known greenhouse effect, and that the effect of temperature on CO₂ concentration would be subtle. But no, it is quite pronounced. In fact, human emissions are only 4% of the total; natural emissions dominate, and the increase in the latter because of temperature rise is more than three times the human emissions. This is visible in a graph we included in an Appendix to the paper.


Figure A1 from Koutsoyiannis et al. (2023): Annual carbon balance in the Earth’s atmosphere in Gt C/year, based on the IPCC estimates. The balance of 5.1 Gt C/year is the annual accumulation of carbon (in the form of CO2) in the atmosphere.

Of course, several questions remain. Why does the temperature increase? And why does the temperature rise potentially cause an increase in CO₂ concentration? Is the temperature change a real cause of the CO₂ concentration change, or could they both be the result of some further causal factor? It’s not hard to speculate. Yet we briefly and quantitatively investigate possible mechanisms for these causal relationships in the appendices to the paper. However, if we stick to the facts, two things are clear: (i) changes in CO₂ concentration have not been warming the planet; (ii) climate models do not reflect what the observational data tell us on this issue.

JC comment: I find this analysis to be very interesting. The global carbon cycle is definitely “unsettled science.” I think what this paper shows is that CO2 is an internal feedback in the climate system, not a forcing (I think that Granger causality would reveal this?). Yes, this all depends on how we define the system; humans and their emissions currently act outside of the system in most climate models and are considered an external forcing. Again, as emphasized in the paper, human emissions are a small fraction of natural emissions, so this issue of internal versus external isn’t straightforward. By analogy, in the 1970s climate models specified cloud cover, and hence clouds acted as an external forcing. However, clouds vary in response to the climate, and with interactive clouds they are now correctly regarded as a feedback and not a forcing.

Invade Taiwan? Encounter A "Hellscape"

 From Aviation Week.

Here is the link.

------------------------------------------

A new operational concept within the U.S. Indo-Pacific Command proposes to use a horde of drones to turn the Taiwan Strait into a “hellscape” if China attempts to invade Taiwan.

To realize that vision, the Defense Department has created several new efforts to solve the industrial, bureaucratic and command-and-control issues posed by unleashing thousands of drones simultaneously into the air, water and land around the roughly 100-nm channel between mainland China and Taiwan.

The concept’s plans draw on Ukraine lessons

DARPA supports Replicator with new autonomy program

The multiple projects, including the Pentagon’s recently announced Replicator and DARPA’s Rapid Experimental Missionized Autonomy (REMA) programs, seek to enhance and accelerate the programs of record already underway by each of the armed services to field tens of thousands of drones—including uncrewed air vehicles (UAV), uncrewed surface vessels (USV) and uncrewed ground vehicles—in the next several years.

As the Army leadership prepares to address the Association of the U.S. Army’s Annual Meeting that begins Oct. 9, the service’s acquisition officials are working behind the scenes to offer several ongoing drone acquisition programs for the Pentagon’s new fast-tracking and capability upgrade efforts, such as Replicator and REMA.

Speaking to reporters on Sept. 20, Doug Bush, the Army’s assistant secretary for acquisition, logistics and technology, said the service is working with the Office of the Secretary of Defense to understand how the existing programs could be included in the Replicator effort, but added that he hopes such a move would result in additional funding for the ongoing programs.

“In the unmanned aircraft space—[which is] especially where I think we could go faster—we’re limited by funding at this point,” Bush said. “But I think we’ve got some good systems that with more funding and some help on accelerating the process, the Army absolutely could contribute to the overall Replicator efforts.”

Heeding lessons from the Russia-Ukraine war, the Army already plans to award a contract next spring for up to 12,000 small quadcopter drones under the Short-Range Reconnaissance (SRR) program. Meanwhile, efforts continue to field thousands of Air-Launched Effects (ALE) as part of the Future Attack Reconnaissance Aircraft ecosystem of sensors and munitions. That ecosystem includes the tube-launched ALE-Small, which includes uncrewed aircraft systems (UAS) such as the Anduril Altius, and ALE-Large, a more secretive project that is known to involve at least UAS such as the L3Harris Technologies Red Wolf.

The Army programs would add to the thousands of drones of all sizes and performance levels in development or production across the U.S. military. In the conflicts in Iraq and Afghanistan of the last two decades, only dozens of uncrewed systems were in operation at any single time. In the decade ahead, thousands of drones could perform missions simultaneously. The transition is being informed by ongoing experiments such as the Navy’s Task Force 59 and the Air Force’s Task Force 99.

Lessons from those experiments have been folded into Hellscape, an operating concept developed over several years in secret. Adm. John Aquilino, commander of Indo-Pacific Command, revealed the idea in cryptic fashion on Aug. 28 at the Emerging Technologies for Defense Conference and Exhibition.

“The components in Indo-Pacom have been experimenting for the past 5-10 years with many of those unmanned capabilities. Those will be an asymmetric advantage,” Aquilino said. “So operational concepts that we’re working through are going to help amplify our advantages in this theater. There is a term, ‘Hellscape,’ that we use.”

The concept appears to mark a departure from operating concepts focused on employing limited numbers of expensive and exquisite weapon systems against an array of enemy targets. Naval attacks by Ukraine’s military have shown the power of weaponizing thousands of disposable, commercial drones against enemy formations, as well as targeting enemy ships in the Black Sea with large numbers of crude USVs that function like an overwhelming salvo of guided torpedoes.

Any invasion of Taiwan by China would require transporting an amphibious invasion force across the strait. The transit would expose dozens or even hundreds of large ships to drone attacks from the time they are loaded in port until the troops attempt to disembark along the Taiwanese coast. Modern naval ships are equipped with a range of defensive countermeasures to thwart a limited number of attacks from drones, torpedoes and anti-ship missiles. But the Hellscape concept proposes a way to overwhelm those countermeasures with dozens or hundreds of simultaneous drone attacks.

“I think [Hellscape] is creating a chaotic, unpredictable situation in the Taiwan Strait using unmanned systems—mostly surface systems, but maybe also some undersea systems and ones that are relatively inexpensive,” Bryan Clark, a former Navy submariner and now the director of the Center for Defense Concepts and Technology at the Hudson Institute, told Aviation Week.

“They’re taking the cue . . . from what the Ukrainians have done and [asking], ‘Can we apply that model in the Taiwan Strait?’ Because you’ve got a similar, kind of, fish-in-a-barrel sort of opportunity,” Clark added.

The concept may not be unique to the Indo-Pacific Command. Seeking to bolster asymmetric defensive capabilities against a Chinese amphibious invasion, the 2023 National Defense Report published on Sept. 12 by Taiwan’s defense ministry calls for buying 7,000 commercial drones and 700 military-grade UAVs in the next five years.

Similarly, the U.S. Defense Department unveiled the Replicator concept in late August, calling for fielding thousands of “attritable” drones across all domains within 18-24 months. Many details of the concept are still being worked out, but early descriptions by defense officials suggest the concept’s purpose is not about creating a new program of record. Instead, the goal is to create a new acquisition process that can support the speed and flexibility needed to make Indo-Pacific Command’s Hellscape possible.

“Let’s be crystal clear: Replicator is not a new program of record. We’re not creating a new bureaucracy,” Deputy Defense Secretary Kathleen Hicks said on Sept. 6 at the Defense News Conference. “And we will not be asking for new money in [fiscal 2024]. Not all problems need new money; we are problem solvers, and we intend to self-solve.”

The first priority for Replicator is to develop a new software platform. To overcome the Chinese military’s advantage in numbers, Replicator aims to focus initially on developing a capability for all-domain, attritable autonomy (ADA2), Hicks said. Details remain scarce, but her comments indicate that such a common software suite could be rapidly integrated on different types of drones, whether they are designed to fly, float or drive.

Two weeks after Hicks unveiled the Replicator concept, DARPA revealed a new program with goals similar to the ADA2 effort. REMA, launched on Sept. 12, proposes to develop a hardware adapter for different types of commercial drones and create a common mission autonomy suite that can be hosted on that adapter. Moreover, the goal of the REMA program is to produce the initial version of the adapter and software system within 18 months, which aligns with the schedule for fielding the Replicator concept.

The drones that are equipped with the REMA software would have special abilities unavailable on standard commercial drones. If a control or communications link to the drone is lost, the REMA autonomy software would allow them to continue performing some aspects of their mission by making their own decisions. Software updates and upgrades could be developed, tested and released to all of the REMA adapter-equipped drones in monthly cycles.

“REMA sounds like it’s pretty much aligned with this idea,” Clark said. “We want to make the [Replicator] vehicles as commoditized as possible, and we’re going to focus our technology development effort on the application layer that rides on top of their control software.”

Meanwhile, the armed services are continuing to develop military-grade drones. The SRR program awarded a contract to Skydio last year to deliver 1,000 RQ-28A quadcopters. A follow-on award is expected next spring for up to 12,000 more drones, with Skydio, Teal and Vantage Robotics competing for the order.

Sunday, September 24, 2023

A reason for school choice

 From Jonathan Turley.

JT is on target.

The problem extends far beyond the particular cases JT mentions.

Why isn't there more school choice and more charter schools? Because of the teachers unions and the politicians they support.

As is so often true, the government is the problem, not the solution. For the most part, choice produces better results and hence is a better strategy.

---------------------------------------

Forty Percent of Baltimore’s Public Schools Do Not Have a Single Student Proficient in Math

I have previously written about the near total meltdown of our public education system in some major cities. Prominent in these discussions has been Baltimore, which continues to fail inner city children in teaching the most basic subjects. This week, that failure is on full display with a report that forty percent of Baltimore’s schools lack a single student who has achieved grade-level proficiency in math. In various cities, the response of administrators has often been to lower the standards to continue to move kids out of the system without the skills needed to thrive in this economy.

In a prior column, I was particularly moved by the frustration of a mother in Baltimore who complained that her son was in the top half of his class despite failing all but three of his classes. Graduating students without proficiency in English or Math is the worst possible path for these students, schools and society.

The crisis continues with the new report that looked at 32 high schools administering the standardized test and found that 13 produced no students who proved proficient in math. Three-fourths of the Baltimore students taking the test were given the lowest possible score of one out of four.

At the five “elite” high schools, only 11.4 percent of students were math proficient.

We previously discussed the Baltimore public educational system as an example of where billions of dollars have been spent on a system with continuing failing scores and standards. Recent data adds another chilling statistic: 41 percent of students in the Baltimore system have a 1.0 (D) GPA or less.

Public schools and boards are making the case for school choice advocates with failing scores and rising controversies.

Baltimore City Public Schools responded to this shocking report with an effective shrug: “We acknowledge that some of our high school students continue to experience challenges in math following the pandemic, especially if they were struggling beforehand.” However, the system was failing these students long before the pandemic.

BCPS has a $1.7 billion budget and was given an additional $799 million of federal Elementary and Secondary School Emergency Relief (ESSER) funds this school year. Despite the massive infusion of money, the administrators have demonstrably failed these students, who are left with few options in the workplace beyond low-level jobs.

Baltimore is not alone. The entire state of Minnesota reported a zero percent math proficiency rate in 75 of its schools during the 2022-23 school year.

What is baffling is that voters do not blame their political leadership for this disaster. Whole generations are being lost due to the inability of these districts to reach mere proficiency in basic subjects. Yet there seem to be few political consequences for political leaders. Many seem to just accept that this is the fate of inner city children as politicians focus on other issues.

Again, the response of the Baltimore school district is maddening: “The work is underway to improve outcomes for students. But treating student achievement as an ‘if-then’ proposition does a great disservice to our community.”

I am not sure what the “if-then proposition” may be, but the greatest disservice to the community is the failure to offer these inner city kids a basic education to be able to succeed in the workplace. The “work has been underway” for decades with lost generations of kids lured into criminal activities by the lack of any real opportunity to advance in our society. As a father of four, I cannot imagine how desperate many of these parents must be in cities like Baltimore where schools offer little hope for the future.

We have been discussing these low scores for years with little progress. Baltimore and other cities simply demand more money while deflecting any responsibility for their poor records. The true cost is not borne by the teachers, the unions, or the administrators. It is borne by these families who see the same failures replicated in every generation, processing their children out of school without needed skills.

Saturday, September 16, 2023

Woke hypocrisy and stupidity

 From Victor Davis Hanson. VDH is on target.

-----------------------------

When the progressive woke revolution took over traditional America, matters soon reached the level of the ridiculous.

Take the following examples of woke craziness and hypocrisy, perhaps last best witnessed during Mao Zedong's Cultural Revolution.

The Biden administration from its outset wished to neuter immigration law. It sought to alter radically the demography of the U.S. by stopping the border wall and allowing into the United States anyone who could walk across the southern border.

Over seven million did just that. Meanwhile, President Joe Biden ignored the role of the Mexican cartels in causing nearly 100,000 ANNUAL American fentanyl deaths.

Then border states finally wised up.

They grasped that the entire open-borders, “new Democratic majority” left-wing braggadocio was predicated on its hypocritical architects staying as far away as possible from their new constituents.

So cash-strapped border states started busing their illegal aliens to sanctuary blue-state jurisdictions.

Almost immediately, once magnanimous liberals, whether in Martha's Vineyard, Chicago, or Manhattan, stopped virtue-signaling their support for open borders.

Instead, soon they went berserk over the influx.

So now an embarrassed Biden administration still wishes illegal aliens to keep coming, but to stay far away from their advocates — by forcing them to remain in Texas.

That means the president has redefined the U.S. border. It rests now apparently north of Texas, as Biden cedes sovereignty to Mexico.

Pre-civilizational greens in California prefer blowing up dams to building them.

They couldn't care less that their targeted reservoirs help store water in drought, prevent flooding, enhance irrigation, offer recreation, and generate clean hydroelectric power.

Now an absurd green California is currently destroying four dams on the Klamath River. Adding insult to injury, it is paying the half-billion-dollar demolition cost in part through a water bond that state voters once thought would build new — not explode existing — dams.

The Biden administration is mandating new dates when electric vehicles will be all but mandatory.

To prove their current viability, Energy Secretary Jennifer Granholm led a performance art EV caravan on a long road trip.

When she found insufficient charging stations to continue her media stunt, she sent a gas-powered car ahead to block open charging stations and deny them to other EVs ahead in line.

Only that way could Granholm ensure that her arriving energy-starved motorcade might find rare empty charger stalls.

In some California charging stations, diesel generators are needed to produce enough “clean” electricity to power the stalls.

The state has steadily dismantled many of its nuclear, oil, and coal power plants. It refuses to build new natural gas generation plants.

Naturally, California's heavily subsidized solar and wind plants now produce too much energy during the day and almost nothing at night.

So the state now begs residents to charge their EVs only during the day. Then at night, Californians may soon be asked to plug them in again to transfer what is left in their batteries into the state grid.

Apparently only that way will there be enough expropriated “green” electricity for 41 million state residents after dark.

One of the loudest leftist voices calling to defund the police and decriminalize violent crimes in the post-George Floyd era was Shivanthi Sathanandan, the 2nd vice chairwoman of the Minnesota Democratic-Farmer-Labor Party.

She was recently not shy about defunding: “We are going to dismantle the Minneapolis Police Department. Say it with me. DISMANTLE.”

But recently the loud Sathanandan was a victim of the very crime wave she helped to spawn.

Last week, four armed thugs carjacked her automobile. They beat her up in front of her children at her own home, and sped off without fear of arrest.

The reaction of the arch police dismantler and decriminalizer on her road to Damascus?

The now bruised and bleeding activist for the first time became livid that criminals had taken over her Minneapolis: “Look at my face. REMEMBER ME when you are thinking about supporting letting juveniles and young people out of custody to roam our streets instead of HOLDING THEM ACCOUNTABLE FOR THEIR ACTIONS.”

Andrea Smith was an ethnic studies professor at the University of California, Riverside. But now she has been forced out after getting caught lying that she was Native American.

Prior to her outing, she was well known for damning “white women” (like herself) who opted to “become Indians” out of guilt, and (like her) for careerist advantage.

The common theme of these absurdities is how contrary to human nature, impractical, and destructive is utopian wokism, whether in matters of energy, race, crime, or illegal immigration.

There are two other characteristics of the Woke Revolution.

One, it depends solely on its advocates never having to experience firsthand any of the nonsense they inflict on others.

And two, dangerous zealots with titles before, and letters after, their names prove to be quite stupid — and dangerous.

Thursday, September 14, 2023

Detecting schizophrenia with AI

 From Nature.com.

Here is the link.

Here are some excerpts.

-----------------------------------------

Detecting schizophrenia with 3D structural brain MRI using deep learning

Abstract

Schizophrenia is a chronic neuropsychiatric disorder that causes distinct structural alterations within the brain. We hypothesize that deep learning applied to a structural neuroimaging dataset could detect disease-related alteration and improve classification and diagnostic accuracy. We tested this hypothesis using a single, widely available, and conventional T1-weighted MRI scan, from which we extracted the 3D whole-brain structure using standard post-processing methods. A deep learning model was then developed, optimized, and evaluated on three open datasets with T1-weighted MRI scans of patients with schizophrenia. Our proposed model outperformed the benchmark model, which was also trained with structural MR images using a 3D CNN architecture. Our model is capable of almost perfectly (area under the ROC curve = 0.987) distinguishing schizophrenia patients from healthy controls on unseen structural MRI scans. Regional analysis localized subcortical regions and ventricles as the most predictive brain regions. Subcortical structures serve a pivotal role in cognitive, affective, and social functions in humans, and structural abnormalities of these regions have been associated with schizophrenia. Our finding corroborates that schizophrenia is associated with widespread alterations in subcortical brain structure and the subcortical structural information provides prominent features in diagnostic classification. Together, these results further demonstrate the potential of deep learning to improve schizophrenia diagnosis and identify its structural neuroimaging signatures from a single, standard T1-weighted brain MRI.

Introduction

Schizophrenia is a progressive neuropsychiatric disorder that is characterized by structural changes within the brain. Recent findings from a large meta-analysis suggest that schizophrenia is associated with gray matter reductions across multiple subcortical regions including the hippocampus, amygdala, caudate, and thalamus, with structural changes in shape within those regions supporting changes in functional brain networks1. In addition to the altered shape of such brain structures, schizophrenia is also associated with significantly greater mean volume variability of the temporal cortex, thalamus, putamen, and third ventricle2. Other studies also affirm the enlargement of ventricles in schizophrenia3,4. While gray matter reductions are most consistently reported in the subcortical regions, reductions have also been identified in areas such as the prefrontal, temporal, cingulate, and cerebellar cortices5,6. Loss of gray matter volume has been shown to not only mark the onset of schizophrenia but also progress alongside the illness7.

Despite these documented changes, accurate and rapid detection of schizophrenia remains a pressing problem; previous studies are limited to only characterizing structural abnormalities at a group level, with no concrete method to make individual diagnoses at a subject level. Additionally, the diagnosis of schizophrenia based on DSM-5 criteria is costly both in terms of time and resources, without ensuring objectivity. Therefore, it is imperative to develop an objective screening tool to diagnose schizophrenia and potentially improve patient prognosis by allowing for earlier intervention.

Various attempts have been proposed to take advantage of the structural alterations present in schizophrenia for classification using neuroimaging data. Machine learning algorithms have historically presented the ability to classify psychiatric disorders in this manner8,9. In particular, the support vector machine (SVM), a supervised learning algorithm able to capture non-linear patterns in high-dimensional data, has been most prevalent in schizophrenia classification. Other popular machine learning algorithms for schizophrenia classification include multivariate pattern analysis, linear discriminant analysis, and random forest8,10. While standard machine learning approaches have demonstrated compelling results, their performance highly depends on the validity of manually extracted features8. Such features are traditionally extracted based on a combination of previously known disease characteristics and automatic feature selection algorithms11. These features may not completely encode the subtle neurological differences associated with schizophrenia; alternatively, they may encode too much unnecessary information requiring additional feature reduction12.

Deep learning has recently emerged as a new approach demonstrating superior performance over standard machine learning algorithms to classify neurological diseases using structural MRI data. Specifically, Convolutional Neural Networks (CNNs) can learn and encode the significant features necessary for classification and have become popular in medical image analysis13,14,15. This property makes CNNs uniquely suited to tasks like schizophrenia classification, where the specific features selected can dramatically impact model performance. Some studies have already demonstrated the utility of CNNs for schizophrenia classification. Oh J et al.16 achieved an impressive state-of-the-art performance (area under the ROC curve = 0.96) using a 3D CNN for schizophrenia classification based on structural MRI data, and their model was therefore used as the benchmark for comparison. Nevertheless, they struggled to generalize well on an unseen private dataset. Their inconsistent performance may be attributed to the dataset and patient variability as well as certain pre-processing choices, such as the inclusion of whole-head as opposed to whole-brain MRI data and severe downsampling. Moreover, their region of interest analysis was limited and did not investigate brain structures in depth to inform specific changes in structural features associated with schizophrenia. Hu et al. combined structural and diffusion MRI scans for schizophrenia classification and found that 3D CNN models could outperform 2D pre-trained CNN models as well as multiple standard machine learning algorithms like SVM. Despite this, their best 3D model only reached an area under the ROC curve of 0.84 (ref. 17). As a consequence, though deep learning has advanced neuroimaging-based schizophrenia classification, the preprocessing and acquisition of large datasets coupled with the achievement of high model performance and generalization remains a great challenge.

In this study, we not only address the limitations in schizophrenia classification with T1-weighted (T1W) MRI data but also take advantage of class activation maps (CAM) in a deep learning network to visualize informative regions with disease vulnerability. Our main contributions include the following: firstly, we develop a 3D CNN using structural MRI scans to yield a performance better than the benchmark model16 for schizophrenia classification; and secondly, we apply gradient class activation maps to localize the brain regions related to schizophrenia identification. By visualizing feature activations, we provide further evidence that the structures of subcortical regions and ventricular areas1,2 are affected in schizophrenia.
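
For readers unfamiliar with the model class being described, here is a minimal 3D CNN sketch in PyTorch (an illustrative stand-in, not the authors’ published architecture; the layer widths and the 96-voxel cubic input are arbitrary assumptions). Gradient class activation maps would then be computed from the last convolutional feature maps of a network like this.

    import torch
    import torch.nn as nn

    class Small3DCNN(nn.Module):
        """Toy 3D CNN for binary classification of a T1-weighted brain volume."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.BatchNorm3d(8), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),          # global average pooling over the whole volume
            )
            self.classifier = nn.Linear(32, 1)    # single logit: schizophrenia vs. healthy control

        def forward(self, x):                     # x: (batch, 1, depth, height, width)
            return self.classifier(self.features(x).flatten(1))

    model = Small3DCNN()
    dummy = torch.randn(2, 1, 96, 96, 96)         # two fake skull-stripped, whole-brain volumes
    probs = torch.sigmoid(model(dummy))           # predicted probability of the schizophrenia class
    print(probs.shape)                            # torch.Size([2, 1])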

Outcomes in patients who received organ transplants from donors with melanoma

 From practiceupdate.com.

Here is a link.

----------------------------

In this systematic literature review, the authors sought to clarify outcomes in recipients of solid organ transplants from donors with a history of melanoma. Given the shortage of organs available for donation and the long waiting lists, it is important to elucidate the risks to recipients of accepting organs from potentially suboptimal donors such as those with a known history of malignancy.

The authors identified 181 total reported cases of melanoma in organ donors from 17 articles. However, of these, only 41 cases included recipient outcomes. In total, 75 individuals received organs from these 41 donors with melanoma, and of these, a striking 43 (63%) recipients developed melanoma after transplant. Melanoma diagnosis occurred from 3 months to 6 years after transplant, and 24 of these patients died. Follow-up of the recipients who did not develop melanoma ranged from 12 months to 5 years, although 11 had no reported follow-up duration and 5 were lost to follow-up.

As expected, given the nature of this retrospective literature review, the quality of data is inconsistent and therefore, inference is limited. As the authors acknowledge, the sample includes only donors with a confirmed melanoma diagnosis. Several excluded cases were suspicious for donor tumor origin but lacked a confirmed donor melanoma diagnosis. Additionally, there was no comparator control group of recipients from donors without melanoma, in whom the rate of de novo melanoma could be compared with the study group to calculate a relative risk.

In spite of these weaknesses inherent to the study design and variable quality of available literature, this article makes an important contribution to our limited understanding of the outcomes in recipients of solid organ transplant from donors with melanoma, highlighting the substantial numbers of melanomas that do develop in recipients, which is likely an underestimate for the aforementioned reasons. Moreover, as might be expected, a substantial mortality rate was observed among recipients who developed donor-associated melanoma in the context of significant iatrogenic immunosuppression.

Importantly, the authors also point out that there are no standardized protocols to check for undiagnosed or unreported history of melanoma in candidate donors, which may help reduce the likelihood of inadvertent transfer of organs from a melanoma-affected source. This article represents a step towards a better understanding of outcomes for this population of potential organ recipients and highlights the need for further study in this arena.

More on the Pratt problem – and FAA passivity

 From Aviation Week. Article by Andy Pasztor

-----------------------------------------

Opinion: The FAA’s Safety System Is Starting to Show Cracks

The latest safety problem affecting Pratt & Whitney’s geared turbofan engines highlights the FAA’s lack of resolve confronting persistent safety challenges.

The engine troubles, following durability issues that have plagued the model for years, have sparked serious concerns among airline executives worldwide because hundreds of popular single-aisle Airbus A320neo jets face accelerated engine inspections, and some may need emergency part replacements soon. Certain carriers are girding for financial heartburn and painful schedule disruptions.

What hasn’t received enough attention, however, is the FAA’s initial limp response to these long-simmering challenges. More scrutiny also is warranted to examine its ambivalence in combating a broad range of hazards, including overreliance on cockpit automation and little-known shortcomings in analyzing commuter and charter incidents.

The FAA had a distinctly passive reaction to heightened dangers posed by a rare but potentially catastrophic engine manufacturing defect. Pratt started to uncover the new hazards last winter but did not reveal them publicly until summer. In its surprise July disclosure, the company warned that contaminated powder metal used in certain turbine disks required stepped-up checks of many more geared turbofan engines than previously anticipated.

Before that announcement, FAA officials quietly weighed forceful moves, from ordering immediate ultrasound inspections for a segment of the fleet to temporarily grounding some jets, according to industry officials not authorized to comment publicly. But company executives pushed back hard, citing negative economic and public relations fallout.

The FAA opted against mandates, forgoing an immediate airworthiness directive—the way U.S. regulators typically handle the most pressing unsafe conditions.

Instead, FAA officials permitted Pratt to take the lead with its own voluntary inspections, despite a pattern of mistaken company predictions regarding failure modes on other engines. The agency effectively ceded control of messaging about the extent of hazards, prompting news reports that downplayed the latest dangers and used the term “recall,” usually frowned on in aviation settings.

It took another month for a high-priority FAA directive to spell out the additional risks. Without waiting for typical public comment, the FAA mandated a late-September deadline for initial inspections. Following a series of more limited mandatory directives covering geared turbofan engines, the FAA ultimately acknowledged internal cracks could emerge faster than projected, potentially causing “damage to the engine, damage to the airplane and loss of the airplane.”

The longer inspections take, the August document said, “the higher the probability of failure.” An FAA spokesman didn’t elaborate on the timing of the directive’s release. Over the years, the agency decided against strong and rapid action in other areas:

  • Years of bureaucratic disagreements delayed definitive guidance recommending that airline crews perform substantially more manual flying to reverse excessive pilot dependence on automation. It wasn’t until last fall that the FAA formally urged pilots to sometimes hand-fly “entire departure and arrival routes” or “potentially the entire flight.”
  • Agency leaders also have been reluctant to revamp some voluntary incident reporting efforts. Those programs have been immensely successful at alleviating risks pertaining to major carriers. But former FAA officials and other critics describe how agency staff shortages and reorganizations can impede effective data sharing by regional carriers and charter operators.
  • Long before the recent flurry of high-profile runway incidents, outside safety experts urged tougher action to curb spikes in midair close calls around hubs. Yielding to industry pressure, for example, the FAA over roughly a decade routinely allowed pilots to turn off critical airborne collision-avoidance warnings during specified approaches to Denver International Airport. That increased capacity on selected runways, but a drumbeat of incidents finally soured the FAA on the practice. Last August, it publicly warned of significant risks if pilots forget to turn the traffic collision avoidance warnings back on after a missed approach.

The last 18 months featured a revolving door of acting agency administrators and other interim policymakers. These officials often lacked standing to take decisive action. President Joe Biden’s latest nominee for agency chief, former FAA Deputy Administrator Michael Whitaker, was delayed by wrangling with labor.

Despite FAA missteps, industry has maintained a phenomenal safety record. Since 2009, scheduled U.S. passenger airlines have carried the equivalent of the entire world’s population—without a single fatal jetliner crash.

Lately, the FAA emphasizes that even one close call is too many. Considering Pratt’s engine woes, the agency has fallen short of that vaunted standard.

For those who have a fear of flying: Pratt engine problem

 Here is a link to the article.

Here are some excerpts.

-----------------------------------------------

Some 3,000 engines, including PW1000Gs of all types and IAE V2500s, built from mid-2015 through mid-2021, may have parts with contaminated powder metal (PM). Cracking from PM contamination has been found in high-pressure turbine (HPT) Stage 1 and 2 disks, or hubs, installed in the motors. Pratt is also inspecting some high-pressure compressor (HPC) disks built at the same time, RTX revealed. Most of the affected engines are PW1100Gs found on A320neo-family aircraft.

Clogged engine overhaul shops and a fast-tracking of necessary inspections on higher-time PW1100G GTFs will likely drive repair turnaround times of up to 300 days per engine and could ground 650 Airbus A320neos at one time early next year, RTX disclosed. According to Aviation Week Network’s Fleet Discovery database, 1,354 Pratt-powered A320neo-family aircraft are currently in service, parked, stored or in parked/reserve status.

Fleet groundings will “average” 350 at any given time through 2026, according to RTX President and Chief Operating Officer Chris Calio.
-----------
In July, Pratt had announced that previous PW1100G parts inspection intervals, developed after the problem was first uncovered in 2020, were not aggressive enough to flag cracks that the contamination can cause. It said as many as 1,200 engines would need to be pulled in the next year, including up to 200 by Oct. 1. Some of the checks would overlap with scheduled shop visits, reducing unplanned disruptions and costs.

The revised figures now lower the number of engines that need immediate attention but narrow the removal window. The result is higher costs for Pratt and its PW1000G partners as its already full overhaul network faces a wave of engines that require extensive work scopes.
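
For a rough sense of scale, here is a back-of-the-envelope calculation (in Python) using only the figures quoted above. Treating the 1,354 Pratt-powered A320neo-family aircraft tracked by Fleet Discovery as the denominator is my assumption, not how RTX framed its estimates.

    # Back-of-the-envelope view of the grounding figures quoted above.
    # Assumption: the 1,354 Pratt-powered A320neo-family aircraft tracked by
    # Aviation Week's Fleet Discovery database is the relevant denominator.

    fleet = 1354          # in service, parked, stored or in parked/reserve status
    peak_grounded = 650   # RTX estimate of aircraft grounded at one time early next year
    avg_grounded = 350    # RTX estimate of average groundings at any given time through 2026

    print(f"Peak groundings:    {peak_grounded / fleet:.0%} of the tracked fleet")   # ~48%
    print(f"Average groundings: {avg_grounded / fleet:.0%} of the tracked fleet")    # ~26%

In other words, the projected peak would idle roughly half of the tracked fleet, and the 2026 average roughly a quarter of it.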

Friday, September 08, 2023

Smart people can exhibit really stupid behavior when emotions override intellect

 Jonathan Turley gets it right about Laurence Tribe.

Tribe is a Harvard professor, so it is reasonable to assume that he has impressive hardware (basic smarts). But as Turley points out, Tribe's software is even worse than Microsoft's, i.e., the intellectual component is dominated by the emotional component.

But the message is not just about Tribe. It is that smart people can be really stupid.

Here is Turley's comment.

-------------------------------------------

Ragefully Wrong: A Response to Professor Laurence Tribe

Below is my column in the New York Post in response to the attack this week by Harvard Professor Laurence Tribe. I am honestly saddened by the ad hominem attacks that have become commonplace with many academics like Tribe. There was a time when legal disagreements could be passionate but not personal. The use of personal insults and vulgar trash talking was avoided in our profession. Now even law deans have called Supreme Court justices “hacks” to the delight of their followers. I have always said that there are good-faith arguments on both sides of the 14th Amendment theory despite my strong disagreement with the theory. The public would benefit from that debate based on precedent rather than personalities.

Here is the column:

This week, CNN’s “Erin Burnett OutFront” offered what has become a staple of liberal cable news: Harvard law professor Laurence Tribe assuring Democrats that they are justified in an unconstitutional effort while attacking opposing views as “nonsense.”

I was singled out on this occasion for Tribe’s latest personal attack because I voiced a legal opinion different from his own.

Being attacked by Tribe as a “hack” is not as much of a distinction as one might expect.

Indeed, it is relatively tame in comparison to Tribe’s past vulgar and juvenile assaults on others.

Tribe has attacked figures like Mitch McConnell as “McTurtle” and “flagrant d**khead.”

He attacked former Attorney General Bill Barr’s religion and thrills his followers by referring to Trump as a “Dick” or “dickhead in chief.”

Tribe often shows little patience for the niceties of constitutional law or tradition.

He has supported the call for packing the Supreme Court as long overdue.

He has also supported an array of debunked conspiracy theories like denouncing Barr as guilty of the “monstrous” act of shooting protesters in Lafayette Park with rubber bullets to make way for a photo op — a claim found to be utterly untrue.

Some of Tribe’s conspiracy theories are quickly disproven — like his sensational claims of an anti-Trump figure being killed in Russia.

Nevertheless, Tribe remains the “break the glass” academic for Democratic leaders when political expedience requires a patina of constitutional legitimacy.

I have long disagreed with Tribe over his strikingly convenient interpretations of the Constitution.

We crossed swords decades ago during the impeachment of Bill Clinton, when Tribe argued that it was not an impeachable offense for Clinton to lie under oath.

Even though a federal court and even Democrats admitted that Clinton committed the crime of perjury, Tribe assured Democrats that it fell entirely outside of the constitutional standard of a high crime and misdemeanor.

However, Tribe would later say that Trump’s call to Ukraine was clearly and undeniably impeachable.

Indeed, Tribe insisted that Trump could be charged with a long list of criminal charges that no prosecutor ever pursued — including treason.

Tribe even declared Trump guilty of the attempted murder of Vice President Mike Pence on January 6, 2021.

Even though no prosecutor has ever suggested such a charge, Tribe assured CNN that the crime was already established “without any doubt, beyond a reasonable doubt, beyond any doubt.”

That is the key to Tribe’s appeal: the absence of doubt.

Every constitutional road seems to inevitably lead to where Democrats want to go — from court packing to unilateral executive action.

Take student loan forgiveness.

Even former Speaker Nancy Pelosi acknowledged that the effort to wipe out hundreds of billions of dollars of student loans would be clearly unconstitutional.

However, Tribe assured President Biden that it was entirely legal.

It was later found unconstitutional by the Supreme Court.

Tribe was also there to support Biden — when no other legal expert was — on the national eviction moratorium.

The problem, Biden admitted, was his own lawyers told him that it would be flagrantly unconstitutional.

That is when then-Speaker Nancy Pelosi gave Biden the familiar advice: Just call Tribe.

Biden then cited Tribe as assuring him that he had the authority to act alone.

It was, of course, then quickly found to be unconstitutional.

Even Democratic laws that were treated as laughable were found lawful by Tribe.

For example, the “Resistance” in California passed a clearly unconstitutional law that would have barred presidential candidates from appearing on the state ballots without disclosing tax records.

Tribe heralded the law as clearly constitutional and lambasted law professors stating the obvious that it would be struck down.

It was not just struck down by the California Supreme Court but struck down unanimously.

Likewise, California Governor Gavin Newsom pushed for the passage of an anti-gun rights law that was used to mock the holding of the Supreme Court’s abortion ruling in Dobbs.

Yet Tribe declared the effort as inspired and attacked those of us who stated that it was a political stunt that would be found legally invalid.

It was quickly enjoined by a court as unconstitutional.

In an age of rage, the most irate reigns supreme.

And there is no one who brings greater righteous anger than Laurence Tribe.

That is evident in arguably the most dangerous theory now being pushed by Tribe — and the source of his latest attack on me.

Democrats are pushing a new interpretation of the 14th Amendment that would allow state officials to bar Trump from the ballots — preventing citizens from voting for the candidate now tied with Joe Biden for the 2024 election.

This is all being argued by Tribe and others as “protecting democracy,” by blocking a democratic vote.

Democrats have claimed that the 14th Amendment prevents Trump from running because he supported an “insurrection or rebellion.”

They have argued that this long dormant clause can be used to block not just Trump but 120 Republicans in Congress from running for office.

I have long rejected this theory as contrary to the text and history of the 14th Amendment.

Even figures attacked (wrongly) by Trump, such as Georgia Secretary of State Brad Raffensperger, have denounced this theory as dangerous and wrong.

Tribe was set off in his latest CNN interview after I noted that this theory lacks any limiting principle.

Advocates are suggesting that courts could then start banning candidates by interpreting riots as insurrections.

After I noted that the amendment was ratified after an actual rebellion where hundreds of thousands died, Tribe declared such comparisons “nonsense.”

He asked, “How many have to die before we enforce this? There were several who died at the Capitol during the insurrection.”

My point was not to do a head count, but to note that (since Tribe believes that there is no need for a congressional vote) one would at least expect a charge of rebellion or insurrection against Trump.

Yet Trump has not even been charged with incitement.

Not even Special Counsel Jack Smith has charged him with incitement in his two indictments.

The 14th Amendment theory is the perfect vehicle for the age of rage, and Tribe, again, has supplied the perfect rage-filled analysis to support it.

The merits matter little in these times.

You can be wrong so long as you are righteously and outrageously wrong.

Monday, September 04, 2023

Sugar and liver cancer and chronic liver disease

From the Journal of the American Medical Association.

Here is a link to the paper.

Here is a summary.

----------------------------------------

Sugar-Sweetened and Artificially Sweetened Beverages and Risk of Liver Cancer and Chronic Liver Disease Mortality
Key Points

Question

Is greater intake of sugar-sweetened beverages associated with greater risk of liver cancer or chronic liver disease mortality?

Findings

In 98 786 postmenopausal women followed up for a median of 20.9 years, compared with consuming 3 or fewer servings of sugar-sweetened beverages per month, women consuming 1 or more servings per day had significantly higher rates of liver cancer (18.0 vs 10.3 per 100 000 person-years; adjusted hazard ratio [HR], 1.85) and chronic liver disease mortality (17.7 vs 7.1 per 100 000 person-years; adjusted HR, 1.68).

Meaning 

Compared with 3 or fewer sugar-sweetened beverages per month, consuming 1 or more sugar-sweetened beverages per day was associated with a significantly higher incidence of liver cancer and death from chronic liver diseases.
Abstract

Importance 

Approximately 65% of adults in the US consume sugar-sweetened beverages daily.

Objective 

To study the associations between intake of sugar-sweetened beverages, artificially sweetened beverages, and incidence of liver cancer and chronic liver disease mortality.

Design, Setting, and Participants

A prospective cohort study of 98 786 postmenopausal women aged 50 to 79 years enrolled in the Women’s Health Initiative from 1993 to 1998 at 40 clinical centers in the US, with follow-up through March 1, 2020.

Exposures 

Sugar-sweetened beverage intake was assessed based on a food frequency questionnaire administered at baseline and defined as the sum of regular soft drinks and fruit drinks (not including fruit juice); artificially sweetened beverage intake was measured at 3-year follow-up.

Main Outcomes and Measures 

The primary outcomes were (1) liver cancer incidence, and (2) mortality due to chronic liver disease, defined as death from nonalcoholic fatty liver disease, liver fibrosis, cirrhosis, alcoholic liver diseases, and chronic hepatitis. Cox proportional hazards regression models were used to estimate multivariable hazard ratios (HRs) and 95% CIs for liver cancer incidence and for chronic liver disease mortality, adjusting for potential confounders including demographics and lifestyle factors.
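
For readers unfamiliar with the method, below is a minimal sketch of how a Cox proportional hazards model of this general form can be fit in Python with the lifelines package. The data frame, column names, covariates, and sample size are hypothetical placeholders; this is not the authors’ code or data.

    # Minimal sketch of a Cox proportional hazards fit (hypothetical data, not the study's).
    # Requires: pip install lifelines pandas numpy
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 1000  # hypothetical sample size

    # Hypothetical analysis dataset: follow-up time in years, event indicator,
    # exposure group (1 = >=1 sugar-sweetened beverage/day, 0 = <=3/month),
    # and two example covariates standing in for the demographic/lifestyle adjustments.
    df = pd.DataFrame({
        "followup_years": rng.uniform(1, 21, n),
        "event": rng.integers(0, 2, n),      # 1 = liver cancer diagnosed, 0 = censored
        "ssb_daily": rng.integers(0, 2, n),  # exposure indicator
        "age": rng.uniform(50, 79, n),
        "bmi": rng.normal(27, 5, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_years", event_col="event")
    cph.print_summary()                     # coefficients, hazard ratios (exp(coef)), 95% CIs
    print(cph.hazard_ratios_["ssb_daily"])  # adjusted HR for the exposure of interest

The actual study adjusted for many more covariates; the point here is only the shape of the analysis: a time-to-event outcome with censoring, an exposure of interest, and confounders in one regression model.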

Results 

During a median follow-up of 20.9 years, 207 women developed liver cancer and 148 died from chronic liver disease. At baseline, 6.8% of women consumed 1 or more sugar-sweetened beverage servings per day, and 13.1% consumed 1 or more artificially sweetened beverage servings per day at 3-year follow-up. Compared with intake of 3 or fewer servings of sugar-sweetened beverages per month, those who consumed 1 or more servings per day had a significantly higher risk of liver cancer (18.0 vs 10.3 per 100 000 person-years [P value for trend = .02]; adjusted HR, 1.85 [95% CI, 1.16-2.96]; P = .01) and chronic liver disease mortality (17.7 vs 7.1 per 100 000 person-years [P value for trend <.001]; adjusted HR, 1.68 [95% CI, 1.03-2.75]; P = .04). Compared with intake of 3 or fewer artificially sweetened beverages per month, individuals who consumed 1 or more artificially sweetened beverages per day did not have significantly increased incidence of liver cancer (11.8 vs 10.2 per 100 000 person-years [P value for trend = .70]; adjusted HR, 1.17 [95% CI, 0.70-1.94]; P = .55) or chronic liver disease mortality (7.1 vs 5.3 per 100 000 person-years [P value for trend = .32]; adjusted HR, 0.95 [95% CI, 0.49-1.84]; P = .88).
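
To see how the reported rates relate to the adjusted hazard ratios, here is a small arithmetic check using only the numbers quoted above; the crude rate ratios differ from the adjusted HRs because the latter account for confounders such as demographics and lifestyle factors.

    # Crude rate ratios from the incidence rates reported above (per 100 000 person-years),
    # for comparison with the adjusted HRs from the Cox models.

    crude_rr_cancer = 18.0 / 10.3    # liver cancer: ~1.75 vs adjusted HR 1.85
    crude_rr_mortality = 17.7 / 7.1  # chronic liver disease mortality: ~2.49 vs adjusted HR 1.68

    print(f"Crude rate ratio, liver cancer:                    {crude_rr_cancer:.2f}")
    print(f"Crude rate ratio, chronic liver disease mortality: {crude_rr_mortality:.2f}")

The gap for chronic liver disease mortality (crude 2.49 vs adjusted 1.68) gives a sense of how much the covariate adjustment changes the estimate.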

Conclusions and Relevance 

In postmenopausal women, compared with consuming 3 or fewer servings of sugar-sweetened beverages per month, those who consumed 1 or more sugar-sweetened beverages per day had a higher incidence of liver cancer and death from chronic liver disease. Future studies should confirm these findings and identify the biological pathways of these associations.