WW II Key Initiatives 2: “Thach Weave” Tactic to Counter More-Agile Japanese Fighter Planes

This is the second of a series of occasional posts on observations of how some individual initiatives made strategic impacts on World War II operations and outcome.  While there were innumerable acts of initiative and heroism that occurred during this conflict, I will focus on actions that shifted the entire capabilities of their side.

It’s the summer of 1941. The war in Europe, mainly between Germany and Britain, had been grinding on for around two years, with Hitler in control of nearly all of Europe. The Germans then attacked the Soviet Union and quickly conquered enormous stretches of territory. It looked like the Nazis were winning. Relations with Japan, which aimed to take over the western Pacific region, were uneasy. The Japanese had already conquered Korea and coastal China, and were eyeing the resource-rich lands of Southeast Asia and Indonesia. It was a tense time.

The Japanese military had been building up for decades, preparing for a war with the United States for control of the western Pacific. They developed cutting-edge military hardware, including the world’s biggest battleships, superior torpedoes, and a large, well-trained aircraft carrier force. They also produced a new fighter plane, dubbed the “Zero” by Western observers.

Intelligence reports started to trickle in that the Zero was incredibly agile: it could outrun, out-climb, and out-turn anything the U.S. could put in the air, and it packed a wallop with twin 20 mm cannons. Its designers achieved this performance with a modestly powered engine by making the airframe supremely light.

As I understand it, the U.S. military establishment’s response to this intel was fairly anemic. It was such awful news that, seemingly, they buried their heads in the sand and just hoped it wasn’t true. Why was this so disastrous? Well, since the days of the Red Baron in World War I, the way you shot down your opponent in a dogfight was to turn in a narrower circle than him, or climb faster and roll, to get behind him. Get him in your gunsights, fire a burst of incendiary machine-gun bullets to ignite his gasoline fuel tanks, and down he goes. If the Zero really was that agile, then it could shoot down any U.S. plane with impunity. Even if you started to line up behind a Zero for a shot, he could execute a tight turning maneuver and end up on your tail, every time. Ouch.

A U.S. Navy aviator named John Thach, from Pine Bluff, Arkansas, did take these reports on the Zero seriously. He racked his brains trying to figure out a way for the clunky American Wildcat fighters to take on the Zeros. He knew the American pilots were well trained and were good shots, if only they could get some crucial four-second (?) windows of time to line up on the enemy planes.

So, he spent night after night that summer, using matchsticks on his kitchen table, trying to invent tactics that would neutralize the advantages of the Japanese fighters. He found that the standard three-plane section (one leader, two wingmen) was too clumsy for rapid maneuvering. He settled on having two sections of two planes each. The two sections would fly parallel, several hundred yards apart. If one section got attacked, the two sections would immediately make sharp turns towards each other, and cross paths. The planes of the non-attacked section could then take a head-on shot at the enemy plane(s) that were tailing the attacked section.

Here is a diagram of how this works:

Source: U. S. Naval Institute

The blue planes are the good guys, with a section on the left and one on the right. At the bottom of the diagram, an enemy plane (green) gets on the tail of a blue plane on the right. The left and right blue sections then make sudden 90-degree turns towards one another. The green plane follows his target around the turn, whereupon he is suddenly face-to-face with a plane from the other section, which (rat-a-tat-tat) shoots him down. In a head-to-head shootout, the Wildcat was likely to prevail, since it was more substantial than the flimsy Zero. Afterwards, the two sections resume flying parallel, ready to repeat the maneuver if attacked again. And of course, they don’t just fly along hoping to be attacked; they can make offensive runs at enemy planes as well, as a unified formation. This technique was later dubbed the “Thach Weave”.

Thach faced opposition to his unorthodox tactics from the legendary inertia of the pre-war U.S. military establishment. Finally, he and his trained team submitted to a test: their four-plane formation went into mock combat against another four planes (all Wildcats), but with his planes’ throttles restricted to half power. Normally that would have made them toast, but in fact, with their weaving, they frustrated every attempt of the other planes to line up on them. This demonstration won over many of the actual pilots in the carrier air force, though the brass on the whole did not endorse it.

By some measures, the most pivotal battle in the Pacific was the Battle of Midway in June 1942. The Japanese planned to wipe out the American carrier force by luring it into battle with a huge fleet assembled to invade the American-held island of Midway. Had they succeeded, WWII would have been much harder for the U.S. and its allies to win.

The way that battle unfolded, the U.S. carriers launched their torpedo planes well before their dive bombers. The Japanese probably feared the torpedo planes the most, and so they focused their Zeros on them. Thach and two other Wildcats were effectively the only American fighter protection for the slow, poorly armored torpedo bombers by the time they reached their targets. Using his weave maneuver for the first time in combat, he managed to shoot down three Zeros without getting shot down himself. This vigorous, unexpectedly effective defense by a handful of Wildcats crucially helped divert the Japanese fighters and kept them at low altitudes, just in time for the American dive bombers to arrive and attack unmolested from high altitude.

In the end, four Japanese fleet carriers were sunk by the dive bombers at Midway, at a cost of one U.S. carrier. That victory helped the U.S. hang on in the Pacific until its new carriers started arriving in 1943. Thach’s tactic made a material difference in that battle and was quickly promulgated throughout the rest of the U.S. carrier force. It was not a panacea, of course, since once the enemy knew what you were about to do, they might be able to counter it. However, it did give U.S. fighters a crucial tool for confronting a more agile opponent at a critical time in the war. Thach went on to train other pilots and eventually became an admiral in the U.S. Navy.

Source: Wikipedia

Why public universities should not accept the Trump compact

Universities continue to turn down the “Trump Compact”. The initial nine schools targeted with an “invitation” were from a seemingly curated list of elite institutions, though some are perhaps notably less wealthy or more aspirational than the others. I can’t help but think there was some attempt to create a prisoner’s-dilemma situation, where one more eager or fearful university might start a domino effect by committing first. That has not occurred.

What I do expect at some point in the coming weeks is a broadened offering of the compact to schools across the country. I expect messaging specifically targeting large public universities in states with Republican-controlled legislatures, where political pressure will be leveraged to push schools to sign on to the compact in hopes of currying favor with the administration and its voter base. I expect several schools to sign.

Here’s why I think that would be a grave mistake.

The compact comes with promises of “most-favored” status for applicants to federal grants through agencies such as the NIH, NSF, and Department of Defense. The thing is, they can promise that all they want. They don’t actually have that much influence over the review process. They’ll no doubt work to tip the scales on a few grants and promote them heavily, but the media coverage will vastly outweigh the dollars being shifted by the compact. It will, as always, be theater first and governance last.

But let’s say your school does procure several grants. Perhaps you’re a school that has in the past carried $20 to $30 million in active grants from the NIH and NSF, amounting to roughly $5 million per year in operating expenses. That sounds like a lot, but it’s not. Johns Hopkins University, by comparison, had $843 million in just NIH grants active in 2023. If you’re operating with $5 million a year in grant money, you have an office of sponsored projects, an Institutional Review Board for human-subjects research, and maybe an office for industry sponsorship. That maybe amounts to 15 to 20 personnel. What happens if the Trump administration comes through, putting its thumb on the scale for you, doubling or tripling your active grants within two years?

Chaos. Institutional chaos.

Sponsored research requires capital, personnel, and resource management. It requires legal compliance, doubly so if you’re spending federal money. It requires experienced leadership and management that know how to check boxes, file reports, track money, review protocols, and continuously train ever-churning research personnel.

But hey, that’s the point, you might be saying. We want to be ambitious and grow; we want to hire new and experienced personnel. We want to grow into an important research institution, and this is our big chance! Be careful what you wish for. It’s one thing to grow incrementally over years and decades. It’s a whole other thing to try to do it in reaction to a sudden influx of money. Which, to be clear, isn’t just money. It’s an obligation: an expectation to produce scientific contributions on the US taxpayers’ dime. Obligations come with many things, but patience with incompetence borne of growing pains isn’t one of them.

But none of that is the problem. The real problem. The trap.

The trap is that this money isn’t going to stick around. This regime isn’t permanent. They aren’t invested in any way in scientific public goods, or even in science as a concept. This is, again, theater. They will move on to other things the instant it fails to get the traction they want. They will lose elections, political tides will turn, etc. And what your institution will be left with is the reputation you earned.

And what will that reputation be? One of compliance with an anti-science, anti-public-health, anti-intellectual regime. Further, you will be judged on the fruits of that compliance. At the margin, it will be science that was undersupported, delayed in launch, stalled in execution, and eventually delivered short of expectations. You will have sold your reputation for a ticket on a ride you weren’t tall enough to be on yet. Grants will dry up, returning to previous levels or worse, leaving you with a bloated staff you no longer need, trying to find ways to lay off employees with all the protections of state-government labor regulations.

There is no getting rich quick in academic research. There are only avenues of overreaching impatience, ending in tears.

Are Imports Bad for GDP?

A periodically recurring conversation on social media is whether imports are bad for GDP. Everyone thinks they are clearly right, and then they lazily dismiss the opposing view in passing. Some of this might be due to the media format. Something just a tiny bit more thorough could help resolve the painfully unproductive online interactions… and maybe even improve understanding.

It starts with the GDP expenditure identity:

Y = C + I + G + (X − M)

where Y is GDP, C is consumption, I is investment, G is government spending, X is exports, and M is imports.

The initial assertion is that imports reduce GDP. After all, M enters the equation negatively. So, all else constant, an increase in M reduces Y. It’s plain and simple.

Many economists reply that the equation is an accounting identity, not a theory about how the world works, and that the above logic simply confuses the two. This reply 1) allows those who deploy it to feel smart, 2) doesn’t address the assertion, and 3) doesn’t resolve anything. In fact, it erects a wall of academic distinction that prevents a resolution. What a missed opportunity to perform the literal job of “public intellectual”.

How are Imports Bad/Good/Irrelevant for GDP?

Let’s add a small but important detail to the above equation to distinguish between consumption of goods produced domestically and those produced elsewhere.
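To preview where that distinction tends to lead, here is a minimal numerical sketch in Python. The decomposition C = C_d + C_m (domestically produced vs. imported consumption) and all the numbers below are my own illustration, not data from the post; for simplicity, all imports here are consumed by households.

```python
# Minimal sketch: GDP under the expenditure identity Y = C + I + G + X - M,
# with consumption split into domestically produced (c_d) and imported (c_m)
# goods. Illustrative round numbers, not real data.

def gdp(c_d, c_m, i, g, x):
    """Expenditure identity, where C = c_d + c_m and M = c_m."""
    c = c_d + c_m  # total consumption, imports included
    m = c_m        # imports (all consumed by households in this sketch)
    return c + i + g + x - m

base = gdp(c_d=70, c_m=10, i=20, g=15, x=5)          # Y = 110
more_imports = gdp(c_d=70, c_m=25, i=20, g=15, x=5)  # extra imports raise C and M equally

assert base == more_imports == 110  # measured GDP is unchanged
```

Holding domestic production fixed, an increase in imports raises C and M by the same amount, so subtracting M merely prevents double-counting foreign production inside C; it is not, by itself, a claim that imports destroy output.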

Continue reading

A Better Man / A Better Woman

There are 62 songs called “Better Man” just on Ultimate Guitar (which doesn’t claim to be comprehensive), plus many more slight variations like “A Better Man” or “Better Man Blues”. Some of these are obscure, but many are from well-known artists including Taylor Swift, Oasis, Ellie Goulding, Justin Bieber, and Pearl Jam; one by Robbie Williams inspired a major motion picture also called Better Man.

Meanwhile there is only one song on Ultimate Guitar called “Better Woman”, plus one variation (“A Better Woman”), both from artists I hadn’t heard of (Sera Cahoone and Beccy Cole). Why such an extreme difference?

Is it that men are the ones who are terrible and need improvement? Or are men the ones who see hope for improvement, while women can’t change or don’t want to? Let’s consider what the lyrics have to say about this. Reading through them all, I saw a few recurring categories of “Better Man”:

Wish I Were Better: I count 33 of the 62 songs in this category. A man singing about how he wishes he were better, usually because of a woman, the classic “You Make Me Want to Be a Better Man“. Sometimes this is hopeful that he will be, sometimes regretful that he hasn’t been or despairing that he won’t be. Occasionally the inspiration to be better comes from someone other than a woman he’s in love with, such as Jesus, his dad, or his kids.

You Make Me Better: 13/62. Same idea as the last category, except the man has already become better. Again usually because of a woman, but sometimes because of someone else like God or his kids or his friends. Another 3 are a variation of this, I Got Better, where the man changed without anyone’s help or for a woman who isn’t convinced he really changed.

Wish You Were a Better Man: 4/62, but includes the hit by Taylor Swift. A woman wishes a man she loved were better. Another 2 songs including the Pearl Jam hit are a variant of this, Can’t Find A Better Man, where a woman stays with a bad man because she doesn’t see a better choice. Steven Seagal (yes, that Steven Seagal) reverses things and sings that a woman should leave him because she can do better. Then there’s 1 example of the genre where Hellyeah wishes his father were a better man.

One-offs: There are a few one-off “Better Man” songs that seem to be in a category of their own: Beth Hart’s celebration of finding a better man, Ellie Goulding’s odd insistence that “I’m the better man” (even though she’s a woman), and Ryan Innes’ entry, which is the closest anyone comes to saying they wish they were a worse man. By the way, there appear to be zero songs out there called “Worse Man”; perhaps some day I’ll write one, but it’s a free idea and I’d be happy to see one of you beat me to it.

What about our 2 “Better Women”? Sera Cahoone’s song (the only one with the exact title “Better Woman”) is a standard “Wish I Were Better” entry, just as a woman (though the person she wants to be better for might still be a woman as usual):

So I step on up and be a better woman in your eyes
From now on I’m gonna love everything about you

Beccy Cole’s “A Better Woman” concludes that she doesn’t actually want or need to become a better woman:

I ain’t changin’ nothin’
Just to have your lovin’
Yeah, I’m alright with who I am
I don’t need to be a better woman – I just need a better man

The boring explanation for the gender discrepancy is that “Better Man” just scans better rhythmically. But I don’t think that can explain a 60-2 (or 60-1 if we’re being strict) difference, and there seems to be a big underlying difference in the prevalence of these themes for men and women, not just in titles. This matches up with the classic saying from Camille Paglia:

A woman simply is, but a man must become

Or this one often attributed (probably incorrectly) to Einstein:

Women marry hoping that the man will change. Men marry hoping the woman will stay the same. Both are usually disappointed.

Whatever the cause, you can find the playlist I made of all 60 “Better Man” songs I could find on YouTube Music here:

I liked most of them (surprisingly given the range of genres and the fact that I hadn’t heard of most of the artists), but my favorite in this vein is to forget being a Better Man or Better Woman, and instead be “A Better Son/Daughter” like Rilo Kiley says:

The Toyota Camry is Much More Affordable Than 30 Years Ago

The following chart from Arbor Research shows that the average age of cars on the road in the US is 14.5 years. If we go back to 1995, it was almost half that, and the increase has been steady over the past 30 years. Similar data from the Bureau of Transportation Statistics confirms these numbers.

Why would this be? I see two possible primary explanations. One is that cars are becoming more reliable (better quality), so consumers are happy to drive them longer. The other is that cars today are less affordable, so people hang onto old cars because they are forced to. One of these is a happy explanation; the other is consistent with a narrative of stagnation. Which is true?

I am not a car expert, so I can’t speak to the first, though I will note that there are Facebook groups dedicated to people that have cars with hundreds of thousands of miles on their odometers.

On the affordability question, we do have some good data, but it points in the opposite direction: cars are much more affordable today than in 1995, or even before that.
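One simple way to operationalize “affordability” is the number of years of median household income a new Camry costs. The figures in this sketch are round, illustrative assumptions of mine (approximate MSRPs and median-income levels), not the data behind this post:

```python
# Rough affordability sketch: a new Toyota Camry priced in years of median
# US household income. All numbers are illustrative assumptions.

def years_of_income(price, median_income):
    """Vehicle price expressed as a fraction of one year's income."""
    return price / median_income

camry_1995 = years_of_income(price=17_000, median_income=34_000)  # ~0.50
camry_2025 = years_of_income(price=29_000, median_income=80_000)  # ~0.36

print(f"1995: {camry_1995:.2f} years of income")
print(f"2025: {camry_2025:.2f} years of income")
```

Under these assumed figures, the fraction of a year’s income needed falls over time, which is the direction the affordability data point in.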

Continue reading

Circular AI Deals Reminiscent of Disastrous Dot.Com Vendor Financing of the 1990s

Hey look, I just found a way to get infinite free electric power:

This sort of extension-cord-plugged-into-itself meme has shown up recently on the web to characterize a spate of circular financing deals in the AI space, largely involving OpenAI (parent of ChatGPT). Here is a graphic from Bloomberg which summarizes some of these activities:

Nvidia, which makes LOTS of money selling near-monopoly, in-demand GPU chips, has made investing commitments in customers, or in customers of their customers. Notably, Nvidia will invest up to $100 billion in OpenAI, in order to help OpenAI increase its compute power. OpenAI in turn inked a $300 billion deal with Oracle for building more data centers filled with Nvidia chips. Such deals will certainly boost the sales of Nvidia’s chips (and make Nvidia even more money), but they also raise a number of concerns.

First, they make it seem like there is more demand for AI than there actually is. Short seller Jim Chanos recently asked, “[Don’t] you think it’s a bit odd that when the narrative is ‘demand for compute is infinite’, the sellers keep subsidizing the buyers?” To some extent, all this churn is just Nvidia recycling its own money, as opposed to new value being created.

Second, analysts point to the destabilizing effect of these sorts of “vendor financing” arrangements. Towards the end of the great dot.com boom in the late 1990s, hardware vendors like Cisco were making gobs of money selling networking equipment to internet service providers (ISPs). In order to help the ISPs build out even faster (and purchase even more Cisco hardware), Cisco loaned money to the ISPs. But when that boom busted, and the huge overbuild in internet capacity became (to everyone’s horror) apparent, the ISPs could not pay back those loans. QQQ lost 70% of its value. Twenty-five years later, Cisco’s stock price has never recovered its 2000 high.

Besides taking in cash investments, OpenAI is borrowing heavily to buy its compute capacity. OpenAI makes no money now (in fact, it loses billions a year), will likely not make any money for several more years (like other AI ventures), and is locked in competition with other deep-pocketed AI ventures; there is the possibility that it could pull down the whole house of cards, as happened in 2000. Bernstein analyst Stacy Rasgon recently wrote, “[OpenAI CEO Sam Altman] has the power to crash the global economy for a decade or take us all to the promised land, and right now we don’t know which is in the cards.”

For the moment, nothing seems set to stop the tidal wave of spending on AI capabilities. Big tech is flush with cash, and is plowing it into data centers and program development. Everyone is starry-eyed with the enormous potential of AI to change, well, EVERYTHING (shades of 1999).

The financial incentives are gigantic. Big tech got big by establishing quasi-monopolies on services that consumers and businesses consider must-haves. (It is the quasi-monopoly aspect that enables the high profit margins).  And it is essential to establish dominance early on. Anyone can develop a word processor or spreadsheet that does what Word or Excel do, or a search engine that does what Google does, but Microsoft and Google got there first, and preferences are sticky. So, the big guys are spending wildly, as they salivate at the prospect of having the One AI to Rule Them All.

Even apart from achieving some new monopoly, the trillions of dollars spent on data center buildout are hoped to pay out one way or the other: “The data-center boom would become the foundation of the next tech cycle, letting Amazon, Microsoft, Google, and others rent out intelligence the way they rent cloud storage now. AI agents and custom models could form the basis of steady, high-margin subscription products.”

However, if in 2-3 years it turns out that actual monetization of AI continues to be elusive, as seems quite possible, there could be a Wile E. Coyote moment in the markets:

MapGDP to teach economic growth

Economist Craig Paulsson has made a simple game free to all.

When you go to MapGDP.com you will find a real picture from Google Maps and a simple question. Guess the GDP/capita in the country where this picture was taken.

Watch his YouTube introduction

See Craig’s announcement about the game on his Substack

Many economics teachers will at some point visit the topic of “what is GDP” or “economic growth.” This web game is great for both topics. I put the website on my classroom projector and called on students to take the guess. We then could do the reveal together. I rate this high value for low effort from a teacher’s perspective.  No login or account creation required.

If you are an EWED reader and not an econ teacher, you might have fun playing the game yourself. Almost as satisfying as Wordle…

All of the Prices

Today I’m just sharing a truly awe-inspiring resource. The University of Missouri has what is essentially a central clearinghouse for prices and wages. If you want the price of anything, then they should be your first stop.

See the screenshot at the bottom. The website links to the original sources for household consumption prices, occupation wages, etc. They make it easy to cut the data by date, industry, location, etc. Because they cite their sources, you can see some data series that are not even available on FRED – without having to perform the painful sleuthing on a government website.

I especially like this site for its historical data. One of the challenges of historical US data is that individual cities may not have prices that are representative of the national levels or trends. Lower levels of market integration make representative samples even more important than in modern data. But really, that was more of a concern for 20th century researchers. Now, we love our panel data. So, the historically less integrated markets of the US provide ‘toy economies’ that include greater regionalism and local shocks.

Although David Jacks has loads of tabulated data, he doesn’t have it all. The Missouri library site links to PDFs of original statistical publications which, while digitized, have never been tabulated into usable data fit for modern researchers.

Go have a look around. You won’t regret it.

https://libraryguides.missouri.edu/pricesandwages/1870-1879

Triumph of the Data Hoarders 2: The Institutions

Datasets can be pulled offline for all sorts of reasons. As I wrote in February, this shows the value of being a data hoarder – just downloading now any data you think you might want later:

Several major datasets produced by the federal government went offline this week…. This serves as a reminder of the value of redundancy- keeping datasets on multiple sites as well as in local storage. Because you never really know when one site will go down- whether due to ideological changes, mistakes, natural disasters, or key personnel moving on.
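In that spirit, here is a minimal sketch of what “just downloading now” can look like in Python: fetch a dataset URL and store it under a date-stamped name, so snapshots accumulate rather than overwrite each other. The URL handling and directory layout are my own illustration, not a tool the post mentions.

```python
# Minimal data-hoarding sketch: mirror a remote dataset locally under a
# date-stamped filename so older snapshots are never overwritten.
import datetime
import pathlib
import urllib.request

def snapshot_name(url: str, when: datetime.date) -> str:
    """Build a local filename like '2025-01-15_data.csv' from a URL."""
    base = url.rstrip("/").rsplit("/", 1)[-1] or "dataset"
    return f"{when.isoformat()}_{base}"

def mirror(url: str, dest_dir: str = "hoard") -> pathlib.Path:
    """Download url into dest_dir under today's date-stamped name."""
    out_dir = pathlib.Path(dest_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    out = out_dir / snapshot_name(url, datetime.date.today())
    urllib.request.urlretrieve(url, out)  # network fetch
    return out
```

Running `mirror(...)` on a schedule (cron, Task Scheduler) gives you exactly the local redundancy described above: even if the source goes dark, yesterday’s snapshot is still on your disk.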

The US Federal government shutdown this month provides another reminder of this. So far most datasets are still up, but I’ve seen some availability issues:

The good news is that a number of institutions have stepped up in 2025 to host at-risk datasets (joining those like IPUMS, NBER, and Archive.org that have been hosting datasets for many years, but are scaling up to meet the moment):

  • Restore CDC hosts all CDC data as it was in January 2025.
  • The Data Rescue Project provides tools and suggestions for how other institutions can save data at scale, plus links to other projects.