Was “World War II” Just a Myth?

May 5, 2415

[To:] Mark Livingstone,

25 The Standards,

Verneville, Alassippi

Dear Mark:
In your last letter you made one palpable hit, but only one: I admit that the atomic wars of the Twenty-first Century and the cataclysms of the Twenty-second Century destroyed so much of our cultural inheritance, including nearly all our Nineteenth and Twentieth Century history, that there is very little we can turn to of those times that is authentic. Apparently that is the only point we will be able to agree on.

I cannot possibly believe, for instance, as you do, that there ever did exist an Abraham Lincoln as so glowingly portrayed by our two or three surviving “history” digests; nor can I believe there ever was a World War II, at least such as they described. Wars, yes – there have always been wars, and a World War II may have occurred – but certainly not with such incredible concomitants.

In short, your history is much too fictional for me.

Continue reading

Active empathy makes for better research

There are skills necessary for good research and policy design, but not all of them can be taught. One of the skills I advocate that my students develop (though, to be honest, I'm not sure I'm all that convincing) is active empathy, i.e. willfully trying to place yourself in the context that is driving the model underlying your research question and imagining how you would behave. This is, perhaps, more work than it sounds.

Trying to imagine how you would behave in a given decision context requires not just imagining how you would make the best possible decision, but what you might actually do. This means imagining your own hypothetical state of mind in the model event context. How tired you might be, how frustrated or bored or scared. How invested you are cognitively or how distracted from the entire enterprise. Would you even be conscious of the decision in the moment you were making it, or would you only realize it upon later reflection?

What would your resource constraints be and what would it feel like to live under those constraints? What sort of rewards or punishments are you considering? This is where it pays to be honest with both your current and hypothetical selves. If you’re a car salesman, are you more excited about making the most money or being the best salesperson in the lot? If you’re a cop, are you more excited about making a big arrest or making it through the day with the minimum of interactions? Do you care more about your boss liking you or your fellow street officers?

This also, more often than not, means imagining you are a completely different person. This is where it is strongly advisable to practice not just active empathy, but active humility. I like to think I am pretty good at putting myself in other people's shoes, but I also know I will never be able to fully empathize with the experience of being a woman in an abusive domestic context with two young children during a global pandemic. What I can do, however, is start by actively empathizing with the elements of that context that are accessible to me and my life experience, and then do my best to add into the exercise the different constraints, outside options, and resources available that might change the decisions made. I can enrich the mental model I am building by trying to appreciate what it means to make decisions, in any context, under the duress of physical fear and heightened uncertainty, while all the while acknowledging my exercise is inherently limited by my own experience.

Having invested real time and energy in this exercise, you'll be in a better place to guide your research and policy design, not just because you're thinking about the problem from the ground level, but because you've forced yourself to acknowledge where your blind spots are, and can do your best to address them. First person narrative accounts ("anecdotes") don't usually make for great data, but they are a great way to let someone else's experience partially (but never fully) fill in your gaps. To be clear, I don't view this as an alternative to standard rational choice frameworks of analysis. Quite the contrary, I think it is exactly when the choices being made by others seem entirely irrational that it is most advisable to step back and try to actively empathize with the decision maker – to try to see the choices, constraints, and other players in the game as they actually see them. It's amazing what can quickly become completely rational once you consider the resource constraints, especially the information constraints, people are operating within.

If it sounds like I’m trying to convince economists everywhere of the merits of Method Acting, don’t worry, I’m not.

No, scratch that. That’s exactly what I’m doing. Just keep your rehearsals to yourself.

Hyperinflationary Efficiency?

I'm advising a senior thesis for a student who is examining the strength of Purchasing Power Parity in hyperinflationary countries. Beautifully, the results are consistent with those of another author* who uses a more sophisticated method.

For those who don't know, absolute purchasing power parity (PPP) depends on arbitrage among traders to cause a unit of currency to have the same ability to acquire goods in two different countries. If, after converting your currency, you can afford more stuff in a foreign country, then there is a profit opportunity to purchase goods there and even to re-sell them in your home country.

Essentially, when you make that decision, you are reducing demand for the good in your home country and increasing demand in the foreign country (re-selling affects the domestic supply too). Eventually, the changes in demand cause the prices to converge and the arbitrage opportunities disappear. At this point the two currencies are said to have purchasing power parity – it doesn’t matter where you purchase the good.

So does PPP hold? One way that economists measure the strength of PPP is by measuring the time that it takes for a typical purchasing power difference to be arbitraged away by 50% – its ‘half-life’.  The more time that is required, the less efficient the markets are said to be.
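
To make that concrete, here is a minimal sketch of how a half-life is typically computed: fit an AR(1) to (log) deviations from PPP and solve rho^h = 0.5 for h. The data below are simulated, and this OLS shortcut is the textbook version of the exercise, not necessarily the more sophisticated method used in the paper my student's results line up with.

```python
import numpy as np

# Simulated monthly (log) deviations from PPP: q_t = rho * q_{t-1} + e_t.
# A rho closer to 1 means deviations die out more slowly, i.e. weaker PPP.
rng = np.random.default_rng(0)
rho_true, T = 0.95, 240          # 20 years of monthly data (made-up values)
q = np.zeros(T)
for t in range(1, T):
    q[t] = rho_true * q[t - 1] + rng.normal(scale=0.02)

# Estimate the AR(1) coefficient by OLS: regress q_t on q_{t-1} (no constant).
y, x = q[1:], q[:-1]
rho_hat = (x @ y) / (x @ x)

# Half-life: periods until a typical deviation shrinks to 50% of its size,
# i.e. rho^h = 0.5  =>  h = ln(0.5) / ln(rho).
half_life = np.log(0.5) / np.log(rho_hat)
print(f"estimated rho = {rho_hat:.3f}, half-life ~ {half_life:.1f} months")
```

With persistence like 0.95 in monthly data the half-life comes out around a year; for low-inflation countries the empirical literature often cites half-lives in the three-to-five-year range, which is exactly what makes the hyperinflation comparison interesting.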

The ex-ante question is: Is PPP stronger or weaker during hyperinflationary periods?

Continue reading

I could do better

My favorite soccer team has been badly coached for 2 years and I am regularly convinced I could do better.

These are not the thoughts of a rational man and it's causing me no small amount of consternation, bordering on intellectual crisis. Which is, of course, a lie, but adding a touch of intellectual melodrama never hurts when you're trying your damnedest to write something new every week.

It is a puzzle, to be sure. There have been two coaches in the last two years, the second having only been there a week. The first was experienced, accomplished, and internationally famous. I’m quite confident he was wrong in the majority of decisions he made, but I at least had a model for why he was so often wrong.

When an ostensible expert appears to be failing at their job far worse than a hypothetically cheaper replacement, I always look for the rational reason why someone might be choosing to fail. In this case, we were observing an individual who could achieve mediocrity without effort. His past accomplishments gave him credibility with the players and his stock of knowledge as of 2011 was sufficient to carry him to large paychecks. To achieve mediocrity required near minimal effort. Could he update his tactics, both within the structure of the game and his management of personnel? Of course. But doing so would require enormous amounts of effort. His salary had peaked, his future managerial prospects dimmed by age and recent results, and as such the returns to effort were dwarfed by the returns to leisure. Allow me to enter ego into the calculus. What sounds more cognitively costly: acquiescing to the reality that your human capital has been rendered obsolete and rebuilding your modus operandi from scratch with the full knowledge that you may spend your wealth-laden golden years failing in public? Or denying it fully, shifting all blame for failure onto the personnel, and bemoaning that it is not your human capital that is obsolete, but rather that the labor pool available to you is fundamentally flawed? To me it's a no-brainer, and it's why I am fully of the belief that there actually are bloggers in their mom's basement who could have better managed a team.

The new manager is a temp. He’s never managed a team before. Then again, neither have I. He has, however, played professional soccer at the highest level. He has been placed on the management training track by a world-class organization. He has none of the maladapted human capital or rational-addiction-adjacent reasons to fail at his job. He has all of the local and tacit knowledge from being on the training pitch and in the locker room that I don’t.

I’m still confident I could have done a better job than he did today. Why is that?

I can construct a model to rationalize my beliefs, but that model gets awfully "just-so" very quickly. It relies on assumptions I can't justify and broad generalizations that, if evenly applied, would hurt my case as the superior choice even more than they hurt the current job holder's. Of course, I can invent a narrative where I am the superior sports team manager, but that narrative would have to rewrite my entire personal history going back so far as to render me a completely different human, and one who no doubt would have just as many (and possibly the same) blind spots.

I guess what I'm saying is that I know I shouldn't be the manager. Every rational bone in my body knows that is a silly idea and I would fail miserably. But I think there is a case to be made that sometimes we can look at the person making decisions for our favorite team, look at their track record, and confidently say "They would be making better decisions if they talked it over with me." When the armchair quarterback says "the coach is an idiot" they're not saying they want to be the coach. They're saying they want to be in the room. They want a voice because they think they could contribute.

Someone tell Tottenham Hotspur that I’m available. I’m not free, but I can be had.

Singing IPUMS Praises

This is a late post, but I just want to sing the praises of IPUMS.

I first encountered IPUMS data in Sacerdote's paper on intergenerational human capital transfers, in which he showed literacy rates by birth cohort throughout the 19th century (figure 4 is downright beautiful). I've since dug in myself on school attendance and human capital.

In the papers that students write in our econ elective classes, it's not unusual for them to contain FRED data. Given that we don't teach time series, the papers are usually empirically weak. But this semester in my Western Economic History course, I've encouraged students to utilize IPUMS. There are four students using it whose ideas I will surely publicize in the future:

  • Historical patterns of deaf employment, education, human capital, & income
  • The economic impact of the Brooklyn bridge
  • The composition of US interstate migrants relative to their host state
  • Patterns of compulsory schooling

IPUMS is so darn rich. I strongly recommend it if you haven’t yet taken advantage of it.
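
If you haven't worked with it before, here is a minimal sketch of what a first pass at an IPUMS USA extract can look like: tabulating school attendance by birth cohort and sex. The file name is hypothetical, and the variable mnemonics (YEAR, SEX, AGE, SCHOOL) and their codes are my recollection of the IPUMS USA codebook, so verify them against the codebook that ships with your own extract.

```python
import pandas as pd

# Hypothetical CSV extract downloaded from IPUMS USA; variable names and
# codes below follow my recollection of the codebook -- check your own.
df = pd.read_csv("usa_extract.csv")

# Keep school-age children and derive a birth cohort from census year and age.
kids = df[(df["AGE"] >= 6) & (df["AGE"] <= 16)].copy()
kids["BIRTHYR"] = kids["YEAR"] - kids["AGE"]

# SCHOOL == 2 is "yes, in school" in the extracts I have used (verify).
kids["attends"] = (kids["SCHOOL"] == 2).astype(int)

# Attendance rate by birth cohort and sex (SEX: 1 = male, 2 = female).
attendance = kids.groupby(["BIRTHYR", "SEX"])["attends"].mean().unstack("SEX")
print(attendance.tail())
```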

Stoned Age Cave Paintings

It has long been argued that many of the artists drawing on cave walls were not merely trying to draw the external world as accurately as possible. Rather, cave art was:

A deliberate mix of rituals inducing altered states for participants, coupled with brain chemistry that elicits certain visual patterns for humanity’s early chroniclers.

The cave painters had rituals that involved taking drugs (undoubtedly plants) that they consumed in a frenzy to get to this creative state. Similar behavior, with the same results, was noted by 1960s-era academics studying the effects of peyote, a hallucinogenic cactus found in North America.

Some of the drawings illustrate these patterns.

There seem to be a number of geometric patterns like honeycombs, tunnels and funnels, cobwebs, and spirals which show up repeatedly across different continents. This has fueled speculation that those prehistorics were tripping out on veggies like peyote and magic mushrooms. In his "Stoned Ape" theory, the late Terence McKenna proposed that consumption of shrooms gave the earliest humans higher energy and group cohesion and helped humanity to evolve the use of language.

A more recent study by Tel Aviv University researchers suggests that another way that Stone Age artists got into an altered state was plain oxygen deprivation. Many sites of cave art, particularly in France and Spain, are at the end of long, narrow passages. If a couple of guys got into one of those rooms, with a blazing torch or two, the oxygen level would soon be significantly depleted:

They found that oxygen concentration depended on the height of the passageways, with the shorter passageways having less oxygen. In most of the simulations, oxygen concentrations dropped from the natural atmosphere level of 21% to 18% after being inside the caves for only about 15 minutes. 

Such low levels of oxygen can induce hypoxia in the body, a condition that can cause headache, shortness of breath, confusion and restlessness; but hypoxia also increases the hormone dopamine in the brain, which can sometimes lead to hallucinations and out-of-body experiences, according to the study.

Drawings like those from the Altamira cave are pretty impressive under those circumstances.

Don’t arbitrage time with friends

As you may have already heard, the US suicide rate dropped 6% last year. During a pandemic. During a lockdown. During a time when rates of depression have reportedly increased. This is all quite surprising to many people, myself included. I don’t have a convincing explanation, only a single relevant thought.

I think we’ve rediscovered regular long-distance communication with people that have drifted out of our lives and many of us are better for it. I know I am.

While I think that loneliness and isolation are a major force behind a lot of social ills, I also know that the "loneliness epidemic" was always a poorly constructed metaphor at best, and possibly only weakly observed at worst. But I also suspect that loneliness and isolation are phenomena in the tails of the distribution. Isolation doesn't happen to people with average or even below-average social networks. It happens to people entirely without them, whose strongest connections with other humans have dissolved. Such things do not always reveal themselves statistically, at least not without looking really hard.

We have observed the emergence, and now dominance, of asynchronous communication. We text, email, tweet. We post on Instagram, Facebook, Snapchat, or TikTok. These are all means of communication, but (with the possible exception of texting) these forms of communication exist outside of real time. They don't command chunks of contiguous time – they arbitrage the fractions of time that previously existed between activities and went uncommitted to a narrowly defined task.

I don't know what causes loneliness, but I do know that it's much easier to not feel lonely when you are spending fully committed time, and not arbitraged fractions, communicating with another human being. If you're under the age of 40, it's almost socially illegal to voice call a friend to talk. For many it would be viewed as an act of emotional aggression, an imposition of social need, if not anxiety, on another. The irony of this millennial norm I've unfairly placed on them through nothing but my own unreliable observations is that it strikes me as an accelerated path to friendless boomer sad-dad suburban isolation.

The pandemic hit and many of us had to start Zooming in to work. And we had to explain Zoom and Google Meet to our parents so we could talk to them. But I think a lot of us started catching up with old school buddies, too. Folks you sent Christmas cards to or caught up with at a cookout the Sunday after Thanksgiving. It became completely normal to schedule a call in advance – to put it on a calendar and reserve that time. And I think a lot of people who moved for work or relationships, who after 10 years changed to a new office where they didn't know anyone, who maybe had simply fallen out of step with friends after the first four years of trying to keep triplets fed – I think a lot of people really enjoyed the pandemic-driven need for reserving time for contiguous social interaction in a manner entirely unconstrained by geography. And maybe they ended up feeling less lonely for it.

Keep Zooming your friends far away. Keep putting it on the calendar. Do it forever.

The Tall and Short of Student Experience

Every semester in my intro STAT course I have my students create a variety of survey questions. After I combine their questions into a single survey, they collect responses from the student body at Ave Maria University. Most of the questions are vanilla. Others are not. They typically get in excess of 100 responses from the ~1,100-person student body.

While exploring the data, I found a really beautiful example for the week that we spend on multiple regression and dummy variables.  The survey results illustrate a clear, linear association between student height (inches) and their student experience at AMU (scored 1-10).

So strange! Why might this be? Except for that solitary 7 ft+ student on the basketball team, how in the world might height matter for student experience?

As it turns out a separate relationship holds the key.

As confirmed by a simple unpaired t-test (unequal variances), women rank their student experience much more highly. For this, students have multiple explanations at the ready.

  • Our school is in a rural location and women are more socially satisfied.
  • Men are less happy generally.
  • Men are less studious or have lower grades.
  • Men get less sleep and stay up later.

The list goes on and I don't know what the reasoning is or which ones actually play a role. But what I do know is how to make fun scatterplots in Stata. As it turns out, if you control for sex, height loses all of its effect on student experience. Men are taller on average and they aren't happy students relative to women (apparently). We can see in the figure below that all of the action in the two fitted lines occurs in the intercept. The slopes are practically flat for both men and women. In other words, height neither adds nor subtracts from a student's experience rating.

What's going on is that neither men's nor women's experience is affected by being taller. But what's actually going on here – you know – statistically? The simple version is that the bar chart above dominates the scatter plot. If we subtract the mean male experience score from the male values and do the same for the females, then we're left with what is practically white noise. How do all those other students of a different height experience the world? Well, as students, not so differently from you.
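
I can't share the survey data, but a minimal simulation sketch (made-up sample size and coefficients) reproduces the same pattern: sex drives both height and the experience score, so height looks like it matters until the sex dummy enters the regression.

```python
import numpy as np
import scipy.stats as stats
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120

# Simulate: sex affects both height and experience; height has no direct effect.
female = rng.integers(0, 2, size=n)                 # 1 = female, 0 = male
height = np.where(female == 1,
                  rng.normal(64.5, 2.5, n),         # inches, women shorter on average
                  rng.normal(69.5, 3.0, n))
experience = 6.0 + 1.5 * female + rng.normal(0, 1.5, n)   # 1-10 scale, sex effect only

# Welch (unequal variances) t-test: women vs. men experience scores.
t, p = stats.ttest_ind(experience[female == 1], experience[female == 0], equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")

# Regression 1: experience on height alone -- height looks 'significant'
# only because it proxies for sex.
m1 = sm.OLS(experience, sm.add_constant(height)).fit()
print(m1.params, m1.pvalues)

# Regression 2: add the female dummy -- the height slope collapses toward zero
# and all the action moves to the intercept shift.
m2 = sm.OLS(experience, sm.add_constant(np.column_stack([height, female]))).fit()
print(m2.params, m2.pvalues)
```

Demeaning the scores within sex, as described above, tells the same story: the residual relationship between height and experience is essentially white noise.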

The Problem is the Science

The University has been the engine of basic science in the US and abroad for a long time. Any hand-wringing in recent years over its imminent obsolescence was born of advances in remote learning and newfound capacities to exponentially scale single instructors to reach tens, if not hundreds, of thousands of students across the globe. How, in this brave-ish new world, would matriculant tuition accruing to a handful of instructional specialists/celebrities continue to subsidize the scientific mission?

If the arrival of YouTube and Khan Academy gave credence to the academic apocalypse theory, then the coronavirus pandemic and the global adoption of Zoom instruction would surely make a reality of it. I will admit, for the first time in my career, I’m seeing the cracks in the edifice of the academy. And, yes, it was the pandemic that made them more prominent to me.

But it's not on the educational side of our dual mission. It's the science.

Dr. Katalin Kariko is very likely to win a Nobel prize for her immense contributions to our understanding of messenger RNA (mRNA) and how it can be manipulated to create an entirely new class of vaccines that, it is not hyperbole to say, stand to offer a global shift in health. The prospect is there for not just an HIV vaccine, or a broad-spectrum influenza vaccine, or a malaria vaccine, but the broad mitigation of viruses as a burden on humanity.

Dr. Kariko has been pursuing her scientific mission with a single-mindedness that jumps off the page in everything that has been written about her. What also jumps off the page, at least to those of us who have been trying to make a career in academic research, is the university system that has worked diligently for decades to push Dr. Kariko, and her scientific mission, out of the academy. At every stage of the hiring, retention, and grant application process, Dr. Kariko's research has been bludgeoned not so much with criticism or doubt as with what seems more like horrifying indifference. Grant reviewers saw little value, her colleagues noted that she lacked finesse in writing grant applications, and the academic institutions that employed her saw little value in employing someone, even for less than $60k a year in salary, who was unable to consistently bring in large grants (sidenote: her husband often estimated her effective wage to be roughly a dollar an hour: from the university's point of view, it wasn't the expense she represented on the balance sheet, it was the opportunity cost of the grants she wasn't winning that someone else in her slot would).

This is a problem.

To be clear, this indifference is far more damning than any sort of broad disagreement would have been. The nature of science is such that most advances are incremental, but every now and then there are the rare revolutionary upheavals, where something we thought we absolutely knew for sure turns out to be completely wrong. That scientific mavericks who push such theories, most of which are completely wrong, meet resistance is natural (and probably optimal). But indifference is a problem, because indifference does more to reveal the underlying incentives propelling researchers. Universities were indifferent to her research because it wasn't generating grant money, and that is the job she was hired to do.

Patents are great. Prestigious awards are welcome. Published papers are not entirely a waste of your time. But make no mistake, if you don't successfully apply for grants, your days in academic science are numbered. I spent three years as an oddly appointed economist in arguably the greatest medical school of the last decade. I got to hang around brilliant physicians who spent a lot of their time every week actually (not figuratively or indirectly) saving lives. I also witnessed dedicated researchers break down into tears upon receiving the news that their grant application had been denied, which meant their contract with the university would not be renewed and their research career effectively terminated. I saw how little grant application aptitude correlated with talent or passion. I saw people thrive in the system while others failed, with little in the way of scientific aptitude to distinguish them.

The most practical advice I was privy to was this: work in someone's lab, pursue your project in parallel with their resources. Once you have an advance that would be worthy of a grant application, write up the application for a project you've already completed. List your previous PI as a collaborator, promise exactly the results you already have, describe your budget, schedule, and proposed outputs in shocking detail, and then radically oversell the importance of the discovery. Once you win the grant, use that money to pursue your next project while writing up the outcome of your previous one. Once you have results, apply for yet another retrospective funding grant, and continue to daisy chain that until you win a massive grant, a coveted NIH R01 perhaps, within which you can bundle a series of projects, hiring as many post-docs and early researchers as you can. You will then manage this team, who will execute your research while hopefully starting their own retrospective grant application daisy chains. Is this a common strategy? I don't know – it seems odd that the dates of human subjects testing could be obscured. But the point was made to me – this isn't about science, this is a career life-or-death game where only 20% of applicants are funded.

To be honest, I don’t care that people are gaming funding institutions. And, to be clear, “playing the game” is part of any career, no matter how idealistic you want to be. Academic research science is in deep, deep trouble, however, if grant application gamesmanship dominates scientific ingenuity in the talent acquisition and retention strategies of major universities. It means we’re no longer scientists, we’re rent-seekers. We’re the person in the village best at memorizing Mao’s Little Red Book: smart, talented, but in the end wasted. Or, much worse, we’re just poseurs.

Piecing together what I've read in articles and her Wikipedia entry, after Penn demoted her to adjunct status, Dr. Kariko found a home at BioNTech in 2013, where they and other biotech firms saw tremendous value in her work, yada yada yada, her research with Drew Weissman saved millions of lives going forward and maybe just the whole damn world.

Two takeaways:

  1. If Penn, after demoting her to being an adjunct, tries to claim her and her work as their own, we riot.
  2. What is the marginal value of university research if all we’re producing is grant applications?

Part of the blame, of course, has to be placed at the door of the NIH and NSF grant application review process. But how much longer are they going to matter either?

  1. 2021 NSF Budget: $8.5 Billion
  2. 2021 NIH Budget: $43 Billion
  3. Tesla Market Cap: $650 Billion
  4. Elon Musk net worth: $167 Billion

The whole point of the NSF, NIH, and the academic research project is the production of the public good that is basic science. Absent private profit incentives, they should be able to pursue the big-picture projects that are too broad in application for private companies and the high-risk, high-reward projects that are too risky even for venture capital.

The advantages of government agencies, however, are limited if they are overwhelmingly surpassed in scale by private market science. Even if 99% of firms can't overcome the public goods problem, the 1% of firms that stand to profit from advancing basic science (ironically, what public economics would refer to as "privileged groups") have the scale to execute such ambitions. More importantly, however, they may also have better incentives. Yes, they are greedily trying to make a profit off of their innovations, but at least the innovation remains their goal.

I'm not worried about the value of university professors as educators. It turns out that education doesn't scale as well as we thought, and that there is tremendous value to being in a room together when you're trying to pass on explicit, complex, and tacit knowledge. Nor am I worried in the slightest about capital-S Science. There is a bright future for any and every institution producing science, even the most basic, broadest science that no private company or patent strategy could ever exclude others from benefiting from. But, I'm afraid, there is no future for the production of grant applications or the institutions that pursue them at the expense of brilliant minds trying to solve our most important puzzles.

Compulsory Schooling by Sex

My previous posts focused on the aggregate school attendance and literacy rates for whites before and after state compulsory schooling laws were enacted. When aggregates fail to deviate from trend after a law is passed, the natural next step is to examine the subgroups.

How did attendance rates differ by sex before and after compulsory school attendance laws? I'll illustrate a plausible story. Prior to law enactments, boys attended more school because girls were needed to perform domestic duties and the expectations for female education were lower. As a result, boys had higher literacy rates due to higher school attendance. After law enactments, both girls and boys attended school more and the difference between their attendance rates is eliminated. Similarly, literacy rates converge and differences are eliminated. In short, the story is consistent with an oppressed – or at least disadvantaged – position for girls that was corrected by compulsory schooling.

Formally, the hypotheses are:

Continue reading