Now published: Human Capital of the US Deaf Population, 1850-1910

A student coauthor and I worked hard on our article, which is now published in Social Science History. It’s the first modern statistical analysis of the historical deaf population. We bring an economic lens and statistical treatment to a topic that previously relied on anecdotal evidence and case studies. We hope that future authors will build on our work and surpass the quantitative methods that we employed.

Our contributions include:

  • A human capital model of deafness that’s agnostic about its productivity implications and treats deaf individuals as if they made decisions rationally.
  • A better understanding of school attendance rates and the ages at which deaf children attended.
  • Evidence that, earlier in US history, deaf children were much more likely to be neither in school nor employed.
  • The negative impact of state ‘school for the deaf’ availability on subsequent economic outcomes among deaf adults. We speculate that deaf people attended these schools for the social benefits of access to a community.
  • Evidence that deaf workers did not avoid occupations where their deafness would be incidentally detectable by trade partners, implying that animus discrimination was not systemically important for economic outcomes.
Continue reading

Messy Disability Records in the Historical Censuses

The historical US Census rolls of disability among free persons are a mess. First, for the 1850-1870 censuses, the Census Bureau was not professionalized and the pay was low (a permanent office wasn’t founded until 1902). The enumerators were temporary employees, not experts at their craft, and their handwriting wasn’t always crystal clear. Second, training for disability enumeration was even less complete, so enumerators did their best with the people they encountered and their own reading of the instructions. Finally, the digitized data in IPUMS doesn’t perfectly match the census reports. What a mess.

Guilty by Association

Disabled people and their families often misreported their status out of embarrassment or shame. Given that enumerators had quotas to fill, they were generally not inclined to investigate claimed statuses strenuously. Furthermore, disabled people were humans and not angels; sometimes they themselves didn’t want to be associated with other types of disabled people. In particular, the disability item, question 13 on the 1850 census questionnaire, asked “Whether deaf and dumb, blind, insane, idiotic, pauper or convict”. Saying “yes” might put you in company that you’d prefer not to keep.

Summer censuses also sometimes missed deaf students who were traveling to or from a residential school.

Enumerator Discretion

The enumerator’s job was to write the disability that applied. What counts as deaf and dumb? That was largely at the enumerator’s discretion. Some enumerators wrote ‘deaf’ even though that wasn’t an option. Was that shorthand for ‘Deaf and Dumb’? Or were they specifying that the person was deaf only and not dumb? We don’t know. But we do know that they didn’t follow the instructions. What if a person was both insane and blind? What should be written then? “Blind/Insane”, “Blind and Insane”, “In-B”, and any number of other combinations appear, and some of them are easier to read than others.

Data Reading Errors

IPUMS is the major resource for using census data. The historical data was entered by foreign data-entry workers who didn’t always speak English, so the records aren’t perfect. Some of the records are corroborated with optical character recognition (OCR), but the historical script is sometimes hard to read. Finally, the fine folks at familysearch.org and Brigham Young University have used volunteers from the Church of Jesus Christ of Latter-day Saints (LDS) to proof data entries. Even so, we know that the IPUMS data isn’t perfect and that the disability data is far from perfect. Usually, reports don’t dwell on it; they simply say that the data is incomplete.

The disability data is incomplete for a lot of reasons related to the respondent, the enumerator, the instructions, and the digital data creation. What a mess.

Optimal Protein Consumption in the 21st Century: A Model

I’ve discussed complete proteins before. I’ve talked about the ubiquity of protein, animal protein prices, vegetable protein prices, and a little bit about protein hedonics. My co-blogger Jeremy also recently posted about egg prices over the past century. Charting the cost of eggs is great for gauging egg affordability. But a major attraction of eggs is that they are a ‘complete protein’. So how much of that can we afford?

Here I’ll outline a model of the optimal protein consumption bundle. What does that mean? It means consuming the quantities of protein sources that satisfy the recommended daily intake (RDI) of the essential amino acids, and doing so at the lowest possible expenditure. Clearly, this post mixes nutrition and economics. Since a comprehensive evaluation that includes all possible foods would be a heavy lift, I’ll just outline the method with a small application.

Consider a list of prices for 100 grams of Beef, Eggs, and Pork.* We can also consider a list that identifies the quantity that we purchase in terms of hundreds of grams. Therefore, the product of the two yields the total that we spend on our proteins.

Of course, not all proteins are identical. We need some characteristics by which to compare beef, eggs, and pork. Here, I’ll use the grams of essential amino acids in 100 grams of each protein source. Because each amino acid has its own RDI, I express each amino acid content as a proportion of its RDI (with each amino acid represented by its standard one-letter code).

Then, we can describe how much of each amino acid’s RDI a person consumes by multiplying the amino acid contents by the quantities of proteins consumed.

Our goal is to find the minimum expenditure, B, by varying the quantities consumed, Q, such that the minimum element of C equals one. If the minimum element of C is greater than one, then a person could consume less and spend less while still satisfying their essential amino acid RDIs. If the minimum element is less than one, then they aren’t getting the minimum RDI.
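Putting the pieces together (my notation for the objects just described: P is the price vector, Q the quantities in hundreds of grams, and A the matrix of amino acid contents expressed as proportions of the RDI), the problem is a small linear program:

```latex
\min_{Q \ge 0} \; B = \sum_{j} P_j Q_j
\quad \text{subject to} \quad
C_i = \sum_{j} A_{ij} Q_j \ge 1 \quad \text{for every essential amino acid } i .
```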

How do we find such a thing? Not algebraically, that’s for sure. I’ll use some linear programming (which is kind of like magic: there’s no tidy derivation to show here).
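As a concrete illustration, here is a minimal sketch using scipy.optimize.linprog. The prices and amino-acid proportions below are placeholder numbers for three foods and three amino acids, not the figures used in this post.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder prices per 100 g of beef, eggs, pork (illustrative only)
prices = np.array([0.80, 0.50, 0.60])

# Placeholder matrix: rows = essential amino acids, columns = foods.
# Each entry is the amino acid content of 100 g of the food expressed
# as a proportion of that amino acid's RDI (illustrative numbers).
A = np.array([
    [0.55, 0.30, 0.45],   # e.g. lysine
    [0.40, 0.35, 0.50],   # e.g. leucine
    [0.30, 0.25, 0.35],   # e.g. methionine + cysteine
])

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the
# "at least 100% of each RDI" constraint A @ Q >= 1 becomes -A @ Q <= -1.
res = linprog(c=prices, A_ub=-A, b_ub=-np.ones(A.shape[0]), bounds=(0, None))

print("Quantities (hundreds of grams):", res.x)
print("Minimum daily expenditure:", res.fun)
```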

The solution is to consume only 116.28 grams of pork and spend $1.093 per day. The optimal amino acid consumption is also shown below. Clearly, prices change. So, if eggs or beef became cheaper relative to pork, then we’d get different answers.

In fact, we have prices for these protein sources going back nearly every month to 1998. While pork is exceptionally nutritious, it hasn’t always been the most cost-effective option. Below are the prices for 1998-2025. See how the optimal consumption bundle has changed over time – after the jump.

Continue reading

A Forgotten Data Goldmine: Foreign Commerce and Navigation Reports

Economists rely on trade data. The historical Foreign Commerce and Navigation of the United States reports provide detailed monthly figures on imports, exports, and re-exports. The dataset spans decades, offering a crucial resource for researchers studying price movements, consumption patterns, and the effects of war on global trade.

The U.S. Department of Commerce compiled these reports to track the nation’s commercial activity. The data cover a vast range of commodities, including coffee, sugar, wheat, cotton, wool, and petroleum. Officials recorded trade flows at a granular level, enabling economists to analyze seasonal fluctuations, wartime distortions, and postwar recoveries. The reports’ inclusion of re-export figures allows for precise estimates of domestic consumption: researchers who ignore re-exports risk overstating demand by treating goods merely in transit as goods consumed.
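As a rough sketch of that adjustment (the table and column names here are hypothetical, standing in for a monthly commodity series transcribed from the reports), apparent domestic consumption is simply imports minus re-exports:

```python
import pandas as pd

# Hypothetical monthly commodity table transcribed from the reports
df = pd.DataFrame({
    "month": ["1915-01", "1915-02", "1915-03"],
    "imports": [120_000, 135_000, 128_000],      # e.g. pounds of coffee
    "re_exports": [15_000, 22_000, 18_000],
})

# Ignoring re-exports would overstate demand; goods merely passing
# through the US are not domestic consumption.
df["apparent_consumption"] = df["imports"] - df["re_exports"]
print(df)
```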

Continue reading

Trump Cutting & Spending: Day 45

It’s hard to keep up with all of the Trump administration’s activities. There is such a flurry of activity related to funding, regulations, and executive actions that no one can keep up with everything. Individuals and news outlets have scarce resources and attention. There’s also the usual challenge of separating fact from analysis. If only there were a way to summarize the administration’s activities in an objective and meaningful sense.

Luckily, numbers don’t lie – and the federal government publishes a lot of numbers. Specifically, it publishes the Daily Treasury Statement, which identifies each day’s outlays by category. We can look at the raw spending numbers to get a sense of where and whether Trump is changing spending within the federal government.

Lauren Bauer at The Hamilton Project noticed that the US Treasury has an API for those daily statements. She created a nice online tool at Brookings that is relatively user-friendly. Visitors can see each day’s spending or the cumulative spending throughout the year. Below is the cumulative federal spending for 2024 and 2025. As of March 5th, the US has spent a total of 5.2% more in 2025 than in the year prior (on track with the growth rate of GDP). Importantly, she makes all of the data available for download so that individuals can conduct their own analysis. I lean on her data here.
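For readers who want the raw feed rather than the Brookings tool, here is a minimal sketch of pulling Daily Treasury Statement records from the Treasury’s Fiscal Data API. The specific endpoint path, filter syntax, and field names are my assumptions based on the API’s documented URL pattern, so verify them against fiscaldata.treasury.gov before relying on them.

```python
import requests

# Base URL and DTS endpoint are assumptions -- check the Fiscal Data docs.
BASE = "https://api.fiscaldata.treasury.gov/services/api/fiscal_service"
ENDPOINT = "/v1/accounting/dts/deposits_withdrawals_operating_cash"

params = {
    "filter": "record_date:gte:2025-01-01",  # daily records since Jan 1, 2025
    "page[size]": 1000,
}
resp = requests.get(BASE + ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
records = resp.json()["data"]
print(f"Retrieved {len(records)} daily records")
```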

Where have the cuts been happening? The graph below includes the five spending areas that have been most deeply cut relative to the same day in 2024.* The red line denotes inauguration day. The USAID cuts made big news, and it seems like the agency knew something was coming around the time of the inauguration: it looks like they were trying to get spending out the door before the taps were shut off. The FCC and the Library of Congress were also affected by the funding freeze announced in late January.

President Trump claims to have made cutting waste a priority. With Elon Musk in tow, the administration has made waves by disrupting USAID, the NSF, and the federal payroll. We’re 45 days into the administration. We can use the data provided by the Treasury and made accessible by Bauer to evaluate how the Trump administration has been spending and cutting, according to the numbers.

One way to evaluate spending is to compare cumulative spending over the course of 2024 and 2025. That is, spending through the 45th day of the year should be more or less comparable in 2024 and 2025. It’s still early in the year, and since various payments can be quite irregular, there’s a lot of noise in the data so far. But we should be able to see big changes, and smaller changes will be easier to see as the year goes on.
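A sketch of that comparison, assuming a daily outlays table like the one Bauer makes downloadable (the file and column names here are hypothetical):

```python
import pandas as pd

# Hypothetical daily outlays table: one row per calendar day per year,
# with columns year, day_of_year, outlays.
df = pd.read_csv("daily_outlays.csv")

# Cumulative spending within each year, ordered by day of year
df["cumulative"] = (df.sort_values("day_of_year")
                      .groupby("year")["outlays"].cumsum())

# Compare cumulative spending on the same day of year across 2024 and 2025
wide = df.pivot(index="day_of_year", columns="year", values="cumulative")
wide["pct_change"] = 100 * (wide[2025] / wide[2024] - 1)  # assumes integer year labels
print(wide.tail())
```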

The five areas of greatest cumulative spending growth relative to 2024 are graphed below.* It does look like some funding was trying to get out the door prior to Trump taking office, but that’s just speculation on my part. FEMA spending was up, likely due to the fires in California. Much more US Treasury spending is happening, specifically for Claims, Judgments, & Relief. We might see that remain elevated as the new administration keeps ‘trying’ things and then gets stopped by injunctions, becomes the subject of lawsuits, and owes compensation.

While big percentage changes in outlays can have massive implications for individual programs, Musk and Trump will need to cut huge amounts in order to claim any kind of victory over profligate spending. (Just so we’re all on the same page: they will fail if they refuse to touch old-age entitlements.) Where have the biggest spending cuts happened, as measured in actual dollars? See below.*** The deepest and most consistent cuts are coming from reductions in federal employee insurance payments. Similarly, the USAID and FCC cuts amount to a $2 billion reduction from this time last year. Department of Education spending and the hospital insurance trust fund are down, but their expenditures are also more volatile. The one-time spikes in the data are due to pay dates in 2024 and 2025 being offset by a day or two.

Continue reading

What does the Department of Education even do?

If you follow libertarian media such as Reason Magazine or its ancillaries, then you are well acquainted with the steady drumbeat of “it goes without saying that most US programs should be ended”. They kind of just say this and then continue with their news. One of the favorites is to say that we should get rid of the Department of Education (ED). After all, 90% of K-12 education is paid for by states and localities. That left me wondering: what does the Department of Education even do?

Agreement is different from trust, and I trust the Brookings Institution. They have a nice explainer on what ED does. It’s a quick overview with plenty of appropriate citations. I learned that most of what ED does concerns K-12 education and is achieved through grants with strings attached. Funding primarily goes to serving “educationally disadvantaged” communities (those with high poverty rates). Funding also goes to programs for disabled children, minority education programs (like Howard University), and Indian tribes. ED also administers Pell Grants and funds and regulates college loans (which are privately administered).

ED’s appropriated budget is online for anyone to see and includes pretty good detail about costs. The total discretionary cost for FY 2024 was $79 billion. The “mandatory” spending, which does not need to be voted on by Congress every year, was $45 billion. For context, total federal FY 2024 expenditure was $6.75 trillion. So, eliminating the Department of Education *and* its responsibilities (an unpopular position) would reduce federal expenditures by 1.8%. For even more context, the budget deficit is $1.83 trillion, or 27.1% of total federal expenditures. Eliminating ED while consolidating its responsibilities into other departments would save $0.6 billion; that assumes eliminating program administration, the ED Office for Civil Rights, and the ED Office of Inspector General.
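For anyone who wants to check the arithmetic, a quick back-of-the-envelope using the figures quoted above:

```python
discretionary = 79e9      # FY 2024 ED discretionary budget
mandatory = 45e9          # FY 2024 ED mandatory spending
total_federal = 6.75e12   # FY 2024 total federal expenditures
deficit = 1.83e12         # FY 2024 budget deficit

print(f"ED share of outlays: {(discretionary + mandatory) / total_federal:.1%}")  # ~1.8%
print(f"Deficit share of outlays: {deficit / total_federal:.1%}")                 # ~27.1%
```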

Continue reading

Trump’s Economic Policy Uncertainty

I was on a panel of economists last night at an event titled “The Economic Consequences of President Trump”. We each gave a 5-minute summary from our area of expertise and then opened the floor for questions. This is a truncated summary of my talk. Since the panel included an investor, two industry economists, and another macroeconomist, I wanted to discuss something distinct from their topics. I’ve published a paper and refereed many articles concerning economic policy uncertainty (EPU) and asset volatility, so I wanted to look at the data concerning President Trump – especially in contrast to Presidents Obama and Biden.

EPU matters because uncertainty can cause firms and individuals to delay investment and hiring decisions. Greater uncertainty can also cause divergent views about forecasted firm profitability. The result is that asset prices tend to become more volatile when EPU rises. One difficulty is that uncertainty lives in our heads and concerns our beliefs, making it hard to measure. We try to get at it by measuring how often news media articles include terms related to uncertainty, policy, and the economy. Since news content tends to report what is interesting, relevant, or salient to customers, there’s good reason to think that the EPU index is a decent proxy.
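To make the idea concrete, here is a toy sketch of that kind of count. It is not the actual EPU construction, which scales article counts by each outlet’s volume and standardizes across outlets before averaging, but it shows the basic term-matching logic; the term lists are illustrative.

```python
# Toy illustration only: a real EPU index scales by article volume
# per outlet and standardizes before averaging across outlets.
ECONOMY = {"economy", "economic"}
UNCERTAINTY = {"uncertain", "uncertainty"}
POLICY = {"congress", "deficit", "federal reserve", "legislation",
          "regulation", "white house"}

def mentions(text: str, terms: set[str]) -> bool:
    """True if the article text contains any term from the category."""
    text = text.lower()
    return any(term in text for term in terms)

def toy_epu(articles: list[str]) -> float:
    """Share of articles containing at least one term from each category."""
    hits = sum(
        mentions(a, ECONOMY) and mentions(a, UNCERTAINTY) and mentions(a, POLICY)
        for a in articles
    )
    return hits / len(articles) if articles else 0.0

sample = [
    "Uncertainty over new tariff regulation weighs on the economy.",
    "Local team wins the championship.",
]
print(toy_epu(sample))  # 0.5
```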

Using the Obama years as a baseline, the figure below simply charts EPU. It was relatively low during Trump’s first term and higher during Biden’s term – even after accounting for the Covid spike. The sharp increase toward the end comes after Trump won the 2024 election. The EPU series conflicts with my perception of social media and the media generally. My impression was that the media were far more attentive to the uncertainty that Trump caused. But it may just be that media outlets had plenty to report on, rather than the coverage being particularly indicative of EPU. After all, if the president exercises his power, then there is a certain swift decisiveness to it.

But if we look at a couple of particular policy areas, Trump’s administration fared worse. Specifically, Trump caused a ruckus concerning trade policy and immigration. Remember when Biden continued the aggressive trade policy that Trump had adopted? That’s consistent with lower EPU. Similarly, Biden made the immigration process much easier and faster, while Trump’s deportation haranguing results in a somewhat stochastic process by which people are deported. Again, that spike at the end comes after Trump won the 2024 election.

Continue reading

Forecasting the Fed: Description Vs Prescription

After raising rates in 2022 to belatedly combat inflation, the FOMC was feeling successful in 2024. They were holding the line and remaining steadfast while many people were in a tizzy about the Fed pushing us into a recession. People had been predicting a recession since 2022, and the Fed kept the federal funds rate steady at 5.33% for an entire year. Repeatedly, in the first half of 2024, betting markets were upset that the Fed wasn’t budging. I had friends saying that the time to cut was in 2023, once Silicon Valley Bank failed. I remained sanguine that rates should not be cut.

I thought that rates should have been higher still, given that the labor market was strong. But I also didn’t think that was going to happen. My forecast was that the Fed would keep rates unchanged: at 5.33%, inflation would slowly fall and there was plenty of wiggle room for unemployment.

Then we had a few months of lower inflation; monthly inflation even came in slightly negative in June 2024. Some people started talking about overshooting and an impending recession. I documented my position in August 2024. Two weeks later, Jerome Powell gave a victory lap of a speech. He said that “The time has come for policy to adjust”. Instead of handicapping whether the FOMC would cut rates, the betting markets switched to whether the cut would be 0.25% or 0.5%. The Fed chose the latter, followed by two more cuts by the end of the year.

I was wrong about the Fed’s policy response function. But why? Was the FOMC worried about the downward employment revisions? Those were big news. Did they think that they had inflation whipped? I’m not sure. There was a lot of buzz about having stuck the soft landing. In late 2024, I leaned toward the theory that the Fed was concerned about employment – that is, they had thought we were doing better than we actually were.

Continue reading

RGDP Underestimates Welfare

Like many Principles of Macroeconomics courses, mine begins with an introduction to GDP. We motivate RGDP as a measure of economic activity and NGDP as an indicator of income or total expenditures. But how does more RGDP imply that we are better off, even materially? One entirely appropriate answer is that the quantities of output are greater. Given some population, greater output means more final goods and services per person. So, our real income increases.  But what else can we say?

First, after adjusting for price changes, we can say that GDP underestimates the value that people place on the goods and services transacted in markets. Given that 1) demand slopes down and 2) transactions are consensual, it stands to reason that everyone pays no more than their maximum willingness to pay. This implies that people’s willingness to pay for goods meets or exceeds their actual expenditures. Therefore, RGDP is a lower bound on the economic benefits that people enjoy. Without knowing the marginal value that people place on all the quantities below what they actually buy, we don’t know how much more value the economy actually provides.
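A quick way to see the lower-bound claim: with a downward-sloping inverse demand curve p(q), the total value of the Q units actually purchased exceeds the expenditure that GDP records for them,

```latex
\text{value} \;=\; \int_0^{Q} p(q)\, dq \;\ge\; p(Q)\, Q \;=\; \text{measured expenditure},
```

because every unit before the last was worth at least as much as the last one. The gap between the two is consumer surplus, which GDP never sees.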

Continue reading

How FRASER Enhances Economic Research and Analysis

Most of us know about FRED, the Federal Reserve Economic Data site hosted by the Federal Reserve Bank of St. Louis. It provides data and graphs at your fingertips. You can quickly grab a graph for a report or for an online argument. Of course, you can learn from it too. I’ve talked in the past about the Excel and Stata plugins.

But you may not know about FRASER, FRED’s companion digital library. From its about page: “FRASER is a digital library of U.S. economic, financial, and banking history—particularly the history of the Federal Reserve System”. It’s a treasure trove of documents. Just as with any library, you’re not meant to read it all. But you can read some of it.

I can’t tell you how many times I’ve read a news story and lamented the lack of citations – linked or unlinked. Some journalists seem to do a Google search or Reddit dive and then summarize their journey. That’s sometimes helpful, but it often provides only surface-level content and includes errors – much like AI. The better journalists at least talk to an expert. That is better, but authorities often repeat secondhand false claims too. Or, because no one has read the source material, they couch their language in unfalsifiable imprecision that merely implies a false claim.

A topical example involves Trump’s oft-repeated calls for blanket tariffs. That part is not up for dispute; Trump has been very clear about his desire for more and broader tariffs. Rather, the issue is that economic news often refers back to the Smoot-Hawley tariffs of 1930 as an example of tariffs run amok. While it is true that the 1930 tariffs applied to many items, they weren’t exactly a historical version of what Trump is currently proposing (though those details tend to change).

How do I know? Well, I looked. If you visit FRASER and search for “Smoot-Hawley”, the Tariff Act of 1930 is the first search result. It’s a congressional document, so it’s not an exciting read. But you can see with your own eyes the diversity of duties placed on various imported goods. Since we often use the example of imported steel, and since the foreign acquisition of US Steel was denied, let’s look at metals on page 20 of the 1930 act. But before we do, notice that we can link to particular pages of legislation and reports – nice! Reading the Smoot-Hawley Tariff Act’s original language, we can see the diverse duties on various metals. Here are a few:

Continue reading