People of the past were not irrational morons

That sentence is one that I repeat every time I teach economic history. It is repeated because a common misconception in history is that there are “different mentalities”: a pre-capitalist mentality versus a capitalist mentality, a western mentality versus a non-western one, and so on. The variations are endless but the common denominator is quite simple: there are discontinuities in economic rationality, and these discontinuities explain economic change.

That, as I explain to my students, amounts to labelling people of the past as “irrational morons” who would leave $100 bills on the sidewalk. There are no variations in rationality, merely variations in constraints and incentives. That is what I tell my students. And the thing is, that statement is actually testable! Indeed, arguing that something in people’s brains changed is an argument that can never be tested, because they are dead and cannot testify. In fact, even if they were alive, their statements would be meaningless because nothing speaks louder than actions (i.e. preferences are revealed by action). Making statements about the rationality of X or Y action is easily testable, as we can observe what people did (or do now). And it’s really easy to refute differences in “mentalities.”

Let me give you an example from my native Canada. In Canada, there is a large French minority (the majority of which lives in Quebec) which has long been argued to hold different economic mentalities than the neighboring English majority. Peddled (yes, that is a strong term but I think it applies) by both French and English historians (and economists), this view is used to explain the relative poverty of the French minority (which has historically been 60%-75% as rich as the English majority). As far back as the early 19th century, French-Canadians are argued to have clung to archaic farming techniques even though they observed better techniques from their English-speaking neighbors. Their “traditional conservative” outlook (in the words of an eminent Canadian historian) pushed them into economic stagnation (and even retrogression, by some accounts). This view continues today. I vividly remember a debate on French-Canadian TV with former Quebec premier (like a governor, for Americans) Bernard Landry telling me that there was a difference between my “anglo-saxon economic worldview” (i.e. neoclassical economics) and that which most French-Canadians held.

The virtue of this example is that French-Canadians were deemed to be of a “lesser” mentality than English-Canadians at the same moment in time. Thus, it is easy to test whether this is the case. In multiple works, notably in this paper at Historical Methods, I have used simple tools from economic theory to assess this “lesser mentalities” hypothesis. Start from a simple Cobb-Douglas production function:

Y = A × K^α × L^(1−α)

where A is the technology residual (also known as total factor productivity or TFP), Y is total output, K is the capital stock and L is the labor supply. The exponents α and 1−α are just the elasticities of capital and labor. Since the English and French in Canada can be separated, there are two production functions (one for each group) and they can be divided by each other. The neat part about the Cobb-Douglas function here is that you can rearrange the equation and solve in terms of A rather than Y. As A is total factor productivity, it tells us how effectively people combine inputs K and L to produce Y. You can then express A in the French sector (1) as a ratio of A in the English sector (2) as in the formulation below:

A1/A2 = (Y1/Y2) / [(K1/K2)^α × (L1/L2)^(1−α)]

Technically, if the French farmers were less efficient than the English farmers, the ratio on the left-hand side should be less than 1 (as A1 < A2). Using data from the 1831 census of Lower Canada (as Quebec was known then), I compared farms in French areas to farms in English areas. The results? Yes, the French farmers were poorer (income Y1 < Y2), but there were very small differences in the efficiency of input use (A) between French and English farmers, as can be seen in the table below. French areas were only 0.5% to 4.3% less efficient than English areas.
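For concreteness, the ratio test can be sketched in a few lines of Python. The sector aggregates below are invented for illustration; they are not the 1831 census figures.

```python
# Illustrative TFP comparison between two farm sectors under a
# Cobb-Douglas production function Y = A * K^alpha * L^(1 - alpha).
# All numbers are made up for demonstration purposes.

def tfp(Y, K, L, alpha=0.3):
    """Solve Y = A * K**alpha * L**(1 - alpha) for the residual A."""
    return Y / (K**alpha * L**(1 - alpha))

# Hypothetical sector aggregates: output, capital stock, labor supply
french = {"Y": 90.0, "K": 100.0, "L": 120.0}    # sector 1
english = {"Y": 100.0, "K": 105.0, "L": 125.0}  # sector 2

a1 = tfp(**french)
a2 = tfp(**english)

# The French sector is poorer (Y1 < Y2), yet the efficiency ratio
# A1/A2 can still sit close to 1.
print(f"A1/A2 = {a1 / a2:.3f}")
```

With these invented numbers the output gap is 10% but the efficiency gap is only about 6% — and that distinction between income and efficiency is exactly what the test turns on.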

But when you control for land quality, distance from urban markets, recency of settlement, complementary industries and other factors, there is no statistically significant effect of culture (proxied in the table below by the share of Catholics, as all French-Canadians in 1831 were Catholic and very few English-Canadians were). In other words, the small differences have nothing to do with culture or differences in mentalities.

Notice that this was a relatively simple logical test. The farming actions of French-Canadians were observed in the data. We know which inputs they chose to use (and in which quantities). The results of these actions are easily observable through the output data in the census. Irrationality on their part is thus easy to test, as a simple Cobb-Douglas model suggests that irrationality would manifest as an inferior ability to use and combine inputs. They used inputs as well as English-Canadians did, and so the claim of inferior mentalities was wrong.

One could reply that I am just picking an easy case to dismantle the “mentalities” claim. But I am actually late to that party in adding the French-Canadians. Similar claims have been made about Russians, the French, Italians, the Chinese, the Vietnamese, Koreans, Mexicans, Indians, Poles, New Englanders (yes, you read that right), Danes, the Irish, Kenyans, Algerians, Egyptians, etc. Hundreds of economic historians and economists have shown that these cases do not hold.

If you wish to explain economic change (or economic disparities), you have to look elsewhere than to “changes in mentalities” (or differences in mentalities). If you don’t, you are essentially claiming that people of the past were irrational morons who simply lacked your expert knowledge.

50% Endowment Returns Driven by Private Equity Investments: How Rich Universities Get Richer (But You Can, Too)

A recent headline in the Dartmouth student newspaper reads, “Dartmouth’s endowment posts 46.5% year-over-year returns, prompting additional spending on students”. That seems like really great investing performance. But the sub-headline dismisses it as less-than-stellar by comparison: “The endowment outpaced the stock market, but fell short when compared to other elite universities that have announced their endowment returns.” After all, fellow Ivy League university Brown notched a 50% return for fiscal 2021, which in turn was surpassed by Duke University at 55.9% and Washington University in St. Louis at 65%. The Harvard endowment fund managers are a bit on the defensive for gaining “only” 34% on the year.

The stock market has done well in the past year, but nothing like these results. What is the secret sauce here? Well, it starts with having money already, lots of it. That enables the endowment managers to participate in more esoteric investments. This is the land of “alternative investments”:

Conventional categories include stocks, bonds, and cash. Alternative investments include private equity or venture capital, hedge funds, managed futures, art and antiques, commodities, and derivatives contracts. Real estate is also often classified as an alternative investment.

It takes really big bucks to buy into some of these ventures, and it also takes a large professional endowment fund staff to choose and monitor these sophisticated vehicles. Inside Higher Ed’s Emma Whitford notes:

Endowments valued at more than $1 billion, of which there are relatively few, are more likely to invest in alternative asset classes like venture capital and private equity, recent data from the National Association of College and University Business Officers showed.

“Where you’re going to see higher performance are the institutions with endowments over a billion,” Good said. “If you look at the distribution of where they’re invested, they have a lot more in alternative investments — in private equity, venture capital. And those asset classes did really well. Those classes outperformed the equity market.” 

…Most endowments worth $500 million or less invested a large share of their money in domestic stocks and bonds in fiscal 2020, NACUBO data showed. This is partially because alternative investments have a high start-up threshold that most institutions can’t meet, according to Good.

“You have to have a pretty big endowment to be able to invest in that type of asset class,” he said. “If you have a $50 million endowment, you just don’t have enough cash to be able to buy into those investments, which is why you won’t see big gains from alternatives in those smaller institutions.”

Virginia L. Ma and Kevin A. Simauchi report in The Crimson on Harvard’s Endowment, “Harvard Management Company returned 33.6 percent on its investments for the fiscal year ending in June 2021, skyrocketing the value of the University’s endowment to $53.2 billion, the largest sum in its history and an increase of $11.3 billion from the previous fiscal year.” This 33.6% gain, though, represents underperformance compared to Harvard’s peers; this is rationalized in terms of overall risk-positioning:

However, Harvard’s returns have continually lagged behind its peers in the Ivy League, a trend that appeared to continue this past fiscal year. Of the schools that have announced their endowment returns, Dartmouth College reported 47 percent returns while the University of Pennsylvania posted 41 percent returns.

Narvekar acknowledged the “opportunity cost of taking lower risk” in Harvard’s investments compared to the University’s peer schools.

“Over the last decade, HMC has taken lower risk than many of our peers and establishing the right risk tolerance level for the University in the years ahead is an essential stewardship responsibility,” Narvekar wrote.

In 2018, HMC formed a risk tolerance group in order to assess how the endowment could take on more risk while balancing Harvard’s financial positioning and need for budgetary stability. Under Narvekar’s leadership, HMC has dramatically reduced its assets in natural resources, real estate markets, and public equity, while increasing its exposure to hedge funds and private equity.

There it is again, the magical “hedge funds and private equity”.

Harvard’s fund manager went on to warn that the astronomical returns of the past year were something of an anomaly:

At the close of his message, Narvekar cautioned that despite the year’s success, Harvard’s endowment should not be expected to gain such strong returns annually.  “There will inevitably be negative years, hence the importance of understanding risk tolerance.”

The following chart illustrates, at least in Harvard’s case, how extraordinary the past year has been:

Source:  Justin Y. Ye

The fiscal year of these funds typically runs July to June, so it’s worth recalling that in mid-2020 we were still largely cowering in our homes, waiting for vaccines to arrive. The equity markets were still recovering in mid-2020, whereas a year later the tsunami of federal and Fed largesse had lifted all equity boats to the sky. So, it is not realistic to expect another year of 50% returns.

Final issue: can the little guy pick up at least a few crumbs under the table of this private equity feast? In most cases, you have to be an “accredited investor” (income over $200,000, or net worth outside of home at least $1 million) to start to play in that game. From Pitchbook:

Private equity (PE) and venture capital (VC) are two major subsets of a much larger, complex part of the financial landscape known as the private markets…The private markets control over a quarter of the US economy by amount of capital and 98% by number of companies….PE and VC firms both raise pools of capital from accredited investors known as limited partners (LPs), and they both do so in order to invest in privately owned companies. Their goals are the same: to increase the value of the businesses they invest in and then sell them—or their equity stake (aka ownership) in them—for a profit.

Venture capital (VC) is perhaps the more attractive, heroic side of this investing complex:

Venture capital investment firms fund and mentor startups. These young, often tech-focused companies are growing rapidly and VC firms will provide funding in exchange for a minority stake of equity—less than 50% ownership—in those businesses.

Some examples of VC-backed enterprises include Elon Musk’s SpaceX, and Google-associated self-driving venture Waymo.

Venture capital takes a big chance on whether some nascent technology will succeed (in the face of competition) many years down the road, which has the potential to make the world a better place for us all. Private equity, on the other hand, tends to be somewhat more prosaic, predictable, and sometimes brutal. Here is a nice way of putting it:

Private equity investment firms often take a majority stake—50% ownership or more—in mature companies operating in traditional industries. PE firms usually invest in established businesses that are deteriorating because of inefficiencies. The assumption is that once those inefficiencies are corrected, the businesses could become profitable.

In practice, this often entails taking control of a company via a leveraged buyout which saddles the new firm with heavy debt, firing lots of employees, improving some strategy or operations of the firm, and sometimes breaking it up and selling off the pieces. This was the fate of several medium-sized oil companies that got in the cross-hairs of corporate raider T. Boone Pickens. “Chainsaw Al” Dunlap also became famous for this sort of “restructuring” or “creative destruction”.

Private equity activities can be very lucrative. But again, is there any way for you, the little guy, to get a piece of this action? Well, kind of. There are publicly traded companies that do this leveraged buyout stuff, and you can buy shares in them and share in the fruits of their pruning of corporate deadwood. Some names are Kohlberg Kravis Roberts (KKR), The Carlyle Group (CG), and The Blackstone Group (BX). The share prices of all these firms have more than doubled in the past year (a 100+% return). If you had had the guts to plow all your savings into any one of these private equity firms a year ago, you would have had the glory of beating out all those university endowment funds with their piddling 50% returns.

Dry turkey and mediocre side dishes are optimal

The Take Economy demands not just that you distinguish yourself with opinions that deviate from the median person, but that the manner in which your opinion deviates is immediately distinguishable from everyone else who is similarly deviating. This leaves us with a tendency to focus on what we don’t like – enjoying something is further evidence of the monoculture, while hate comes in a million shades of beige.

I bring this up because hating Thanksgiving foods, particularly oven-baked turkey, has been in vogue for years, and I’m sure stuffing is next. Everything is too dry, too bland, yada yada yada. It’s a boring take most often made by boring people. Not that such things usually matter to me, but in this case it does, because Thanksgiving as a meal is not an epicurean holiday; it’s an attempt to solve a coordination game across families, friends, and geographies – one with so many players, preferences, and cost constraints that broadly amenable large-scale get-togethers are increasingly difficult. Between navigating travel costs, sleeping arrangements, and the inevitable negative political externalities that some jackass in your family is going to pollute the familial air with, the last thing you have the resources to cope with is culinary coordination. So what do we do? We come up with a pseudo-national, heavily regional menu that we coordinate on, a $1.99-per-pound Schelling point that’s a steal at thrice the price.

The turkey’s too dry? Drown it in gravy. The stuffing is too bland? Your aunt has hot sauce in her purse. Your cousin is explaining the vagaries of 18th-century 2nd Amendment judicial rulings? There’s a bottle of brown liquor quietly being shared on the porch this very minute that you can partake in for the price of nothing more than a pleading glance and keeping your politics to yourself.

The food isn’t the point, but if you’re still feeling the pain of a sub-optimal meal, you can order Chinese with us later, and I’ll happily explain to you why you’re not just ordering the wrong dishes, you’re ordering off the wrong menu. Because I got food takes, just not when the meal isn’t about the food.

What kind of return do we want on our investment?

I had never spoken with someone so enthusiastic about what they could do with trash. A young, slender man from a city in India I was unfamiliar with explained to me how his machine transformed plastic refuse into an economically and environmentally superior substitute for concrete. This is the sort of techno-optimist fodder for “your daily reason to feel good” clickbait articles that provokes, but rarely sustains, my optimism upon closer inspection. I listened, impressed by the person I was talking to, but unsure what to make of his elevator pitch.

Then he reached into his backpack and handed me a very real block of “Plascrete”. We’ve spent so much of the last 20 years inundated with ideas that were abstract software propositions at best and vaporware at worst that it was jarring to hold the physical manifestation of someone’s “big idea”. I spent the rest of the conversation, and a good chunk of my evening at the “unconference” being hosted by Emergent Ventures, thinking about the economic ramifications of Plascrete. What it could mean for developing countries to have a substitute for concrete that is 24 times stronger yet somehow 4 times cheaper – what it would mean for infrastructure and vertical housing construction. What the streets might look like picked clean of plastic bags and refuse. What happens to the lower tail of the wage distribution after the marginal product of trash-picking labor quintuples. How forecasted carbon emissions from developing countries might shift if the expected carbon footprint of construction were massively reduced.

But this post isn’t about Plascrete or the projected impact of any particular innovation. What I’m interested in this moment is the market for private venture capital.

The model of modern venture capital is built around the biggest of wins, those home run investments whose returns compensate for the more than 90% that largely fail. For the strategy to function, of course, every investment has to carry the possibility of prodigious returns, in the realm of 10 to 20 times the investment, which limits the industries and technological categories under consideration. Tight profit margins are out. Factories and physical capital are out. Anything that might carry an inherent limitation to rapid scaling is out. What’s in are network consumer goods and zero-marginal-cost (e.g. software) products. So what does that leave out? Explicitly physical goods, such as inputs into shelter or food – things that require upfront investment in equipment, where those costs increase with the scale of your output aspirations.

But that’s actually only the beginning of our problems. What about goods with enormous positive externalities, i.e. social benefits, that exist without the possibility of traditional property rights and monetization? Even if a private venture fund is culturally interested in such things, they are constrained by their model – any reduction in potential home run returns from their investment puts the short run solvency of their fund at risk, something unlikely to be tolerated by their investors. These problems are only compounded when considering positive externality generating technologies that are burdened with traditional physical capital needs and historically normal limits to scaling. Even if your product offers 100X social returns, that’s not going to keep the lights on for a series of high risk investments with private returns that top out at 5X.
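The arithmetic behind that constraint is easy to sketch. The probabilities and multiples below are stylized assumptions, not data on any actual fund:

```python
# Why the venture model needs home runs: with ~90% of equal-sized
# checks returning nothing, the winners must carry the portfolio.
# Probabilities and multiples below are stylized assumptions.

def portfolio_multiple(outcomes):
    """Expected multiple on invested capital for equal-sized checks.

    outcomes: list of (share_of_checks, return_multiple) pairs.
    """
    return sum(p * m for p, m in outcomes)

# 90% total losses, 10% home runs at 15x: the fund returns 1.5x overall
vc_style = [(0.90, 0.0), (0.10, 15.0)]
print(portfolio_multiple(vc_style))

# A product whose private returns "top out at 5x" cannot carry the
# same failure rate: the fund loses half its capital
capped = [(0.90, 0.0), (0.10, 5.0)]
print(portfolio_multiple(capped))
```

A socially valuable but profit-capped technology fails the portfolio test not because it loses money, but because it cannot subsidize everything else that does.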

Innovations whose adoption offers enormous positive externalities are, in theory, exactly why public support for general science exists (whether or not such things should fall within the domain of public funding agencies is a whole different question that I have no immediate interest in addressing). Let me simply say here that these hypothetical products require expertise in delivering a product to market and the capacity to appropriately take on risk. These are not the comparative advantages of large federal science funding agencies. Which leaves us with the dilemma motivating this entire rambling thought exercise. There seems to be an important gap in the market- and government-based institutions for funding innovation.

What I want to consider is the possibility of elevating the status and profile of private venture capital that goes towards profitable, self-sustaining technologies whose returns might only be considered prodigious if we include the broader positive externalities they have on human lives. The kinds of effects whose value may in fact scale exponentially as they diffuse through communities and networks, but will never be internalized into profits via property rights. I want to consider the reconstruction of the risk profile of an entire portfolio to optimize the ability of a fund to support these sorts of innovations in perpetuity, earning returns sufficient to self-sustain with a minimum of (if any) long-run philanthropic subsidy.

Private capital with such a focus would find a niche that modern venture funds are unmotivated to serve and public scientific agencies are ill-equipped to support. Private funds focused on innovations with externally scaling returns would, in my half-baked hypothesis, take on a two-tier model. The first tier would be composed of a large number of very small grants (which is essentially the entire model of Emergent Ventures – I’m essentially plagiarizing the model I saw evidence of through two days of conversations with their grant recipients). These grants would predominantly be interested in people. These human lottery tickets would pursue their initial ideas through the proof-of-concept stages. Some would succeed, most would not, but all would benefit from their first connection to the broad international network of technology talent and talent-seekers. The small number who do succeed in producing compelling evidence of technological advancement would then enter the second tier, where large investments would be sought for a prototype and eventual distribution. What’s important to remember is that this remains a private good that must still enter and pass the market test. What distinguishes it is not an inability to economically self-sustain, but rather its inability to create profits so grandiose that they can subsidize a portfolio of failed moonshots. Its prospective profitability need only justify its own independent risk of failure. This is not to say the bar is actually lower than in traditional venture capital. While the profitability bar is lower, the product must clear a second, in many ways more difficult bar – it must produce a direct and attributable positive externality, be it through health, safety, or environmental channels. Its consumption must improve the lives of not just its consumers, but those entirely uninvolved in its production or purchase.

I’m not an expert in venture capital or speculative philanthropy, but after the last week I can’t shake the idea that Michael Kremer was even more right than we realized: more people => more ideas => more economic growth. There are billions of lottery tickets lying on the ground all over the developing world. We need to invent newer and better ways of picking more of them up.

Deficits and presidents

Wranglings over spending plans, deficits and public debt increases have been quite intense of late. What is quite surprising, at least at first glance, is that there are so few individuals who are arguing for reducing public spending. Right now, the most “hawkish” policy stance is a slower rate of spending increases. Why the pro-spending tilt of debates?

One could argue that it’s the pandemic. A crisis is, after all, a natural moment to increase spending. However, that argument is a bit weak now. This position was easily defensible six to twelve months ago, but not today when the economy is starting its recovery. If anything, as recovery is underway, the case for slashing spending levels is stronger than the case for raising them.

So, once again, why the pro-spending tilt? Let me point to the work of James Buchanan and Richard Wagner in Democracy in Deficit. In this work, whose lessons are underappreciated today, Buchanan and Wagner argue that there is an asymmetry in the political returns to fiscal policy. When a deficit occurs, the costs are delayed and thus harder to observe, while the benefits are immediate. When a surplus takes place, the benefits are delayed and the costs (i.e. less spending, higher taxes) are immediate.

Second, there’s the far more serious threat of fiscal illusion—that the public’s perception of the true costs and benefits of government expenditures is misconstrued. As long as the costs of taxation are underestimated and the benefits of public expenditures are overestimated, there is fiscal illusion. The nature of politics thus creates a strange incentive system where governments reap more electoral rewards from deficits than from surpluses. If you buy Buchanan and Wagner’s explanation, the pro-spending tilt is easy to explain.

However, the empirical evidence for this is somewhat limited. For example, Alberto Alesina showed in the 1990s that he could not find empirical patterns confirming Buchanan and Wagner’s theorizing. But I have recent work (co-authored with Marcus Shera of George Mason University — a good graduate student of mine) which proposes a simple mechanism for observing whether the first condition for a pro-deficit/pro-spending tilt is present.

American presidents are incredibly mindful of their historical reputation. As I argued elsewhere, presidents consider historians as a constituency they want to cater to so as to be remembered as great. If there is a reward from engaging in deficit spending given by historians, this would suggest that presidents have at least some incentives to be fiscally imprudent. Phrased differently, such returns by historians would suggest some divergence between what is fiscally prudent and what is politically beneficial.

Using the surveys of American presidents produced by C-Span and the American Political Science Association, Marcus and I found that there are strong rewards to engaging in deficit spending. Without any controls for the personal features (e.g. war hero, scandal, intellect) of a president and the features of a presidency (e.g. war, victory in war, economic growth), an extra percentage point of deficit to GDP is associated with a strong positive reward to a president (see table below). Once controls are introduced, the result remains: there are strong rewards from engaging in deficit spending.
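To make the exercise concrete, here is a toy version of that kind of regression on simulated data. The deficits, scores, and the built-in coefficient are all invented for illustration; they are not the C-SPAN/APSA survey results from the paper.

```python
import random

# Toy regression of a simulated presidential "greatness" score on
# deficit/GDP. All data here are simulated -- this illustrates the
# method only, not the actual estimates.

random.seed(42)

n = 40
deficit = [random.uniform(-2.0, 10.0) for _ in range(n)]  # deficit, % of GDP
# Build in a positive reward to deficits (slope 1.5) plus noise
score = [50.0 + 1.5 * d + random.gauss(0.0, 5.0) for d in deficit]

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

print(f"estimated reward per point of deficit/GDP: {ols_slope(deficit, score):.2f}")
```

The estimated slope recovers a positive reward to deficits by construction here; the paper’s claim is that the real survey data produce the same sign, with and without controls.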

Thus, at any time, a president who is mindful of his place in the history books would be tempted to engage in deficit spending. While Marcus and I are somewhat cautious in the paper, I do think that we are presenting a “lower-bound” case for a pro-deficit bias. Indeed, one could think that the hindsight of history would lead to greater punishment for fiscal recklessness. After all, historians are not like voters — their time-horizons for evaluating a presidency are clearly not as short. If that is the case, one should expect historians to be less likely to reward deficits. And yet, they seem to do so — which is why I argue this is a lower-bound case.

In other words, Joe Biden might simply believe that the extra spending will secure him a place in history books. If other presidents are any indication, he is making a good bet.

Buying in Bulk: Money Saver or Self Sabotage?

Recently, I’ve been buying a lot more non-durable goods when they are on sale. Whereas previously I might have purchased the normal amount plus one or two units, now I’m buying like 3x or 4x the normal amount.

What initially led me here was the nagging thought that a 50%-off sale is a superb investment – especially if I was going to purchase a bunch eventually anyway. I like to think that I’m relatively dispassionate about investing and finances. But I realized that I wasn’t thinking that way about my groceries. The implication is that I’ve been living sub-optimally. And I can’t have that!

If someone told me that I could pay 50% more on my mortgage this month and get a full credit on my mortgage payment next month, then I would jump at the opportunity. That would be a 100% monthly return. Why not with groceries? Obviously, some groceries go bad. Produce will wilt, dairy will spoil, and the fridge space is limited. But what about non-perishables? This includes pantry items, toiletries, cleaning supplies, etc. 
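That mortgage intuition carries over directly. As a rough sketch (the discount sizes and shelf times below are my own illustrative assumptions), the implicit return on a discounted non-perishable can be annualized like any other investment:

```python
# Annualize the implicit return from buying a discounted unit now
# instead of at full price later. Discounts and shelf times below
# are illustrative assumptions.

def annualized_return(discount, weeks_until_use):
    """Implied annualized return of buying early at a discount.

    discount: fractional price cut (0.5 = 50% off)
    weeks_until_use: how long the unit sits on the shelf before use
    """
    # Paying (1 - discount) now for a unit worth 1 at use time is a
    # gross return of 1 / (1 - discount) over the holding period.
    gross = 1.0 / (1.0 - discount)
    periods_per_year = 52.0 / weeks_until_use
    return gross ** periods_per_year - 1.0

# A 50%-off item used 4 weeks later: an absurdly high annualized rate
print(f"{annualized_return(0.50, 4):.0%}")

# Even a 10% discount on something used within a month is enormous
# by the standards of ordinary investments
print(f"{annualized_return(0.10, 4):.0%}")
```

Of course, nobody actually compounds grocery discounts weekly all year; the point is just that the per-period return swamps the opportunity cost of the cash.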

Typically, there are two challenges for investing in inventory: 1) Will the discount now be adequate to compensate for the opportunity cost of resources over time? 2) Is there an opportunity cost to the storage space?

For the moment, I will ignore challenge 2). On the relevant margins, my shelf will be full or empty. I’ve got excess capacity in my house that I can’t easily adjust or lend out. That leaves only challenge 1).

First, the Too Simple Version.


Presentation Today – “Firearms and Violence Under Jim Crow”

I’ll be giving a presentation today at 12pm ET over Zoom for the Ostrom Workshop Colloquium Series at Indiana University. It is my understanding that it is open to the public. The format is different from your typical economics seminar: I will give an introduction and brief summary of the paper for 20 minutes, followed by questions.

https://ostromworkshop.indiana.edu/pdf/announcements/2021fall-colloq/11-08-makowsky.pdf

You can find the full working paper here.

The history of work and the myth of a leisurely past

Since Marshall Sahlins in the 1970s (and, more recently, thanks to James Suzman’s Work), a weird idea has worked its way into popular imagination: people of the past did not work much. More precisely, the idea is that for most of human history our ancestors worked far less, and thought very differently about work, than we do now. That is based on a weird starting point and a misunderstanding of how “work” works.

The starting point is the pre-neolithic era when the vast majority of time was spent hunting and gathering. In that setting, the effort to acquire calories was modest largely because food was abundant relative to a tiny human population. Some early estimates suggest that, because of that relative endowment, people worked maybe less than 20 hours per week hunting and gathering. Some say even less. That is probably correct and also wrong.

Notice that I italicized hunting and gathering above, suggesting that the time commitment of these two tasks was quite small. However, this is not the sum of all work people did then. One has to understand that nomadic groups were nomadic in part because the largest share of their calories was also quite mobile. This meant moving around significantly to track food under a key constraint — that calories from gathering be available.

This meant that people moved from “oasis” to “oasis” or from “patch” to “patch”. Between each patch/oasis, there was a lot of time spent “in transit” (let’s call this d for dead time). That time is technically not work for hunting or gathering — but it is work. Not counting it is a mistake.

To see how it matters, consider the graph below, which depicts a forager who moves between oases/patches where food is available. As they stay in an oasis, the yield of food y is marginally decreasing, so at some point they have an incentive to move on. When they move on, they incur the cost d, the dead time spent moving. Suppose also that a single oasis/patch per year (which encompasses multiple time periods) is insufficient to survive the year, so multiple patches must be exploited. Supposing that all oases are equally distant, of equal quality, and numerous, how can we picture the decision to move to another? If you want to maximize your food intake over a long period of time, you have to visit multiple oases in a year. This is where we introduce the dashed blue line, which is the total yield from all oases/patches divided by time. Notice that it starts at the origin, so it captures the cost of d.

Figure 1: How people in the past worked

These two lines tell us that you stay at a single oasis until its marginal return falls below the average yield over all oases/patches. Why does this matter? Well, imagine that each patch is less productive. You then have to move more to reach a given target and incur d more frequently. That effectively means that you have to exploit a greater territory to meet a certain target of food (e.g. survival).
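This leave-when-marginal-return-falls-to-average rule can be sketched numerically. Everything below (the shape of the gain function, the parameter values) is an illustrative assumption, not an estimate from the historical record:

```python
import math

def gain(t, a, b=0.5):
    """Cumulative food yield after t hours in one patch (diminishing returns)."""
    return a * (1.0 - math.exp(-b * t))

def optimal_stay(d, a, b=0.5):
    """Pick the residence time t that maximizes the long-run intake
    rate gain(t) / (t + d), where d is travel ('dead') time per move."""
    ts = [0.01 * i for i in range(1, 4001)]  # grid from 0.01 to 40 hours
    return max(ts, key=lambda t: gain(t, a, b) / (t + d))

def total_work(target, a, d, b=0.5):
    """Total time (foraging + travel) needed to reach a calorie target."""
    stay = optimal_stay(d, a, b)
    patches = math.ceil(target / gain(stay, a, b))
    return patches * (stay + d)  # incur d after each patch, for simplicity

# Longer travel time between patches -> stay longer in each patch
assert optimal_stay(d=5.0, a=100) > optimal_stay(d=1.0, a=100)
# Poorer patches -> more patches exploited, more travel, more total work
assert total_work(2000, a=50, d=2.0) > total_work(2000, a=100, d=2.0)
```

The second assertion is the point of the paragraph above: when each patch yields less, the same survival target forces more moves, so the dead time d is incurred more often and total work rises.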

The estimates of time spent hunting and gathering essentially cover the time within patches rather than the time spent across all patches. Thus, they are a massive underestimate. The yield of a single oasis/patch was so low in the pre-neolithic that moving was something clans did often. In the late Ice Age, family groups apparently moved every 3-6 days. Modern nomads in certain regions move some 400 km per year. At 5 km/h, this is 80 hours of work per year. However, 5 km/h is too fast, as there were children to carry, which slows things down. At 3 km/h, we are talking 133 hours per year (or roughly 2.6 extra hours per week). This is just dead time, but it is work. As such, more exhaustive worktime estimates suggest values of 35 to 43 hours per week. Most western countries are below this level. Moreover, it is worth considering that work started at young ages and there was no retirement. With shorter lives and earlier entry into work, a smaller fraction of waking lifetime was spent in leisurely pursuits. Ergo, it is overwhelmingly likely that no society today exhibits more “lifetime” work than prehistoric humans did.
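The back-of-the-envelope travel arithmetic above is easy to verify; the 400 km figure and the two walking speeds are the ones quoted in the paragraph:

```python
km_per_year = 400        # annual distance covered by some modern nomads
weeks_per_year = 52

for speed_kmh in (5.0, 3.0):   # 3 km/h assumes children and loads slow the group
    hours_per_year = km_per_year / speed_kmh
    print(speed_kmh, round(hours_per_year), round(hours_per_year / weeks_per_year, 1))
# 5 km/h -> 80 hours/year (~1.5 h/week); 3 km/h -> ~133 hours/year (~2.6 h/week)
```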

Finally, it is worth pointing out the very obvious. The introduction of agriculture, by removing the need to move around and by reducing the variability of calorie supply (i.e. fewer chances of catastrophe), essentially increased the benefit of working (i.e. it made leisure relatively costlier). It is unsurprising, then, that the introduction of agriculture led to some increases in labor supply. That being said, it is clearly false that we work more today than our prehistoric ancestors did. There is no way around it.

Your Current Grade: It’s Complicated

By now, most US universities are 4-5 weeks away from the end of the fall semester. Whether it’s now, or just prior to the withdrawal deadline, students tend to show increased interest in their course grades. They say that they want to know how they are doing. But what they often prefer to know is the grade they will earn at the conclusion of the course. The answer to the latter question can rest on all kinds of assumptions. But even “What is my grade right now?” is a deceptively subtle question.

It seems direct. We could easily be curt and claim that it shouldn’t be complicated to tell a student what their grade is, and that it’s a failure of the teacher or of the education system writ large if it is complicated. While I entirely agree that a teacher should have an answer, it’s important to emphasize that “What is my grade right now?” is an ill-defined question. The problem is that a student can mean two different things when they ask about their grade.

Q1) What proportion of possible points have I earned so far?

Q2) What proportion of points will I have earned if my performance doesn’t change?

It’s important for teachers to ensure that their students understand which question is being answered.

First, I’ll illustrate when there is no distinction between the answers. Let’s say that there are two types of assignments: exams, which are worth 75% of the course grade, and quizzes, which are worth 25% of the course grade. So long as the two assignment types are identically distributed throughout the semester, Q1 & Q2 have the same answer. Below is a bar chart that illustrates a distribution of points over 4 weeks. The proportion of points for each assignment type is identically distributed over time (not necessarily uniformly distributed).

What is the student’s grade at the end of week 2 if they have scored 90% on the exams and 70% on the quizzes? By the end of week 2, there have been 30 possible exam points and 10 possible quiz points. The student has earned 34 of the 40 possible points so far. The math for Q1 is:

(0.9)(30)+(0.7)(10) = 27+7=34

34/40 = 85%

And, if they continue to perform identically in each assignment category, then they can expect to earn an 85% in the class. The math for Q2 is:

(0.9)(75)+(0.7)(25) = 67.5+17.5 = 85%

Both Q1 and Q2 have the same answer. And, honestly, principles or introductory courses have formats that often lend themselves well to having assignments distributed similarly over time. My own Principles of Macroeconomics class matches up pretty well with the above math. Each week, there is a reading, a homework, and a quiz. By the time students complete the first exam, they’ve completed about one third of all points in each assignment category.

Higher level classes or classes with projects tend *not* to have identical point distributions across time among assignments. Maybe there are presentations, projects, or reports due throughout the semester or at the culmination of the course. For example, my Game Theory class has two midterm exams, but no final exam. It has homework in the first half of the semester, and term paper assignments in the latter half.

The bar chart below displays a point-split among the same quizzes and exams, but they now are differently distributed throughout the semester. Quiz points have been frontloaded.

What is the student’s grade at the end of week 2 if they have scored 90% on the exams and 70% on the quizzes? By the end of week 2, there have been 30 possible exam points and 15 possible quiz points. The student has earned 37.5 of the 45 possible points so far. The math for Q1 is:

(0.9)(30)+(0.7)(15) = 27+10.5=37.5

37.5/45 = 83.33%

And, if they continue to perform identically in each assignment category, then they can expect to earn an 85% in the class. The math for Q2 is:

(0.9)(75)+(0.7)(25) = 67.5+17.5 = 85%

All I did was frontload 5 percentage points of quizzes, and now the answers to Q1 and Q2 differ by 1.66 percentage points. That may seem like small potatoes. But consider that a) many students and universities use and care about the +/- system of grades, and b) a difference of 1.66 grade points was caused by a mere 5-point change in the distribution. Bigger changes result in bigger differences. Frontloading the remaining 5 quiz points from the end of the semester would result in a Q1 score of 82%, yielding a 3-point difference between the two calculation methods.
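The two calculations generalize to any number of assignment categories. Here is a short sketch (hypothetical Python, using the exam/quiz weights and week-2 point totals from the examples above):

```python
def q1_grade(points_so_far, scores):
    """Q1: share of the points possible *so far* that were earned.
    points_so_far[i] = points offered to date in category i,
    scores[i] = the student's average in that category (0..1)."""
    earned = sum(p * s for p, s in zip(points_so_far, scores))
    return earned / sum(points_so_far)

def q2_grade(full_weights, scores):
    """Q2: final grade if performance in each category never changes."""
    return sum(w * s for w, s in zip(full_weights, scores))

scores = [0.90, 0.70]        # exam average, quiz average
full_weights = [75, 25]      # course-long category weights

# Identically distributed points (week 2: 30 exam, 10 quiz): Q1 equals Q2
print(round(100 * q1_grade([30, 10], scores), 2))   # 85.0
# Frontloaded quizzes (week 2: 30 exam, 15 quiz): Q1 falls below Q2
print(round(100 * q1_grade([30, 15], scores), 2))   # 83.33
print(q2_grade(full_weights, scores))               # 85.0
```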

The differences between Q1 & Q2 illustrated above are even more pronounced once you begin to include extra credit. One point of extra credit has a smaller effect on the answer to Q1 as more and more possible course points have been earned.

If students only care about their ultimate grade in the course, then they will always prefer to receive the answer to Q2 (though Q2 requires more assumptions if an assignment type hasn’t even occurred yet). But students may also want to know how effective their recent study habits have been, so that they can re-evaluate them given the assignment point distributions so far. That is what Q1 speaks to: “Have I given this course the appropriate amount of attention, given the types of assignments that we’ve had?”

For example, my Principles of Macroeconomics course has the first exam at week 5. Students should have an average score that is greater than 90% by the end of week 4 because the reading assignments are simple, the homeworks are lenient, and the quizzes permit practice attempts. Students who have an 80% by the end of week 4 are going to have a rougher time once they encounter an exam.

Reasonable people can disagree about which calculation is more useful. And more mathematically inclined students can calculate their own grades anyway. Therefore, after every exam, I send a mail-merge email to each of my students in order to update them about their grade. I give them the answer to both Q1 & Q2, and I illustrate the impact of several alternative scenarios for their future performance. If there is information that a student wants about their grade, then it’s in that email.

In conclusion, teachers should take great care to make student grades and progress reports clear. Students should take great care to understand what they are asking and what the answer means. Grades can be very important for students who are close to the margin for scholarships, academic probation, or failure. While students may care too much about their grades, teachers should be sensitive to the fact that the care is real nonetheless. Teachers owe their students a firm and clear indicator of performance.

*There is another case in which Q1 & Q2 have the same answer. It’s when the student earns exactly the same grade in each assignment category, regardless of whether the category points are distributed identically across time.

Give someone you love the gift of two hours

The good people at EWED have asked me to recommend a gift for the upcoming holiday season. I know there are no fewer than three economists who publicly recommend the gift of cash every year. This is ostensibly done in earnest, but really it’s for the LOLs. If we take a slightly more behavioral tack (but only slightly), the optimal gift to give is the thing that people are systematically biased against purchasing for themselves even though it offers them a net benefit. Great. So what are people systematically biased against?

I’d like to suggest that people are biased against purchasing things they are a little too good at producing themselves, a sort of “absolute advantage bias”. If you want to give someone a great gift, buy them something they typically produce themselves even though outsourcing it would cost-effectively save them two hours. If you can make manifest two hours of free time in an adult human life, you are nothing short of a hero.

Buy them two hours of a cleaning service. Two hours of lawn care. Two hours of babysitting. Two hours of laundry pick up, folding, and drop-off. Two hours of cooking (i.e. a DoorDash gift card). Two hours of car cleaning. Two hours of document proofreading. Two hours of anything that if you recommended it to them they’d shrug their shoulders and sigh “I can’t pay for that when I can just do it myself”.

And it doesn’t matter what they do with the two hours, either: they’ll maximize them with ruthless efficiency. Have you ever taken a two-hour nap on a Sunday afternoon? I defy you to think of anything you can buy an adult for $50 that is better than a two-hour nap. I’m getting dreamy-eyed just thinking about it.

Buy the people you love some time for themselves this holiday season. It’s better than cash, it shows you are invested in their well-being, and I’ve never met anyone who couldn’t use it.