A few weeks ago, my friend James Dean (see his website here; he will soon be a job market candidate, and James is good) and I received news that the Journal of Institutional Economics had accepted our paper tying economic freedom to income mobility. I think it's worth spending a few lines explaining that paper.
In the last two decades, there has been a flurry of papers testing the relationship between economic freedom (i.e. property rights, regulation, free trade, government size, monetary stability) and income inequality. The results are mixed. Some papers find that economic freedom reduces inequality. Some find that it reduces it up to a point (the relationship is not linear but quadratic). Some find that there are reverse causality problems (places that are unequal are less economically free but that economic freedom does not cause inequality). Making heads or tails of this is further complicated by the fact that some studies look at cross-country evidence whereas others use sub-national (e.g. US states, Canadian provinces, Indian states, Mexican states) evidence.
But probably the greatest source of confusion in attempts to relate economic freedom and inequality is the reason why inequality is picked as the variable of interest in the first place. Inequality is often (but not always) used as a proxy for social mobility. If inequality rises, it is argued, the rich are enjoying greater gains than the poor. Sometimes, researchers will try to track the income growth of the different income deciles to get at this differently. The idea, in all cases, is to see whether economic freedom helps the poor more than the rich. The problem is that inequality measures suffer from well-known composition biases (some people enter the dataset and some people leave). If the biases are non-constant (they drift), you can make incorrect inferences.
Consider the following example: a population of 10 people with incomes ranging from $100 to $1,000 (going up in increments of $100). Now, imagine that each of these 10 people enjoys a 10% increase in income, but that a person with an income of $20 migrates into (i.e. enters) that society (having earned $10 in his previous group). The result is that this population of now 11 people will be more unequal. However, there is no change in inequality among the original 10 people. The entry of the 11th person causes a composition bias and gives us the impression of rising inequality (which is then made synonymous with falling income mobility, i.e. the rich getting more of the gains). Composition biases are the biggest problem.
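A few lines of Python (my own sketch, not from the paper) make the arithmetic concrete: the Gini coefficient is unchanged when every incumbent gains 10%, yet it jumps once the low-income entrant is counted.

```python
def gini(incomes):
    # Gini coefficient via the mean absolute difference between all pairs.
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(a - b) for a in incomes for b in incomes)
    return total_diff / (2 * n * n * mean)

original = [100 * i for i in range(1, 11)]  # incomes $100, $200, ..., $1,000
grown = [1.1 * y for y in original]         # every incumbent gains 10%
with_entrant = grown + [20]                 # a $20 earner enters the society

print(gini(original))      # about 0.30
print(gini(grown))         # about 0.30: uniform growth leaves inequality unchanged
print(gini(with_entrant))  # about 0.36: measured inequality rises anyway
```

Every one of the original ten people is strictly better off, yet the cross-sectional inequality measure rises, which is exactly the inference trap composition bias sets.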
Yet, they are easy to circumvent and that is what James Dean and I did. We used data from the Longitudinal Administrative Database (LAD) in Canada which produces measures of income mobility for a panel of people. This means that the same people are tracked over time (a five-year period). This totally eliminates the composition bias and we can assess how people within that panel evolve over time. This includes the evolution of income and relative income status (which decile of overall Canadian society they were in).
Using the evolution of income and relative income status by province and income decile, we tested whether high levels of economic freedom allowed the poor to gain more than the rich. The dataset essentially matched the level of economic freedom in each five-year window to the LAD panels for income mobility. The period covered is 1982-87 to 2013-18.
Our results for the bottom 10% of the population are in the table below. What we find is that economic freedom in each province heavily affects income mobility.
More importantly, the results we find for the bottom decile are greater than the results “on average” (for the whole panel) or for the top deciles. In other words, economic freedom matters more for the poor than the rich. I hope you will find this summary enticing enough to consult the paper or the public policy summary we did for the Montreal Economic Institute (here).
Samford student Savanah Needham identified an interesting recent WSJ article about the use of AI in hiring. Savanah writes:
In The WSJ, we learn that AI is being used for hiring employees rather than a traditional hiring manager, thus job applicants fear that they must impress a robot instead of relying on human interaction to get their dream job. The writer argues that job applicants deserve to know ahead of time how the algorithm will judge them and ought to receive feedback if they are rejected. Her proposal highlights the uncertainty that job candidates face in the newly AI-augmented hiring world.
We desperately need such a system. AI’s widespread use in hiring far outpaces our collective ability to keep it in check—to understand, verify and oversee it. Is a résumé screener identifying promising candidates, or is it picking up irrelevant, or even discriminatory, patterns from historical data? Is a job seeker participating in a fair competition if he or she is unable to pass an online personality test, despite having other qualifications needed for the job?
Julia Stoyanovich, WSJ
Robots can scan social media postings, run linguistic analysis on candidates’ writing samples, and score video-based interviews using algorithms that analyze speech content, tone of voice, emotional states, nonverbal behaviors, and temperamental clues (HBR 2019). In just a few quick seconds, AI uses all the data it has on you to jump to conclusions. AI uses tools that claim to measure tone of voice, expressions, and other aspects of a candidate’s personality to help “measure how culturally ‘normal’ a person is.”
You spend a large amount of time proving to employers that you are not like the others, that you are different from and better than other candidates…but now you must convince a robot that you are “normal.”
Researchers predict that face-reading AI can soon discern candidates’ sexual and political orientation as well as “internal states” like mood or emotion with a high degree of accuracy. This can be worrisome if the face reader claims that one is “too emotional” or assigns someone to a certain political party.
Whether one might socially offend us or whether one commits a crime, we face a fundamental tension between punishment and forgiveness. Punishment is important because it acts as a deterrent to the initial offense or to subsequent offenses. But punishment is also costly. Severing social or commercial ties reduces the number of possible mutually beneficial transactions. We lose economies of scale and lose gains from trade when we exclude someone from the market. Forgiveness is important because it permits those who previously had conflict to acknowledge the sunk cost of the offense and proceed with future opportunities for trade. However, an excess of forgiveness risks failure to deter destructive behaviors.
In the US, we enjoy a state that can prosecute alleged offenders and enforce punishments regardless of the economic status of the offended. While not perfect, the state incurs great cost by acting as the advocate of those who could not enforce great retributive punishment by their own means. A victim may choose to press charges against an offender, or the state can press charges despite a forgiving victim.
In fact, our system of prosecution is somewhat asymmetrical. The state can press charges against a suspect, regardless of the victim’s wishes. While a victim can’t compel an unwilling state to press charges, say if the evidence is scant, an individual can engage in litigation against the accused.
Most of the possible combinations of victim and state strategies result in some kind of prosecution of the alleged offender. Except for litigation, our punishments in the US tend not to be remunerative – the victim isn’t compensated for the evils of the offender. ‘Justice’ is often construed as a type of compensation, however.
There are lots of big questions about welfare programs and how to design them. I am not going to answer any of those questions here, but I am going to ask a few, specifically these two:
Should receipt of welfare be means-tested for need (e.g. TANF) or universal (i.e. a minimum income for everyone)?
Should receipt of welfare be conditional on employment?
The good arguments for means-testing usually boil down to maximizing impact. If we have a fixed amount of resources we can redistribute, then we can maximize the impact of those resources by directing them towards the people with the greatest need (rather than spreading them thinner across everyone). The good arguments against means-testing revolve around changing incentives at the margin. Even when designed with gradual phasing out as a person’s income rises, there remains the unavoidable reality that means-tested welfare reduces the value of every marginal dollar earned within the phase-out window.
The good arguments for requiring employment to receive a form of welfare are, again, incentives to work, this time at the extensive margin (i.e. how many people in the population choose to work at all). The good arguments against requiring employment are the obstacles that poverty places between people and finding work. It becomes a classic Catch-22 – you’re poor because you can’t find work, but you can’t find work because you are poor. Welfare unconditional of employment can help people get over the hump and into their next job.
None of these observations are new, and these very much remain hard questions. Yes, we should be concerned with incentives to work at the margin, but the fact remains that resources are finite, and many people will, at some point in their lives, need a lot of help. The more we can give them the better. This pushes me towards means-testing. But then I remember that those marginal incentives to work at the intensive margin (how much to work) depend on the phasing out of benefits with increasing visible income. For people living in poverty, there exist a number of viable earning options that are not visible to the institutions testing their means. A dollar earned in legal wages might reduce TANF benefits by $0.15, but a dollar earned in the black/gray market leaves those benefits untouched. Yes, illegal earnings come with risks, including risks to future access to benefits, but the possibility remains that means-testing benefits could have the perverse effect of increasing the relative value of any and all “off-the-books” income, be it cash labor in the gray market or explicit criminal earnings.
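As a toy illustration (a stylized schedule of my own invention, not actual TANF rules): with a benefit that phases out at 15 cents per visible dollar, a marginal on-the-books dollar is worth $0.85 while a marginal off-the-books dollar keeps its full value.

```python
def benefit(visible_income, base=500.0, phase_out=0.15):
    # Hypothetical program: $500 base benefit, reduced 15 cents per visible dollar earned.
    return max(0.0, base - phase_out * visible_income)

def total_income(visible, hidden):
    # Total resources = visible earnings + hidden earnings + benefit (which sees only visible income).
    return visible + hidden + benefit(visible)

baseline = total_income(1000, 0)
on_books = total_income(1001, 0) - baseline   # one more visible dollar nets about $0.85
off_books = total_income(1000, 1) - baseline  # one more hidden dollar nets the full $1.00
```

Inside the phase-out window, the schedule itself creates the wedge: hidden income is mechanically worth more at the margin than visible income.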
Considering whether a source of welfare should be conditional on employment raises the question of wage subsidies versus cash welfare, but it’s really just the same question we were pondering previously, with greater emphasis on the incentive to work or not. These questions at the extensive margin, much like those at the intensive margin, become far more interesting when placed in the context of not just whether to work or not, but which market to work in. In a world of prohibitions scattered across a variety of extremely high-demand (and high-profit) markets, where licensing, educational norms, and discrimination all work to create a collage of cash opportunities outside of the well-lit protections of legal labor markets, there is no shortage of work that goes unseen. Welfare transfers conditional on legal labor can serve as an incentive to pull people out of these markets and begin building a record of accomplishment that serves them going forward. This speaks in favor of conditioning welfare on employment, so long as everyone has access to employment.
This thought experiment leaves me mostly where I started (no surprise, an hour’s reflection rarely changes my priors), namely that wage subsidies have a place in our welfare system, as does means-testing, but as welfare benefits rise, they should become more universal in their access across the income distribution, a lesson I expect we will learn in retrospect as we come out of “The Great Resignation”. A “universal basic income” need not be fully universal, but there are good reasons for it to reach well past the median income. Or median voter, for that matter.
That sentence is one that I repeat every time I teach economic history. It is repeated because a common misconception in history is that there are “different mentalities”: a pre-capitalist mentality versus a capitalist mentality; a western mentality versus a non-western one etc. The variations are endless but the common denominator is quite simple: there are discontinuities in economic rationality and these discontinuities explain economic change.
That, as I explain to my students, amounts to labelling people of the past as “irrational morons” who would leave $100 bills on the sidewalk. There are no variations in rationality, merely variations in constraints and incentives. That is what I tell my students. And the thing is, that statement is actually testable! Indeed, arguing that something in people’s brains changed is an argument that can never be tested because they are dead and cannot testify. In fact, even if they were alive, their statements would be meaningless because nothing speaks louder than actions (i.e. preferences are revealed by action). Making statements about the rationality of X or Y action is easily testable, as we can observe what people did (or do now). And it’s really easy to refute differences in “mentalities.”
Let me give you an example from my native Canada. In Canada, there is a large French minority (the majority of which lives in Quebec) which has long been argued to hold different economic mentalities than the neighboring English majority. Peddled (yes, that is a strong term but I think it applies) by both French and English historians (and economists), this view is used to explain the relative poverty of the French minority (which has historically been 60%-75% as rich as the English majority). As far back as the early 19th century, French-Canadians are argued to have clung to archaic farming techniques even though they observed better techniques from their English-speaking neighbors. Their “traditional conservative” outlook (in the words of an eminent Canadian historian) pushed them into economic stagnation (and even retrogression by some accounts). This view continues today. I vividly remember a debate on French-Canadian TV with former Quebec premier (like a governor for Americans) Bernard Landry telling me that there was a difference between my “anglo-saxon economic worldview” (i.e. neoclassical economics) and that which most French-Canadians held.
The virtue of this example is that French-Canadians are deemed to be of a “lesser” mentality than English-Canadians at the same moment in time. Thus, it is easy to test whether this is the case. In multiple works, notably in this paper at Historical Methods, I have used simple tools from economic theory to assess this lesser-mentalities hypothesis. Start from a simple Cobb-Douglas production function:
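In its standard form (my notation, with the exponent on capital written as α), the function is:

```latex
Y = A \, K^{\alpha} L^{1-\alpha}
```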
Where A is the technology residual (also known as total factor productivity, or TFP), Y is total output, K is the capital stock, and L is the labor supply. The exponents are just the elasticities of capital and labor. If the English and French in Canada are separated, there are two production functions (one for each) and they can be divided by each other. But the neat part about the Cobb-Douglas function here is that you can rearrange the equation and solve in terms of A rather than Y. As A is total factor productivity, it tells us how effectively people combine inputs K and L to produce Y. And then you can express A in the French sector (1) as a ratio of A in the English sector (2) as in the formulation below
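Solving each group’s production function for A and dividing (again in my notation), the ratio is:

```latex
\frac{A_1}{A_2} = \frac{Y_1 / \left( K_1^{\alpha} L_1^{1-\alpha} \right)}{Y_2 / \left( K_2^{\alpha} L_2^{1-\alpha} \right)}
```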
Technically, if the French farmers were less efficient than the English farmers, the ratio on the left-hand side should be less than 1 (as A1 < A2). Using data from the 1831 census of Lower Canada (as Quebec was known then), I compared farms in French areas to farms in English areas. The results? Yes, the French farmers were poorer (income Y1 < Y2), but there were very small differences in efficiency of input use (A) between French and English farmers, as can be seen in the table below. French areas were only 0.5% to 4.3% less efficient than English areas.
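The logic of the test is easy to sketch in code (illustrative numbers of my own, not the 1831 census data, and an assumed capital elasticity of 0.4): a farm can earn 20% less income yet combine its inputs exactly as efficiently.

```python
def tfp(y, k, l, alpha=0.4):
    # Invert Y = A * K^alpha * L^(1-alpha) to recover the TFP residual A.
    return y / (k ** alpha * l ** (1 - alpha))

# Hypothetical farms: the "French" farm has 20% less output, but also 20% less of each input.
a_french = tfp(y=80.0, k=40.0, l=8.0)
a_english = tfp(y=100.0, k=50.0, l=10.0)

ratio = a_french / a_english  # equals 1.0: poorer, but equally efficient
```

Lower income with proportionally lower inputs leaves the residual A untouched, which is why the income gap alone says nothing about “mentalities.”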
But when you control for land quality, distance from urban markets, recency of settlement, complementary industries and other controls, there is no statistically significant effect of culture (proxied in the table below by the share of Catholics, as all French-Canadians in 1831 were Catholic and very few English-Canadians were). In other words, the small differences have nothing to do with culture or differences in mentalities.
Notice that this was a relatively simple logical test. The farming actions of French-Canadians were observed in the data. We know which inputs they chose to use (and in which quantities). The results of these actions are easily observable through the output data in the census. Irrationality on their part is thus easy to test, as a simple Cobb-Douglas model suggests that irrationality would be manifested in an inferior ability to use and combine inputs. They used inputs as well as English-Canadians did, and so the claim of inferior mentalities was wrong.
One could reply that I am just picking an easy case to dismantle the “mentalities” claim. But I am actually late to that party by adding the French-Canadians. Similar claims have been made for Russian, French, Italian, Chinese, Vietnamese, Korean, Mexican, Indian, Polish, New Englanders (yes, you read right), Danish, Irish, Kenyans, Algerians, Egyptians etc. Hundreds of economic historians and economists have shown that these cases do not hold.
If you wish to explain economic change (or economic disparities), you have to look elsewhere than “changes in mentalities” (or differences in mentalities). If you don’t, you are essentially claiming that people of the past were irrational morons who simply lacked your expert knowledge.
A recent headline in the Dartmouth student newspaper reads, “Dartmouth’s endowment posts 46.5% year-over-year returns, prompting additional spending on students”. That seems like really great investing performance. But the sub-headline dismisses it as less-than-stellar, by comparison: “The endowment outpaced the stock market, but fell short when compared to other elite universities that have announced their endowment returns.” After all, fellow Ivy League university Brown notched a 50% return for fiscal 2021, which in turn was surpassed by Duke University at 55.9% and Washington University in St. Louis at 65%. The Harvard endowment fund managers are a bit on the defensive for gaining “only” 34% on the year.
The stock market has done well in the past year, but nothing like these results. What is the secret sauce here? Well, it starts with having money already, lots of it. That enables the endowment managers to participate in more esoteric investments. This is the land of “alternative investments”:
Conventional categories include stocks, bonds, and cash. Alternative investments include private equity or venture capital, hedge funds, managed futures, art and antiques, commodities, and derivatives contracts. Real estate is also often classified as an alternative investment.
It takes really big bucks to buy into some of these ventures, and it also takes a large professional endowment fund staff to choose and monitor these sophisticated vehicles. Inside Higher Ed’s Emma Whitford notes:
Endowments valued at more than $1 billion, of which there are relatively few, are more likely to invest in alternative asset classes like venture capital and private equity, recent data from the National Association of College and University Business Officers showed.
“Where you’re going to see higher performance are the institutions with endowments over a billion,” Good said. “If you look at the distribution of where they’re invested, they have a lot more in alternative investments — in private equity, venture capital. And those asset classes did really well. Those classes outperformed the equity market.”
…Most endowments worth $500 million or less invested a large share of their money in domestic stocks and bonds in fiscal 2020, NACUBO data showed. This is partially because alternative investments have a high start-up threshold that most institutions can’t meet, according to Good.
“You have to have a pretty big endowment to be able to invest in that type of asset class,” he said. “If you have a $50 million endowment, you just don’t have enough cash to be able to buy into those investments, which is why you won’t see big gains from alternatives in those smaller institutions.”
Virginia L. Ma and Kevin A. Simauchi report in The Crimson on Harvard’s Endowment, “Harvard Management Company returned 33.6 percent on its investments for the fiscal year ending in June 2021, skyrocketing the value of the University’s endowment to $53.2 billion, the largest sum in its history and an increase of $11.3 billion from the previous fiscal year.” This 33.6% gain, though, represents underperformance compared to Harvard’s peers; this is rationalized in terms of overall risk-positioning:
However, Harvard’s returns have continually lagged behind its peers in the Ivy League, a trend that appeared to continue this past fiscal year. Of the schools that have announced their endowment returns, Dartmouth College reported 47 percent returns while the University of Pennsylvania posted 41 percent returns.
Narvekar acknowledged the “opportunity cost of taking lower risk” in Harvard’s investments compared to the University’s peer schools.
“Over the last decade, HMC has taken lower risk than many of our peers and establishing the right risk tolerance level for the University in the years ahead is an essential stewardship responsibility,” Narvekar wrote.
In 2018, HMC formed a risk tolerance group in order to assess how the endowment could take on more risk while balancing Harvard’s financial positioning and need for budgetary stability. Under Narvekar’s leadership, HMC has dramatically reduced its assets in natural resources, real estate markets, and public equity, while increasing its exposure to hedge funds and private equity.
There it is again, the magical “hedge funds and private equity”.
Harvard’s fund manager went on to warn that the astronomical returns of the past year were something of an anomaly:
At the close of his message, Narvekar cautioned that despite the year’s success, Harvard’s endowment should not be expected to gain such strong returns annually. “There will inevitably be negative years, hence the importance of understanding risk tolerance.”
The following chart illustrates, at least in Harvard’s case, how extraordinary the past year has been:
The fiscal year of these funds typically runs September to September, so it’s worth recalling that back in September of 2020 we were still largely cowering in our homes, waiting for vaccines to arrive. The equity markets were still down in September of 2020, whereas a year later the tsunami of federal and Fed largesse had lifted all equity boats to the sky. So, it is not realistic to expect another year of 50% returns.
Final issue: can the little guy pick up at least a few crumbs under the table of this private equity feast? In most cases, you have to be an “accredited investor” (income over $200,000, or net worth outside of home at least $1 million) to start to play in that game. From Pitchbook:
Private equity (PE) and venture capital (VC) are two major subsets of a much larger, complex part of the financial landscape known as the private markets…The private markets control over a quarter of the US economy by amount of capital and 98% by number of companies….PE and VC firms both raise pools of capital from accredited investors known as limited partners (LPs), and they both do so in order to invest in privately owned companies. Their goals are the same: to increase the value of the businesses they invest in and then sell them—or their equity stake (aka ownership) in them—for a profit.
Venture capital (VC) is perhaps the more attractive, heroic side of this investing complex:
Venture capital investment firms fund and mentor startups. These young, often tech-focused companies are growing rapidly and VC firms will provide funding in exchange for a minority stake of equity—less than 50% ownership—in those businesses.
Some examples of VC-backed enterprises include Elon Musk’s SpaceX and the Google-associated self-driving venture Waymo.
Venture capital takes a big chance on whether some nascent technology will succeed (in the face of competition) many years down the road, which has the potential to make the world a better place for us all. Private equity, on the other hand, tends to be somewhat more prosaic, predictable, and sometimes brutal. Here is putting it nicely:
Private equity investment firms often take a majority stake—50% ownership or more—in mature companies operating in traditional industries. PE firms usually invest in established businesses that are deteriorating because of inefficiencies. The assumption is that once those inefficiencies are corrected, the businesses could become profitable.
In practice, this often entails taking control of a company via a leveraged buyout which saddles the new firm with heavy debt, firing lots of employees, improving some strategy or operations of the firm, and sometimes breaking it up and selling off the pieces. This was the fate of several medium-sized oil companies that got in the cross-hairs of corporate raider T. Boone Pickens. “Chainsaw Al” Dunlap also became famous for this sort of “restructuring” or “creative destruction”.
Private equity activities can be very lucrative. But again, is there any way for you, the little guy, to get a piece of this action? Well, kind of. There are publicly traded companies that do this leveraged buyout stuff, and you can buy shares in these companies and share in the fruits of their pruning of corporate deadwood. Some names are Kohlberg Kravis Roberts (KKR), The Carlyle Group (CG), and The Blackstone Group (BX). The share prices of all these firms have more than doubled in the past year (a 100%+ return). If you had had the guts to plow all your savings into any one of these private equity firms a year ago, you would have had the glory of beating out all those university endowment funds with their piddling 50% returns.
The Take Economy demands not just that you distinguish yourself with opinions that deviate from the median person, but that the manner in which your opinion deviates is immediately distinguishable from everyone else who is similarly deviating. This leaves us with a tendency to focus on what we don’t like – enjoying something is further evidence of the monoculture, while hate comes in a million shades of beige.
I bring this up because hating Thanksgiving foods, particularly turkey, oven-baked turkey, has been in vogue for years, and I’m sure stuffing is next. Everything is too dry, too bland, yada yada yada. It’s a boring take most often made by boring people. Not that such things usually matter to me, but in this case it does, because Thanksgiving as a meal is not an epicurean holiday; it’s an attempt to solve a coordination game across families, friends, and geographies. When you’re solving a coordination problem with so many players, preferences and cost constraints make broadly amenable large-scale get-togethers increasingly difficult. Between navigating travel costs, sleeping arrangements, and the inevitable negative political externalities that some jackass in your family is going to pollute the familial air with, the last thing you have the resources to cope with is culinary coordination. So what do we do? We come up with a pseudo-national, heavily regional menu that we coordinate on, a $1.99-per-pound Schelling point that’s a steal at thrice the price.
The turkey’s too dry? Drown it in gravy. The stuffing is too bland? Your aunt has hot sauce in her purse. Your cousin is explaining the vagaries of 18th-century 2nd Amendment judicial rulings? There’s a bottle of brown liquor quietly being shared on the porch this very minute that you can partake in for the price of nothing more than a pleading glance and keeping your politics to yourself.
The food isn’t the point, but if you’re still feeling the pain of a sub-optimal meal, you can order Chinese with us later, and I’ll happily explain to you why you’re not just ordering the wrong dishes, you’re ordering off the wrong menu. Because I got food takes, just not when the meal isn’t about the food.
I had never spoken with someone so enthusiastic about what they could do with trash. A young, slender man from a city in India I was unfamiliar with explained to me how his machine transformed plastic refuse into an economically and environmentally superior substitute for concrete. This sort of techno-optimism, fodder for “your daily reason to feel good” clickbait articles, provokes my optimism but rarely maintains it upon closer inspection. I listened, impressed by the person I was talking to, but unsure what to make of his elevator pitch.
Then he reached into his backpack and handed me a very real block of “Plascrete”. We’ve spent so much of the last 20 years inundated with ideas that were abstract software application propositions at best, and vaporware at worst, that it was jarring to hold the physical manifestation of someone’s “big idea”. I spent the rest of the conversation, and a good chunk of my evening at the “unconference” being hosted by Emergent Ventures, thinking about the economic ramifications of Plascrete. What it could mean for developing countries to have a substitute for concrete that is 24 times stronger yet somehow 4 times cheaper – what it would mean for infrastructure and vertical housing construction. What the streets might look like picked clean of plastic bags and refuse. What happens to the lower tail of the wage distribution after the marginal product of trash-picking labor quintuples. How forecasted carbon emissions from developing countries might shift if the expected carbon footprint of construction were massively reduced.
But this post isn’t about Plascrete or the projected impact of any particular innovation. What I’m interested in this moment is the market for private venture capital.
The model of modern venture capital is built around the biggest of wins, those home run investments whose returns compensate for the more than 90% that largely fail. For the strategy to function, of course, every investment has to carry the possibility of prodigious returns, in the realm of 10 to 20 times the investment, which limits the industries and technological categories under consideration. Tight profit margins are out. Factories and physical capital are out. Anything that might carry an inherent limitation to rapid scaling is out. What’s in are networked consumer goods and zero-marginal-cost (e.g. software) products. So what does that leave out? Explicitly physical goods, such as inputs into shelter or food, things that require upfront investment in equipment where those costs increase with the scale of your output aspirations.
But that’s actually only the beginning of our problems. What about goods with enormous positive externalities, i.e. social benefits, that exist without the possibility of traditional property rights and monetization? Even if a private venture fund is culturally interested in such things, they are constrained by their model – any reduction in potential home run returns from their investment puts the short run solvency of their fund at risk, something unlikely to be tolerated by their investors. These problems are only compounded when considering positive externality generating technologies that are burdened with traditional physical capital needs and historically normal limits to scaling. Even if your product offers 100X social returns, that’s not going to keep the lights on for a series of high risk investments with private returns that top out at 5X.
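The portfolio math behind that claim is simple enough to spell out (made-up but representative numbers): a fund of ten equal bets where nine return nothing survives on a single 20x home run, but dies if private returns are capped at 5x, no matter how large the uncaptured social returns.

```python
def fund_multiple(returns):
    # Equal-weighted portfolio: the average return multiple across all investments.
    return sum(returns) / len(returns)

homerun = fund_multiple([0.0] * 9 + [20.0])  # 2.0x: the fund doubles its capital
capped = fund_multiple([0.0] * 9 + [5.0])    # 0.5x: the fund loses half its capital
```

With the same hit rate, capping the best outcome at 5x flips the fund from solvent to insolvent, which is why social returns that can’t be internalized do nothing to keep the lights on.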
Innovations whose adoption offers enormous positive externalities are, in theory, exactly why public support for general science exists (whether or not such things should fall within the domain of public funding agencies is a whole different question that I have no immediate interest in addressing). Let me simply say here that these hypothetical products require expertise in delivering a product to market and the capacity to appropriately take on risk. These are not the comparative advantages of large federal science funding agencies. Which leaves us with the dilemma motivating this entire rambling thought exercise. There seems to be an important gap in the market- and government-based institutions for funding innovation.
What I want to consider is the possibility of elevating the status and profile of private venture capital that goes towards profitable, self-sustaining technologies whose returns might only be considered prodigious if we include the broader positive externalities they have on human lives. The kinds of effects whose value may in fact scale exponentially as they diffuse through communities and networks, but will never be internalized into profits via property rights. I want to consider the reconstruction of the risk profile of an entire portfolio to optimize the ability of a fund to support these sorts of innovations in perpetuity. Earning returns sufficient to self-sustain with a minimum of (if any) long-run philanthropic subsidy.
Private capital with such a focus would find a niche that modern venture funds are unmotivated to serve and public scientific agencies are ill-equipped to support. Private funds focused on innovations with externally scaling returns would, in my half-baked hypothesis, take on a two-tier model. The first tier would be composed of small investments scattered across a large number of very small grants (which is essentially the entire model of Emergent Ventures – I’m essentially plagiarizing the model I saw evidence of through two days of conversations with their grant recipients). These grants would predominantly be interested in people. These human lottery tickets would pursue their initial ideas through the proof-of-concept stages. Some will succeed, most will not, but all will benefit from their first connection to the broad international network of technology talent and talent-seekers. The small number who do succeed in producing compelling evidence of technological advancement would then enter the second tier, where large investments would be sought for a prototype and eventual distribution.

What’s important to remember is that this remains a private good that must still enter and pass the market test. What distinguishes it is not an inability to economically self-sustain, but rather its inability to create profits so grandiose that it can subsidize a portfolio of failed moonshots. Its prospective profitability need only justify its own independent risk of failure. This is not to say the bar is actually lower than traditional venture capital’s. While the profitability bar is lower, it must exceed a second, in many ways more difficult bar – it must produce a direct and attributable positive externality, be it through health, safety, or environmental channels. Its consumption must improve the lives of not just its consumers, but those entirely uninvolved in its production or purchase.
I’m not an expert in venture capital or speculative philanthropy, but after the last week I can’t shake the idea that Michael Kremer was even more right than we realized: more people => more ideas => more economic growth. There are billions of lottery tickets lying on the ground all over the developing world. We need to invent newer and better ways of picking more of them up.
Wranglings over spending plans, deficits and public debt increases have been quite intense of late. What is quite surprising, at least at first glance, is that there are so few individuals arguing for reducing public spending. Right now, the most “hawkish” policy stance is a slower rate of spending increases. Why the pro-spending tilt in these debates?
One could argue that it’s the pandemic. A crisis is, after all, a natural moment to increase spending. However, that argument is a bit weak now. This position was easily defensible six to twelve months ago, but not today when the economy is starting its recovery. If anything, as recovery is underway, the case for slashing spending levels is stronger than the case for raising them.
So, once again, why the pro-spending tilt? Let me point to the work of James Buchanan and Richard Wagner in Democracy in Deficit. In this work, whose lessons are underappreciated today, Buchanan and Wagner argue that there is an asymmetry in the political returns to fiscal policy. When a deficit occurs, the costs are delayed and thus harder to observe, while the benefits are immediate. When a surplus takes place, the benefits are delayed and the costs (i.e. less spending, higher taxes) are immediate.

Compounding this asymmetry is fiscal illusion – the public’s misperception of the true costs and benefits of government expenditures. As long as the costs of taxation are underestimated and the benefits of public expenditures are overestimated, there is fiscal illusion. The nature of politics thus creates a strange incentive system where governments reap more electoral rewards from deficits than from surpluses. If you buy Buchanan and Wagner’s explanation, the pro-spending tilt is easy to explain.
However, the empirical evidence for this is somewhat limited. For example, Alberto Alesina showed in the 1990s that he could not find empirical patterns confirming Buchanan and Wagner’s theorizing. But I have recent work (co-authored with Marcus Shera of George Mason University – a good graduate student of mine) that proposes a simple mechanism for observing whether the first condition for a pro-deficit/pro-spending tilt is present.
American presidents are incredibly mindful of their historical reputations. As I argued elsewhere, presidents treat historians as a constituency they want to cater to so as to be remembered as great. If historians reward deficit spending, presidents have at least some incentive to be fiscally imprudent. Phrased differently, such rewards from historians would suggest some divergence between what is fiscally prudent and what is politically beneficial.
Using the surveys of American presidents produced by C-SPAN and the American Political Science Association, Marcus and I found that there are strong rewards to engaging in deficit spending. Without any controls for the personal features of a president (e.g. war hero, scandal, intellect) or the features of a presidency (e.g. war, victory in war, economic growth), an extra percentage point in the deficit-to-GDP ratio is associated with a strong positive reward to a president (see table below). Once controls are introduced, the result remains: there are strong rewards from engaging in deficit spending.
Thus, at any time, a president who is mindful of his place in the history books would be tempted to engage in deficit spending. While Marcus and I are somewhat cautious in the paper, I do think that we are presenting a “lower-bound” case for a pro-deficit bias. Indeed, one could think that the hindsight of history would lead to greater punishment for fiscal recklessness. After all, historians are not like voters – their time horizons for evaluating a presidency are clearly not as short. If that is the case, one should expect historians to be less likely to reward deficits. And yet, they seem to do so – which is why I argue this is a lower-bound case.
In other words, Joe Biden might simply believe that the extra spending will secure him a place in history books. If other presidents are any indication, he is making a good bet.
Recently, I’ve been buying a lot more non-durable goods when they are on sale. Whereas previously I might have purchased the normal amount plus one or two units, now I’m buying like 3x or 4x the normal amount.
What initially led me here was the nagging thought that a 50%-off sale is a superb investment – especially if I was going to purchase a bunch eventually anyway. I like to think that I’m relatively dispassionate about investing and finances. But I realized that I wasn’t thinking that way about my groceries. The implication is that I’ve been living sub-optimally. And I can’t have that!
If someone told me that I could pay 50% more on my mortgage this month and get a full credit on my mortgage payment next month, then I would jump at the opportunity. That would be a 100% monthly return. Why not with groceries? Obviously, some groceries go bad. Produce will wilt, dairy will spoil, and the fridge space is limited. But what about non-perishables? This includes pantry items, toiletries, cleaning supplies, etc.
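The mortgage analogy and the sale both come down to the same one-period return calculation. A minimal sketch, using a hypothetical payment amount of my own choosing (the 50% figures come from the post, the $1,000 payment does not):

```python
# The mortgage analogy: pay 50% extra this month, get next month's
# payment fully credited. The extra outlay earns 100% in one month.
payment = 1000.0                    # hypothetical monthly payment
extra_paid_now = 0.5 * payment      # the 50% premium paid this month
saved_next_month = payment          # next month's payment, fully avoided
one_month_return = saved_next_month / extra_paid_now - 1
print(one_month_return)             # 1.0, i.e. a 100% monthly return

# Same logic for a 50%-off sale: each unit bought now at half price
# replaces a full-price purchase I was going to make anyway.
sale_price = 0.5
full_price = 1.0
print(full_price / sale_price - 1)  # 1.0 -> 100% return on the cash outlaid
```

The key assumption in both cases is the same one the post flags: the full-price purchase really would have happened otherwise.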
Typically, there are two challenges for investing in inventory: 1) Will the discount now be adequate to compensate for the opportunity cost of resources over time? 2) Is there an opportunity cost to the storage space?
For the moment, I will ignore challenge 2). On the relevant margins, my shelf will be either full or empty. I’ve got excess capacity in my house that I can’t easily adjust or lend out. That leaves only challenge 1).
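Challenge 1) can be framed as a simple present-value comparison. This is my own formulation of the break-even test, not the author’s, and the 7% annual rate is an assumed stand-in for whatever return the cash could earn elsewhere:

```python
# Break-even test for challenge 1): does buying at a discount today beat
# investing the cash and buying at full price when I'd need the item anyway?
# (Illustrative formulation; the 7% opportunity-cost rate is assumed.)
def discount_beats_opportunity_cost(discount, annual_rate, months_held):
    """True if paying (1 - discount) now is cheaper than the present value
    of a full-price purchase `months_held` months from now."""
    cost_now = 1.0 - discount                       # fraction of full price paid today
    growth = (1 + annual_rate) ** (months_held / 12)
    cost_later = 1.0 / growth                       # PV of full price paid later
    return cost_now < cost_later

# A 50%-off sale beats a 7% annual return even if I wouldn't have needed
# the item for five years.
print(discount_beats_opportunity_cost(0.50, 0.07, 60))  # True
# A 2% discount on an item I'd buy in a year doesn't cover the forgone returns.
print(discount_beats_opportunity_cost(0.02, 0.07, 12))  # False
```

On this framing, deep discounts on non-perishables clear the bar by a wide margin, which is consistent with the post’s intuition that a 50%-off sale is a superb investment.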