Bryan Caplan recently wrote about public goods theory, how we teach it, and the unrealistic way we classify goods as either/or rather than on a continuum. I explored similar themes in a blog post that I wrote back in January, but Caplan brings up another important point about public goods theory that I had forgotten.
In a short 2002 paper, and then in a 2003 book with the same title, Foldvary and Klein proposed the idea of “the half-life of policy rationales.” In brief, the justification for many market failure arguments is contingent on the current state of technology. They apply this to concepts such as natural monopoly and information asymmetries, but for public goods theory the most important application is to the concept of excludability.
Here’s the basic idea: it is costly to exclude non-payers from using some goods. If it is so costly that it would not be profitable for a private enterprise to produce the good in question, it won’t be produced privately. But it still may be efficient for government to produce the good, if the benefit from the good exceeds the cost of raising the revenue to pay for it (likely out of general revenue, since we have already admitted it is infeasible to charge the users directly).
But here’s the Foldvary and Klein point: all of the above paragraph is dependent on the current state of technology! Take roads for example. When you had to pay someone to physically take a few coins for a toll road, plus force all motorists to slow down to a complete stop to pay the toll, it was probably cost prohibitive to operate limited-access private toll roads. But technology changes. We now have the technology for electronic tolling done at highway speed (and even coin buckets were slightly faster than handing some dude your change). The argument for government provision of highways, which was strong when technology was ancient, is significantly weakened now that technology has reached its modern state.
(There may be lots of other reasons you think that roads should be publicly provided, such as equity, but these are separate questions and distinct from the argument made in standard public goods theory.)
Foldvary and Klein go through many more examples in their book, but we can already see the key insight. And I think this is extremely important for teaching public goods to undergraduates. It’s normal for us to say that goods are either excludable (in which case private provision is best) or non-excludable (in which case there is a strong case for some government intervention). But this either/or framing is wrong (a continuum is a better way to think about it), and crucially a good’s position can change over time with technology. Excludability is not some inherent feature of a good or service; it is a function of the state of technology.
Someone wrote a story about my life. It’s a report from The Verge called “File Not Found: A generation that grew up with Google is forcing professors to rethink their lesson plans”.
When I started teaching an advanced data analytics class to undergraduates in 2017, I noticed that some of them did not know how to locate files on a PC. Something that is unavoidable in data analytics is getting software to access data from a storage device. It’s not “programming” nor is it “predictive analytics”, but you can’t get far without it. You need to know what directory to point the software to, meaning that you need to know what directory contains the data file.
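That step, pointing software at the directory that contains a data file, looks something like the following minimal Python sketch. The file and directory names are invented, and the example writes a small file to a temp directory first so that it is self-contained:

```python
import csv
import tempfile
from pathlib import Path

# A hypothetical setup: the data file lives at a specific path on disk,
# and the program must be pointed at that path explicitly -- it cannot
# search for the file on its own.
data_dir = Path(tempfile.mkdtemp())   # stands in for e.g. a Downloads folder
data_file = data_dir / "grades.csv"   # the file the software needs to find

# Create a small illustrative file so the example runs anywhere.
data_file.write_text("student,score\nalice,90\nbob,85\n")

# The crucial step students struggle with: telling the program exactly
# which directory contains the file, rather than "searching" for it.
with open(data_file, newline="") as f:
    rows = list(csv.DictReader(f))

print(rows[0]["student"])  # -> alice
```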
As the article says:
the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students. It’s the idea that a modern computer doesn’t just save a file in an infinite expanse; it saves it in the “Downloads” folder, the “Desktop” folder, or the “Documents” folder, all of which live within “This PC,” and each of which might have folders nested within them, too. It’s an idea that’s likely intuitive to any computer user who remembers the floppy disk.
I am a long-time PC user. Navigating File Explorer is about as instinctive as drinking a glass of water for me. The so-called digital natives of Gen Z have been glued to mobile device screens that shield them from learning anything about computers.
Not everyone needs to know how computers work. I myself only know the layer that I was forced to learn.
My Dad, to whom I owe so much, kept a Commodore 64 in a closet in our house. About once a year, he would try to entice me into learning how to use it. I remember screwing up my 9-year-old eyes and trying to care. Care, I could not. It’s hard to force yourself to do extra work without a clear goal. The Verge article explains:
But it may also be that in an age where every conceivable user interface includes a search function, young people have never needed folders or directories for the tasks they do. The first internet search engines were used around 1990, but features like Windows Search and Spotlight on macOS are both products of the early 2000s. Most of 2017’s college freshmen were born in the very late ‘90s. They were in elementary school when the iPhone debuted; they’re around the same age as Google. While many of today’s professors grew up without search functions on their phones and computers, today’s students increasingly don’t remember a world without them.
One area in which I do minimum archiving is my email. I rely heavily on the search function. I could spend time creating email folders, but I’m not going to put in the time unless I’m forced to.
Here’s where the “problem” lies:
The primary issue is that the code researchers write, run at the command line, needs to be told exactly how to access the files it’s working with — it can’t search for those files on its own. Some programming languages have search functions, but they’re difficult to implement and not commonly used. It’s in the programming lessons where STEM professors, across fields, are encountering problems.
Regardless of source, the consequence is clear. STEM educators are increasingly taking on dual roles: those of instructors not only in their field of expertise but in computer fundamentals as well.
Personally, I don’t mind taking on that dual role. I didn’t learn to program until I really wanted to. The only reason I wanted to was that I had discovered economics. I wanted to be able to participate in social science research. Let these STEM or business courses be the motivation for students to learn to use computers as tools instead of just for entertainment.
Allen Downey wrote a great blog on this topic back in 2018 that is more practical for teachers than the Verge report. He argues that learning to program will be harder for the 20-year-olds of today than it was for “us” (old people as defined by entering college before 2016). He recommends a few practical strategies, while acknowledging that there is “pain” somewhere along the process. He thinks it is sometimes appropriate to delay that pain by using browser-based programming interfaces, in the beginning.
I gave my students a break from pain this week with a little in-browser game that you can play at https://www.brainpop.com/games/blocklymaze/. They got 10 minutes to forget about file paths, and then it was back to the hard work.
I have found that a lot of students need individual attention for this step: finding a file on their hard drive. I only have to do that once per student. Students pick the system up quickly. File Explorer is a pretty user-friendly mechanism. Everyone just has to have a first time. Sometimes, Zoomers just need a real person who cares about them to come along and say, “The file you downloaded exists on this machine.”
One way around this problem is to reference data that lives on the internet instead of in a local machine. If you are working through the examples in Scott Cunningham’s new book Causal Inference, here’s a piece of the code he provides to import data from his public repository into R.
The nice thing about referencing data that is freely available online is that the same line of code will work on every machine as long as the student is connected to the internet.
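Since Cunningham’s actual R snippet isn’t reproduced here, here is a sketch of the same pattern in Python. The helper names and the URL in the comment are mine, purely for illustration; with pandas the whole thing collapses to a single `pd.read_csv(url)` call:

```python
import csv
import io
from urllib.request import urlopen

def parse_csv(text):
    """Parse CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def load_remote_csv(url):
    """Fetch a CSV from a public URL. The same line works on every
    machine with an internet connection -- no local file paths needed."""
    with urlopen(url) as response:
        return parse_csv(response.read().decode("utf-8"))

# Hypothetical usage:
# rows = load_remote_csv("https://raw.githubusercontent.com/.../data.csv")
```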
As more and more of life moves into the cloud, technologists might increasingly be pointing programs to a web address instead of the /Downloads folder on their local machine. Nevertheless, the kids need to have a better sense of where files are stored. Those who understand file architecture are going to get paid a lot more than peers who only know how to poke and scroll on a smartphone.
There is a future scenario in which AI does most of the programming for us. When AI can fetch files for us, then File Explorer may seem obsolete. But I worry about a world in which fewer and fewer humans know where their information is stored.
Bo Burnham is a comedian and musician who, like so many of the artists I enjoy, produces art that I can only describe as extremely specific to him. His newest special on Netflix features a song, “Welcome to the Internet” (some NSFW lyrics), that I liked so much I thought it was worth writing as a formal model.
No, really. Hey, we all need a hobby.
The whole song is a meditation on the overwhelming nature of the internet and is, in my opinion, fantastic. I think if we zero in on two pieces of refrain in the lyrics, we can get some traction on what Burnham believes is the underlying problem, if not outright crisis, that resides within the internet and those who are “extremely online”:
First, the lure:
Could I interest you in everything?
All of the time?
A little bit of everything
All of the time
This is the value-add of the internet and why we can never, and will never, leave it behind willingly. This is also the “cognitive overload” hypothesis of why the internet is bad. Sure, for the infovores of the world there hasn’t been a bigger technological shift since the printing press, but there certainly exists the possibility that most human minds (if any) aren’t built to handle the deluge of information they are drowning in. That’s one theory, but I think that’s the kind of problem that isn’t actually a problem. Some will consume more of the internet, some will consume less, c’est la vie.
It’s in the second half of the refrain, however, that we see the actual problem.
Apathy’s a tragedy
And boredom is a crime
Anything and everything
All of the time
And therein lies the rub. You can’t opt out. But is that true? Well, that depends on who you are and how you live your best life, i.e., how you optimize your utility function. So let’s do it. Let’s write down the utility function that lives inside the song. What we’re going to do is this: lay out the simple components in natural language, turn them into formal math, and then bring it back to natural language.
In our Burnhamian model, people need two things: Private goods, like food and shelter, and Social goods, like friendship and camaraderie. How much Utility you enjoy will always be increasing in both, but the optimal mix will depend on your constraints (wealth, time, accessible population) and the mathematical function determining how much Utility you get from a mix of Private and Social goods, i.e. whether they are additive, multiplicative, or something else. Utility equal to zero is equivalent to death.
Let’s add one last layer of complexity. Let’s say that your Social goods are a function of two kinds of elements: Friends and Clubs. Friends are direct, one-to-one relationships. Clubs are large social groups. We will define and differentiate between the two as such: if you cease to be part of a friendship (whether between 2, 3, or 5 people), then that friendship no longer exists in the same form. If you drop out of a club, on the other hand, that club will persist without you.
So what a person has to do is, within their constraints, try to optimize how much of their resources they invest in their Private goods, their Friends, and their Clubs.
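The optimization just described can be written as a nested CES utility function. This is my reconstruction in LaTeX, with $P$ for Private goods, $S$ for Social goods, $F$ for Friends, and $C$ for Clubs; the original post’s exact rendering may differ:

```latex
% Base model: CES utility over Private (P) and Social (S) goods
U = \left( P^{\alpha} + S^{\alpha} \right)^{1/\alpha}

% Expanded model: Social goods are themselves a CES aggregate of
% Friends (F) and Clubs (C)
U = \left( P^{\alpha} + \left[ \left( F^{\beta} + C^{\beta} \right)^{1/\beta} \right]^{\alpha} \right)^{1/\alpha}

% Reduced "story" form: each sign is either + (substitutes)
% or \times (complements)
U \sim P \;[+\ \text{or}\ \times]\; \left( F \;[+\ \text{or}\ \times]\; C \right)
```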
The first line is our base model, the second is an expanded version with our two-input model of Social goods. The function we are using is called a Constant Elasticity of Substitution utility function. The key parameter, α, determines how Private and Social goods interact. If α=1, then they are what economists call perfect substitutes. All that matters is how much you have in total of the two inputs, and if you want you could specialize in just one of them. They are perfectly additive. If, on the other hand, α=-∞ (negative infinity), then they are perfect complements, like right and left shoes. There is no point in adding even one more unit of Private goods until you have another unit of Social goods to pair with it. In a sense, they are multiplicative, meaning if either value is zero, then your utility is zero. The value of α will tell us whether the best life requires more of a mix of Social and Private inputs (if they are more complementary), or simply the most of whatever is the easiest to come by (if they are good substitutes for each other).
We’ve nested in our Friends and Clubs production of Social goods as a CES function within the second equation, with a similar story, only here β will determine how much of a mix of Friends and Clubs we want, or whether we can specialize more in one over the other. In the third and last line of the model, we’ve reduced it down to the underlying questions that will tell our story represented by addition and multiplication signs:
1. Are Private and Social goods complements (multiplicative) or substitutes (additive) when we internally produce utility?
2. Are Friends and Clubs complements or substitutes when we internally produce our Social goods?
Assumption 1: α = -0.1. Private and Social goods are weak complements. What this means is that there are diminishing returns to Private and Social goods: you need some of both, but you can get by with less of one or the other. Let’s just assume wealthy people need other people in their lives to stay sane while, at the same time, people with rich social lives and supportive communities still need food and shelter. You can specialize a bit more on one side, depending on what’s available, but you can’t live without at least some of both.
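To see what α = -0.1 does, here is a small numerical sketch. This is my own illustration, and the equal CES weights of 0.5 are an assumption made for readability:

```python
# Constant Elasticity of Substitution utility over Private (p) and
# Social (s) goods, with equal weights of 0.5 (an assumption).
def ces(p, s, alpha):
    return (0.5 * p**alpha + 0.5 * s**alpha) ** (1 / alpha)

# Perfect substitutes (alpha = 1): only the total matters,
# so a balanced bundle and a lopsided bundle are equally good.
assert ces(10, 10, 1) == ces(19, 1, 1) == 10.0

# Weak complements (alpha = -0.1): a balanced mix beats a lopsided one
# even though both bundles sum to 20.
balanced = ces(10, 10, -0.1)   # ~ 10.0
lopsided = ces(19, 1, -0.1)    # ~ 3.9
print(balanced > lopsided)     # -> True
```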
We’re all different in how we build our social lives and, in turn, how we internalize the internet in our lives. I think we can gain some insight into this process by working out the stories in this simple model through our second parameter, β. Let’s consider three broad types of people.
Person Type 1: Friends and Clubs are Strong Substitutes (high β)
With regards to our original question, people who hyperspecialize in their club and club identity will be constantly contributing grist to the club’s identity: evidence of the necessity of the club and its mission, rage at non-members, disappointment in members who aren’t committed enough, and constant vigilance in the monitoring of everyone else’s commitment. They are in it, they are of it, and they are ready to purge.
Apathy’s a tragedy (You must care about everything the club cares about)
And boredom is a crime (All of your time must be allocated to the club)
Anything and everything
All of the time
Type 2: Friends and Clubs are strong complements (low β)
These are the people that I think Burnham’s song is targeted at, for whom he has the most sympathy, and with whom I suspect he would count himself. These are people for whom the internet is the most taxing, the most exhausting to navigate.
Type 2 folks want to have personal friendships and friend groups while still feeling a part of something bigger, whether it’s a community, a political movement, or spiritual affiliation. Type 2 people will have preferences towards one or more social identities manifested as clusters on the internet, but they don’t want to purge people who don’t share those preferences from their circle of friends. Type 2 folks are interested in civil rights and social justice, but they want to diversify their emotional and material resources across their personal relationships and private wellbeing as well.
The deluge of the internet, with its stark images, focus on extreme outcomes, battle cries, and public reputation mauling, is constantly admonishing and shaming Type 2’s. Type 2 people are tired. Perhaps most importantly, the pandemic has been especially hard on Type 2’s. While Type 1 club-specialists have thrived by focusing the totality of their efforts on the online arena, their voices have been tearing the Type 2 social-portfolio diversifiers to shreds.
Type 3: Friends and Clubs are weak complements (middle β)
Type 3 people are a lot like Type 2’s, but it is easier for them to compartmentalize the production of their social goods. Type 3 people are often in clubs, but they are rarely of clubs. They’re not joiners. Whether you’re looking at sacrifice-demanding religious cults or extremely-online political culture warriors, if the social associations of the world demand too much of Type 3 people, they are happy to half-ass their contribution or opt out entirely. They might be on Twitter or Facebook, but they don’t need to reply to anyone. They might go to church on Sunday with the family, but if the minister tells them their sister is going to hell for her sexual preference, it’s just not that costly to stop going. For them clubs will always remain a luxury good, never a necessity.
To be clear, this post is an exercise in building a toy model of something big, complex, and important. It’s a gross abstraction and shouldn’t be taken too seriously. The process of formalizing your thinking on a social mechanism, however, is something that I think you should take very seriously. Formal models are useful because there is no hiding what your idea actually is. There’s no “sorry, you misread me” or reliance on obscure jargon. Formal models force you to clarify and reveal your thinking to everyone, including yourself. They can open up new avenues for exploration and even generate empirically testable predictions. Formal models have in many ways been the principal force behind economic imperialism in the social sciences. Not because the math is perfect or all-encompassing or even correct. It’s because it’s all out there, ready to be judged and dissected and tested. That transparency makes it useful.
I don’t know if my interpretation of Bo Burnham’s theory of the internet is correct or even necessarily what he intended it to be. But this is one way we can take it a step forward and see what we can actually learn from it. Which is pretty much all I want to do for the rest of my research life, on every topic, all of the time.
When there’s only one employer in town who hires for jobs like yours, they have labor-market power, and can pay less and offer worse working conditions than a competitive firm would. Economists call this “labor market monopsony”, but I like the term “employer power”, which is simpler and makes sense when there are a few employers as well as when it’s literally just one. This keeps down the wages of machinists at the only factory in town, nurses at the only hospital in town, and professors at the only university in town.
Of course, workers in this situation could always move and get a better job elsewhere, and this does put some limits on employer power, but many workers have strong preferences to stay in their home, which means the balance of power is with the employers. Or at least, it has been.
The growth of remote work means that workers can get jobs all over the world (or at least all over nearby time zones) without having to leave their town. Which means that monopsony is over, at least for jobs where remote work is possible.
I’m going up for tenure at my college soon, meaning that by next June they will tell me either that I have a job for life or that I’m fired. This “up or out” system naturally causes a lot of anxiety for professors. Partly this is because many professors’ identities are wrapped up in our jobs to an unnecessary and unhealthy extent, and so we take it as a judgement on our worth as human beings. But partly there was always the very practical problem that failing tenure almost certainly meant you would either need to move, accept a substantially worse job, or both.
The thinness of the academic labor market means that unless you live in a major city, it’s probably the case that no university nearby is hiring tenure-track academics in your subfield this year; and even if you are in a major city, there are probably only 2-3 searches in your field, and they will be so competitive that you almost certainly won’t get the job. To have a real chance at another good academic job, people need to apply nationwide (when I got my first job I sent out 120 applications all over the country to get 1 offer). Getting another job locally generally means taking a job with much worse pay, worse conditions, or both, like high school teacher, adjunct professor, or entry-level business analyst. Those in relatively practical fields like economics were able to get decent jobs outside of academia (PhD economists in private sector and government jobs typically earn better salaries than academics, at the cost of working more hours with less freedom), but such jobs were plentiful only in a few major cities (DC, SF, NYC, Boston), which usually still meant moving. Even in a mid-sized state capital like Providence, I don’t think I’d have an easy time finding something here. Or I didn’t think so, until remote work became ubiquitous last year.
Now I won’t be losing any sleep over the possibility of losing my job next year. Partly I think my odds of getting tenure are good, but even a 1% chance of losing my job would have been worrisome in the pre-remote world. Now instead of worrying I just think about the huge range of opportunities in tech, finance, consulting, business, think tanks, and even government. Remote also addresses one big reason I ignored those jobs in the first place and only applied in academia- flexibility. I didn’t want to be stuck in an office 40+ hours/wk; I wanted to be able to pick my kids up from school. Now flexible hours and the ability to be evaluated on output rather than time spent at the office seem to be increasingly common.
To the extent that remote work puts a dent in employer power, we would expect to see higher employment, higher wages, and fewer people feeling trapped in their jobs. We’ve seen all of these in 2021- quits in particular are at an all-time high, a good sign that workers don’t feel trapped- though much of this could simply be due to the rapid economic recovery. The real test will come when we see how much this is sustained past the initial recovery, and whether it is mainly in remote-able jobs or is a broad improvement.
We picked up a yard sale book: People and Places: A Random House Tell Me About Book.* When I saw that the U.S.S.R. was a huge swath across the northern hemisphere (drawn as a Mercator projection), I checked the publication date. It was published in New York in 1991 by Random House.**
This content would have been considered uncontroversial knowledge for children. It was written by Boomers for Millennials, one year before The End of History came out.***
The first fact discussed is that the earth had about 5 billion people and they saw no end to population growth. The book states that the world could be up to 15 billion people within 60 years (which would be 2050). Today, it is predicted that world population will peak soon and then decline. Fertility rates in most rich countries are currently below replacement and birth rates are falling everywhere. I guess the authors didn’t see that coming.
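As a quick sanity check on the book’s projection (my arithmetic, not the book’s), the implied growth rate is easy to back out:

```python
# What constant annual growth rate turns 5 billion people into
# 15 billion in 60 years? Solve 5 * (1 + r)^60 = 15 for r.
implied_rate = (15 / 5) ** (1 / 60) - 1
print(f"{implied_rate:.2%}")  # roughly 1.85% per year, sustained for six decades
```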
On the next page is a matter-of-fact explanation that A.D. stands for Anno Domini. If there was a new edition printed today, they would likely follow the academic trend of using BCE/CE, to avoid referencing religion.
Much of the book is about culture, with illustrations. In today’s terminology, this might be considered an attempt at color-blindness. All of the major world religions are presented next to each other with a neutral/positive spin on each. Racial and gender representation is carefully balanced, like the stock images I grew up with in American public school.
Considering how many students were forced to learn remotely this year, I liked the section on the Australian School of the Air. Remote farm children talked to a teacher by radio and sent written work by mail.
At the end is the answer to, “How will we live in the future?” Jeff Bezos might be happy to know that they predict space travel will be more common and people will live in space colonies. The stated reason for space colonization was the predicted unrelenting population growth. There wasn’t a hint of pessimism about, for example, global warming.
Their diagram of a futuristic house has a “Main computer” prominently featured. They predicted that computerized machines would do more work for humans, which has already happened in the past 30 years. The idea of mobile computers and internet services was probably not considered. They imagined house-bound clunky robots that could follow simple instructions.
We all recognize that in the Internet Age, it is easy to communicate and to access information.
For the infovores, this is a cause for celebration.
Others worry that this leads to “information overload”, and to the spread of “disinformation” and “misinformation”. While this is clearly true, complaints about it typically seem to come from elites longing for the days when they had the only microphone, before the Revolt of the Public. It’s hard to banish “misinformation” without screening out differences of opinion and correct contrarians even if you want to, and for some, such “collateral damage” would in fact be the main goal. But clearly something is wrong with the current information environment.
The sample size for this study is only 36, so we should think of it as preliminary work toward understanding how people learn to program.
Their abstract, with emphasis added by me:
This experiment employed an individual differences approach to test the hypothesis that learning modern programming languages resembles second “natural” language learning in adulthood. Behavioral and neural (resting-state EEG) indices of language aptitude were used along with numeracy and fluid cognitive measures (e.g., fluid reasoning, working memory, inhibitory control) as predictors. Rate of learning, programming accuracy, and post-test declarative knowledge were used as outcome measures in 36 individuals who participated in ten 45-minute Python training sessions. The resulting models explained 50–72% of the variance in learning outcomes, with language aptitude measures explaining significant variance in each outcome even when the other factors competed for variance. Across outcome variables, fluid reasoning and working-memory capacity explained 34% of the variance, followed by language aptitude (17%), resting-state EEG power in beta and low-gamma bands (10%), and numeracy (2%). These results provide a novel framework for understanding programming aptitude, suggesting that the importance of numeracy may be overestimated in modern programming education environments.
Learning Python, at least at first, is more like learning a foreign natural language than it is like doing arithmetic problems.
There are still many open questions in this area, so I see this paper as an important small step in the right direction. I have also done a study on this topic.
The Endless Frontiers Act passed the Senate Tuesday in a bipartisan 68-32 vote. What was originally a $100 billion bill to reform and enhance US research in ways lauded by innovation policy experts went through 616 amendments. The bill that emerged has fewer ambitious reforms, more local pork-barrel spending, and some totally unrelated additions like “shark fin sales elimination”. But it does still represent a major increase in US government spending on research and technology- and other than pork, the main theme of this spending is to protect US technological dominance from a rising China. One section of the bill is actually called “Limitation on cooperation with the People’s Republic of China”, and one successful amendment was “To prohibit any Federal funding for the Wuhan Institute of Virology”.
John Duffy and Daniela Puzzello published a paper in 2014 on adopting fiat money. I think of that paper when I hear the ever-more-frequent discussions of cryptocurrencies around me. To research the topic, I went to John Duffy’s website. There I found a May 2021 working paper about adopting new currencies in which they directly reference crypto. Before explaining that interesting new paper, first I will summarize the 2014 paper “Gift Exchange versus Monetary Exchange.”
Where are the computer jobs in the United States? When looking just at total numbers of jobs, three major population centers make it into the top 7 areas: NYC, LA, and Chicago. San Francisco is ahead of Chicago, while San Jose is behind Chicago. In terms of the total number of jobs, the D.C. area is ahead of any West Coast city. Is Silicon Valley not as central as we thought?
Here’s a map of the U.S. that isn’t just another iteration of population density.
When metropolitan areas are ranked by employment in computer occupations per thousand jobs, New York City no longer makes the top-10 list. San Jose, California reigns at the top, which seems fitting for Silicon Valley. The 2nd-ranked area will surprise you: Bloomington, IL. A region of Maryland and Washington D.C. shouldn’t surprise anyone. If you aren’t familiar with Alabama, would you expect Huntsville to rank above San Francisco on this list?
Huntsville, AL is not a large city, but it is a major hub for government-funded high-tech activity. The relatively small population there has disproportionately selected into high-tech jobs. As an example, I quickly checked a job website for listings in Huntsville. Lockheed Martin is hiring a “Computer Systems Architect” based in Huntsville.
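The concentration measure behind these rankings is simple to sketch. The job counts below are invented for illustration and are not the actual Census figures:

```python
# Illustrative (made-up) job counts for three metro areas.
metros = {
    "San Jose":   {"computer_jobs": 90_000,  "total_jobs": 1_000_000},
    "New York":   {"computer_jobs": 250_000, "total_jobs": 9_000_000},
    "Huntsville": {"computer_jobs": 14_000,  "total_jobs": 220_000},
}

def per_thousand(m):
    """Computer jobs per thousand total jobs -- a concentration measure."""
    return 1000 * m["computer_jobs"] / m["total_jobs"]

# Ranking by concentration instead of raw counts lets a small metro
# like Huntsville leapfrog a giant one like New York.
ranked = sorted(metros, key=lambda name: per_thousand(metros[name]), reverse=True)
print(ranked)  # -> ['San Jose', 'Huntsville', 'New York']
```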
Anyone familiar with Silicon Valley already knows that the city of San Francisco was not considered core to “the valley”. Even though computer technology seems antithetical to anything “historical”, there is in fact a Silicon Valley Historical Association. They list the cities of the valley, which does include San Francisco. (corrected an error here)
The last item reported on this Census webpage is annual mean wage. For that contest, San Francisco does seem grouped with the San Jose area, at last. The computer jobs that pay the most are in Silicon Valley or next-door SF. Those middle-of-the-country hotspots like Huntsville do not make the top-10 list for highest paid. However, if cost of living is taken into account, some Huntsville IT workers come out ahead.