I’m reading The Property Species by Bart Wilson. I like chapter 4 “What is Right is Not Taken Out of the Rule, but Let the Rule Arise Out of What Is Right,” partly because I got to play a small part in this line of research.
Along with several coauthors, Bart Wilson has run experiments in which players have the ability to make and consume goods. According to the instructions that all players read at the beginning of the experiment, “when the clock expires… you earn cash based upon the number of red and blue items that have been moved to your house.”
Property norms can emerge in these environments, and sometimes subjects take goods from each other in an action that could be called “stealing.” The experimental instructions do not contain any morally loaded words like “stealing,” but subjects use that word to describe the activities of their counterparts.
Here is a conversation from the transcript of the chat room players can use to communicate while they produce and trade digital goods:
E: do you want to do this right way?
F: wht is the right way
E: the right way is I produce red you make blue then we split it nobody gets 100 percent profit but we both win
One hundred years ago, the British writer G.K. Chesterton traveled to the United States for a lecture tour. He published his observations of America in What I Saw in America (1922). In an essay titled “The American Businessman”, Chesterton notes with surprise how passionate Americans appear about their professional work.
Chesterton recognizes this enthusiasm for work as more than mere greed.
This is the intro to my latest article for the OLL Reading Room. I discuss the American work ethic and Chesterton’s prescient insight into American economic dynamism compared to Britain. (Relatedly, Alex on British stagnation this week.)
Here’s a fun bit of the book that I didn’t include in the OLL article. Chesterton wrote this about seeing New York City for the first time:
But there is a sense in which New York is always new; in the sense that it is always being renewed. A stranger might well say that the chief industry of the citizens consists of destroying their city; but he soon realises that they always start it all over again with undiminished energy and hope. At first I had a fancy that they never quite finished putting up a big building without feeling that it was time to pull it down again; and that somebody began to dig up the first foundations while somebody else was putting on the last tiles. This fills the whole of this brilliant and bewildering place with a quite unique and unparalleled air of rapid ruin.
Although many academic researchers don’t enjoy writing literature reviews and would like to have an AI system do the heavy lifting for them, we have found a glaring issue with using ChatGPT in this role. ChatGPT will cite papers that don’t exist. This isn’t an isolated phenomenon – we’ve asked ChatGPT different research questions, and it continually provides false and misleading references. To make matters worse, it will often provide correct references to papers that do exist and mix these in with incorrect references and references to nonexistent papers. In short, beware when using ChatGPT for research.
Below, we’ve shown some examples of the issues we’ve seen with ChatGPT. In the first example, we asked ChatGPT to explain the research in experimental economics on how to elicit attitudes towards risk. While the response itself sounds like a decent answer to our question, the references are nonsense. Kahneman, Knetsch, and Thaler (1990) is not about eliciting risk. “Risk Aversion in the Small and in the Large” was written by John Pratt and was published in 1964. “An Experimental Investigation of Competitive Market Behavior” presumably refers to Vernon Smith’s “An Experimental Study of Competitive Market Behavior”, which had nothing to do with eliciting attitudes towards risk and was not written by Charlie Plott. The reference to Busemeyer and Townsend (1993) appears to be relevant.
Although ChatGPT often cites non-existent and/or irrelevant work, it sometimes gets everything correct. For instance, as shown below, when we asked it to summarize the research in behavioral economics, it gave correct citations for Kahneman and Tversky’s “Prospect Theory” and Thaler and Sunstein’s “Nudge.” ChatGPT doesn’t always just make stuff up. The question is, when does it give good answers and when does it give garbage answers?
Strangely, when confronted, ChatGPT will admit that it cites non-existent papers but will not give a clear answer as to why it cites non-existent papers. Also, as shown below, it will admit that it previously cited non-existent papers, promise to cite real papers, and then cite more non-existent papers.
We show the results from asking ChatGPT to summarize the research in experimental economics on the relationship between asset perishability and the occurrence of price bubbles. Although the answer it gives sounds coherent, a closer inspection reveals that the conclusions ChatGPT reaches do not align with theoretical predictions. More to our point, neither of the “papers” cited actually exist.
Immediately after getting this nonsensical answer, we told ChatGPT that neither of the papers it cited exist and asked why it didn’t limit itself to discussing papers that exist. As shown below, it apologized, promised to provide a new summary of the research on asset perishability and price bubbles that only used existing papers, then proceeded to cite two more non-existent papers.
Tyler has called these errors “hallucinations” of ChatGPT. Hallucination might be charming in a more artistic pursuit, but in research we find this form of error concerning. Although there will always be room for improving language models, one thing is very clear: researchers should be careful. This is something to keep in mind, also, when serving as a referee or grading student work.
I have been investigating how to get more talent in the tech industry for a while. There is not a lot of data on precisely how people select into tech and what might cause more people to train for in-demand jobs. Gordon Macrae, in his substack The View, has a recent relevant post Issue #9: Tracking 100 bootcamp graduates from 2015.
Gordon ran his own survey of 100 graduates of coding bootcamps. Coding bootcamps are a fascinating institution that helps fill the skills gap. They are not well understood, and we don’t have much publicly available data of the sort that helps researchers measure the outcomes of a traditional college education.
Here are some of his results from this preliminary survey:
Of this total, 68% of the graduates surveyed in 2022 were doing roles where the bootcamp was necessary for them to work in that role. What I found fascinating, though, was that this figure varied wildly depending on the bootcamp they attended.
On the lowest end, just 50% of graduates from Bootcamp A were doing jobs in 2022 that required having gone to a bootcamp. Conversely, 90% of Bootcamp D graduates were working in technical roles seven years after graduating.
What is more, the percentage of bootcamp graduates in technical roles at 7 years after graduation has gone down by 15%. The average immediately after graduation was 82% working in a technical role.
English philosopher G.K. Chesterton traveled to America for a lecture tour. His observations are recorded in What I Saw in America (1922).
The book is not primarily about Prohibition, nor is it mostly critical of America. He did, however, devote one of his essays to Prohibition.
This was 100 years ago, so start with this summary of the facts from Britannica:
Prohibition, legal prevention of the manufacture, sale, and transportation of alcoholic beverages in the United States from 1920 to 1933 under the terms of the Eighteenth Amendment. Although the temperance movement, which was widely supported, had succeeded in bringing about this legislation, millions of Americans were willing to drink liquor (distilled spirits) illegally…
Chesterton clearly is not a teetotaler, and I will not argue for or against temperance here. What was counterproductive about Prohibition is that elites passed a law that they would not abide by themselves.
Consider the decision by an individual to drink or not drink. For many people, drinking is social. If your friends are meeting at a bar, then you will drink at the bar to be with them. If your friends are going hiking with water bottles, then many people can pass the day without alcohol happily. We can model a game called Meeting Friends that has multiple equilibria.
Borrowing from Myerson (2009):
In such games, Schelling argued, anything in a game’s environment or history that focuses the players’ attention on one equilibrium may lead them to expect it, and so rationally to play it. This focal-point effect opens the door for cultural and environmental factors to influence rational behavior.
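Schelling’s point can be made concrete with a toy version of the Meeting Friends game. The payoffs below are hypothetical numbers chosen purely for illustration; the code simply checks which outcomes are Nash equilibria (no player gains by deviating alone), and it finds two, which is why a focal point is needed to pick between them.

```python
# Enumerate pure-strategy Nash equilibria of a 2x2 coordination game.
# Payoffs are hypothetical: both players just want to be where their friend is.

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("Bar", "Bar"): (2, 2),
    ("Bar", "Hike"): (0, 0),
    ("Hike", "Bar"): (0, 0),
    ("Hike", "Hike"): (2, 2),
}
actions = ["Bar", "Hike"]

def is_nash(a_row, a_col):
    """Neither player can gain by unilaterally switching actions."""
    u_row, u_col = payoffs[(a_row, a_col)]
    row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in actions)
    col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # → [('Bar', 'Bar'), ('Hike', 'Hike')]
```

Nothing in the game itself selects between the two equilibria; culture, history, or a law that elites visibly follow can serve as the focal point.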
There was an opportunity for American elites to move social life to a new focal point after the 18th Amendment was passed. They could have led by example. Laws that do not follow norms cause problems; consider today’s large prison population incarcerated for drug offenses. In 2021, I wrote about why attempts at drug prohibition helped the Taliban defeat the US coalition in Afghanistan.
Game Theory and Behavior is extremely readable. Carpenter and Robbett have a great set of examples (e.g. the poison drink dilemma from The Princess Bride). I think the book has been developed from teaching a course that resonates with undergraduates today. The authors are both experimental economists, so there is natural integration with lab results from experiments with games.
Topics covered include:
Game Theory and standard definitions
Solving Games
Sequential Games
Bargaining
Markets
Social Dilemmas
Voting
Behavioral Extensions of Standard Theory
In their words:
This book provides a clear and accessible formal introduction to standard game theory, while at the same time addressing how people actually behave in these games and demonstrating how the standard theory can be expanded or updated to better predict the behavior of real people. Our objective is to simultaneously provide students with both the theoretical tools to analyze situations through the logic of game theory and the intuition and behavioral insights to apply these tools to real world situations. The book was written to serve as the primary textbook in a first course in game theory at the undergraduate level and does not assume students have any previous exposure to game theory or economics.
Not every book on game theory could be described as extremely readable. To be clear, the book is rigorous: the authors present mathematical concepts, worked solutions, and practice problems. But they present game theory primarily as an intuitive and important framework for decisions rather than as a mathematical object, which should go over well with most undergraduate students.
The following are questions that occurred to me as I was writing this post, with ChatGPT replies.
Here is a cinematic modern take on a very old song. Wikipedia: “The 1851 translation by John Mason Neale from Hymns Ancient and Modern is the most prominent by far…” And, “The hymn has its origins over 1,200 years ago in monastic life in the 8th or 9th century.”
Once undergraduates have learned the basics of interpreting regression results, we would like to introduce them to the world of economics research papers. Reading these papers helps reinforce the statistical concepts, and it also gives them access to the insights in the literature.
Many empirical papers in economics are too long or too difficult to assign to undergraduates, especially if the course is focused more on analytics than economics specifically. Here I provide materials and instructions for teaching two published econ articles to undergraduates. Assume the students have learned the basics of interpreting a regression model (perhaps from a course textbook) but have had few opportunities to apply these skills or engage with the scientific literature.
“The Effects of Attendance on Student Learning in Principles of Economics” is only 4 pages long! Students do not need to read past page 7 of “My Reference Point, Not Yours” to answer the reading guide questions. So, these readings can be assigned outside of class, but I did some of the reading during our class period.
Handing out printed copies of at least one of the papers and my guided questions can make a good classroom activity. If students do not have experience reading tables of regression results, it can be useful to do it together in person.
The questions in the reading guide help students to identify the main variables and hypotheses. Then, students are asked to pull specific results from the tables in the papers. You can customize this list of questions by deleting lines if you do not want to discuss issues like non-linear effects or the null hypothesis.
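For instructors who want a warm-up before the papers, here is a minimal sketch of the kind of regression output students will be asked to interpret. The data are simulated, not from either assigned article, and the “attendance effect” of 25 grade points is a number I made up for the example.

```python
# Simulate attendance data and fit a one-variable OLS regression by hand.
# Hypothetical example: the true effect of 25 points is invented for illustration.
import random

random.seed(0)
n = 200
attendance = [random.uniform(0.5, 1.0) for _ in range(n)]  # share of classes attended
# true model: grade = 60 + 25 * attendance + noise
grade = [60 + 25 * a + random.gauss(0, 5) for a in attendance]

# Closed-form OLS with one regressor: slope = cov(x, y) / var(x)
mean_x = sum(attendance) / n
mean_y = sum(grade) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(attendance, grade)) \
        / sum((x - mean_x) ** 2 for x in attendance)
intercept = mean_y - slope * mean_x

print(f"intercept = {intercept:.1f}, slope = {slope:.1f}")
```

The slope is the coefficient students would pull from a results table: here it estimates how many grade points are associated with moving from attending none of the classes to all of them, in this simulated data.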
I provide links below. First is the reading guide with about 30 short-answer questions about the two articles.
Link to download the reading guide that goes with both papers, starting with the shorter one.
3. Two web sources for “My Reference Point, Not Yours” (15 pages in total in the JEBO manuscript, but students do not need to read past page 7 for this exercise, and they can skip the Literature Review section)
The United States technology industry continues to struggle to recruit new talent. According to the US Bureau of Labor Statistics, the number of people employed in technology is not increasing quickly.
Tech jobs pay well and don’t have the drawbacks of some other in-demand jobs, such as the travel schedule of a truck driver or the physically taxing labor required in oil fields.
Tech jobs are sometimes touted as a guarantee of having a comfortable and rewarding career, but the reality is not that simple.
Economics suggests that high wages would eliminate labor shortages, but that’s not the case in tech work. Why?
In this paper, authors Joy Buchanan and Henry Kronk propose a set of factors that have been overlooked and apply broadly to the tech sector.
Individuals with high-status tech jobs report burnout, anxiety, depression, and other mental health issues at higher rates than the general population. They also have to deal with the constant threat of becoming obsolete. Because technology changes so quickly, they must constantly work to update their skills in order to remain competitive.
The authors offer several recommendations for tech companies, educators, and policymakers:
Political and community leaders can provide more accurate messaging, such as communicating clearer expectations about the difficulties of entering the tech workforce.
The tech industry could benefit from improvements in computer education. The authors cite a need for more pre-college exposure to computer occupations as well as a need to add communication skills to computer science curriculums.
Teachers, parents, and tech companies can all find ways to inform young people at an age-appropriate level about opportunities. Computer science is abstract and hard to understand. Young people who have some exposure to computer science through a class or camp are more likely to become CS majors in college.
Company leaders can improve their recruitment and development strategies to reflect the labor market realities including paying enough to compensate employees for the mental challenges of demanding technical work and alleviating their own talent shortages by investing in training and education.
Tech companies may be able to attract more women and minorities by improving their scheduling and management practices.
Henry and I examined public data and the existing literature to get a better understanding of the current state of knowledge on this issue. I hope our paper is helpful; however, we partly just highlight how many open questions remain about tech and talent.
In the Fall of 2020, I blogged about how I introduce students to text mining, as part of a data analytics class.
Could Turing ever have imagined that a human seeking customer service from a bank could chat with a bot? Text mining may be a bigger challenge than chess, but it only took about a decade longer for a computer (IBM’s Watson) to beat a human at Jeopardy. Winning Jeopardy requires the computer to get meaning from a sentence. Computers have since moved well beyond playing a game show into natural language processing.
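In class, text mining starts much more modestly than Watson. A bag-of-words count like the one below is the kind of first exercise I mean; the sample sentence is my own, and any longer text could be dropped in.

```python
# A first text-mining exercise: turn raw text into word counts,
# the bag-of-words representation behind a lot of early NLP.
from collections import Counter
import re

text = """Could a machine ever chat with a customer? A machine can count
words, and counting words turns out to be a surprisingly useful start."""

words = re.findall(r"[a-z']+", text.lower())  # tokenize: lowercase word runs
counts = Counter(words)

print(counts.most_common(3))  # the most frequent tokens in the sample
```

From here, the class can move to removing stop words like “a” and comparing word frequencies across documents.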
I told the students that “chat bots” are getting better and NLP is advancing. By July 2020, OpenAI had released a beta API playground to external developers to play with GPT-3, but I did not sign up to use it myself.
In April of 2022, I added some slides inspired by Alex’s post about the Turing Test that included output from Google’s Pathway Languages Model. According to Alex, “It seems obvious that the computer is reasoning.”
This week in class, I did something that few people could have imagined 5 years ago. I signed into the free new ChatGPT in class and typed in questions from my students.
We started with questions that we assumed would be easy to answer:
Then we were surprised that it answered a question we had thought would be difficult:
And then we asked two questions that prompted the program to hedge, although for different reasons.
It seems like the model is smarter than it lets on. For now, the creators are trying hard not to offend anyone or get in the way of Google’s advertising business. Overall, the quality of the answers is high.
Because of when I was born, I believe that something I have published will make it into the training data for these models. Will that turn out to be more significant than any human readers we can attract?
This isn't even GPT-4, these are just the breadcrumbs, just wait for dessert: https://t.co/Avwdz0ZhfN