A Wartime Natural Experiment About Copyright

One of the hardest questions in copyright policy is: “What would have happened otherwise?” When Disney lobbies for longer copyright terms or academic publishers defend high subscription fees, we struggle to evaluate their claims because we can’t observe the counterfactual. What would happen to creativity and innovation if we shortened copyright terms or lowered prices?

This is what makes Biasi and Moser’s 2021 study in the American Economic Journal: Microeconomics valuable. They examine a rare “natural experiment” from World War II – the Book Republication Program (BRP) – which provides insights into how copyright affects the spread and use of knowledge.

In 1942, the U.S. government allowed American publishers to reprint German scientific books without seeking permission from German copyright holders (though royalties were still paid to the U.S. government). This created a test case: German books suddenly became cheaper, while similar Swiss scientific books (Switzerland being neutral in the war) maintained their original copyright protection and prices.

This setup lets us answer the counterfactual question. What happens when you maintain basic royalty payments but prevent monopoly pricing? The researchers compared the same book before and after the policy change, German books versus Swiss books, areas near libraries with these books versus those without, and usage by English-speaking scientists versus others. Such comprehensive comparison groups are rarely available in copyright research.
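For readers who want to see the design concretely, here is a minimal difference-in-differences sketch of the kind of comparison described above. The data file, column names, and specification are my own illustration, not the authors’ actual code.

```python
# A minimal sketch, assuming a hypothetical pandas DataFrame with one
# row per book-year (illustrative only; see the paper for the authors'
# actual specifications).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns:
#   citations: count of new papers citing the book that year
#   german:    1 = BRP-eligible German book, 0 = Swiss control book
#   post:      1 = year is 1942 or later
df = pd.read_csv("brp_citations.csv")  # hypothetical file name

# The interaction german:post is the difference-in-differences term:
# the extra change in citations for German books after 1942, relative
# to the change for Swiss books over the same period.
model = smf.ols("citations ~ german + post + german:post", data=df).fit()
print(model.summary())
```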

The authors report that when book prices fell by 10%, new research citing these books increased by 40%. The benefits spread beyond elite institutions, with new research clusters emerging wherever scientists gained access to these books. Nor does this appear to be mere citation-shifting from one source to another – there was genuine new knowledge creation, evidenced by increased patenting and PhD production.

Such clean natural experiments in copyright policy are rare (though there are a few laboratory experiments). Most changes come from lobbying (like the “Mickey Mouse Protection Act”) or technological disruption (like music streaming), making it hard to isolate the effects of copyright itself. The BRP provides uniquely clear evidence that moderate copyright protection – rather than maximum protection – might better serve innovation.

As we debate copyright terms and academic paywalls today, this historical accident of war gives us something valuable: empirical evidence about what happens when you find a middle ground between total copyright protection and unrestricted access.

Biasi, Barbara and Petra Moser. 2021. “Effects of Copyrights on Science: Evidence from the WWII Book Republication Program.” American Economic Journal: Microeconomics, 13 (4): 218–60.

Effort Transparency and Fairness Published in Public Choice

Please see my latest paper, out in Public Choice: “Effort Transparency and Fairness.”

The published version is better, but you can find our old working paper on SSRN: “Effort Transparency and Fairness.”

Abstract: We study how transparent information about effort impacts the allocation of earnings in a dictator game experiment. We manipulate information about the respective contributions to a joint endowment that a dictator can keep or share with a counterpart…

Employees within an organization are sensitive to whether they are being treated fairly. Greater organizational fairness has been shown to improve job satisfaction, reduce employee turnover, and boost the organization’s reputation. To study how transparent information impacts fairness perceptions, we conduct a dictator game with a jointly earned endowment.

The endowment is earned by completing a real effort task in the experiment, an analog to the labor employees contribute to employers. First, two players work independently to create a pool of money. Then, the subject assigned the role of the “dictator” allocates the final earnings between them.
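To make the mechanics concrete, here is a bare-bones sketch of the payoff accounting with made-up numbers; the experiment’s actual stakes and interface are described in the paper.

```python
# Illustrative numbers only; amounts are in cents.
dictator_contribution = 300   # earned by the dictator in the real-effort task
recipient_contribution = 200  # earned by the recipient
endowment = dictator_contribution + recipient_contribution

# The dictator decides how much of the joint endowment to keep. One way
# to frame the choice: how much is "taken" out of what the recipient
# earned, beyond the dictator's own contribution.
amount_kept = 320
amount_taken = amount_kept - dictator_contribution  # 20 cents here

dictator_payoff = amount_kept
recipient_payoff = endowment - amount_kept
print(amount_taken, dictator_payoff, recipient_payoff)  # 20 320 180
```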

In the transparent treatment, both dictators and recipients have access to complete information about their own effort levels and contributions, as well as those of their counterparts. In the non-transparent treatment, dictators have full information about the relative contributions of both players, but recipients do not know how much each person contributed to the endowment. The two treatments allow us to compare the behaviors of dictators who know they could be judged and held to reciprocity norms with dictators who do not face the same level of scrutiny.

*drumroll* results:

This graph shows the amount of money the dictators take from the recipient’s contribution, in cents. There are two ways to look at this. First, notice the spike next to zero. Most dictators do not take much from what their counterpart earned. They are *dictators*, meaning they could take everything, yet most take almost nothing, regardless of the treatment. We interpret this to mean that they are acting out of a sense of fairness, and we apply a humanomics framework to explain this in the paper.

Second, significantly more is taken under non-transparency. When the worker does not have good information about the meritocratic outcome, some dictators feel they can get away with taking more. Some of this happens through what we call “shading down” of the amount sent by the dictator under the cover of non-transparency.

There is more in the paper, but the last thing I’ll point out here is that the “worker” subjects (recipients) anticipate that this will happen. The recipients forecast that the dictator would take more under non-transparency. In our conclusion, we mention that, even though the dictator seems to be at an advantage in a non-transparent environment, the dictator still might choose a transparency policy if it affects which workers select into the team.

View and download your article: this hyperlink is good for a limited number of free downloads of my paper with Demiral and Sağlam, according to Springer, the publisher. Please don’t waste it, but if you want the article, I might as well put it out there. I posted this on 11/2/2024, so there is no guarantee that the link will still work for you.

Cite our article: Buchanan, J., Demiral, E.E. & Sağlam, Ü. Effort transparency and fairness. Public Choice (2024). https://doi.org/10.1007/s11127-024-01230-9

Sticky Prices as Coordination Failure Working Paper

“Sticky Prices as Coordination Failure: An Experimental Investigation” is my new paper with David Munro of Middlebury, up at SSRN.

We ask whether coordination failures are a source of nominal rigidities. This was suggested in a recent speech by ECB President Christine Lagarde. She said, “In the recent decades of low inflation, firms that faced relative price increases often feared to raise prices and lose market share. But this changed during the pandemic as firms faced large, common shocks, which acted as an implicit coordination mechanism vis-à-vis their competitors.”

Coordination failure was suggested as a possible cause of price rigidity in a theory paper by Ball and Romer (1991). They demonstrated the possibility of multiple equilibria, and we perform the first laboratory test to observe equilibrium selection in this environment.

We theoretically solve a monopolistically competitive pricing game and show that a range of multiple equilibria emerges when there are price adjustment costs (menu costs). We explore equilibrium selection in laboratory price setting games with two treatments: one without menu costs where price adjustment is always an equilibrium, and one with menu costs where both rigidity and flexibility are possible equilibria.

In plain language, for our general audience, the idea is that the prices you set might depend on what other people are doing. If other people are responding to a shock (for example, Covid driving up labor costs all over town might cause retail prices to rise) then you will, too. If every other store in town is afraid to raise prices, then there is a certain situation where you might resist adjusting your prices, too (price rigidity).
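Here is a toy numerical sketch of that logic. The functional form and parameters are mine, chosen only to show how menu costs plus strategic complementarity can support both a rigid and a flexible equilibrium; this is not the calibration in our paper.

```python
def gain_from_adjusting(shock, share_adjusting, complementarity=0.5):
    """Profit gain from resetting your price after a cost shock.

    The gain rises with the size of the shock and with the share of
    rivals who also adjust (strategic complementarity in pricing).
    """
    return abs(shock) * (1 - complementarity + complementarity * share_adjusting)

MENU_COST = 0.4

# Moderate shock: both rigidity and flexibility are self-confirming.
print(gain_from_adjusting(0.5, share_adjusting=0.0) > MENU_COST)  # False: if nobody adjusts, neither do you
print(gain_from_adjusting(0.5, share_adjusting=1.0) > MENU_COST)  # True: if everybody adjusts, so do you

# Large shock: adjusting pays even if nobody else moves, so
# flexibility is the unique equilibrium.
print(gain_from_adjusting(1.5, share_adjusting=0.0) > MENU_COST)  # True
```

For intermediate shocks, “nobody adjusts” and “everybody adjusts” are both stable in this toy model, which is exactly where the equilibrium-selection question in our experiment lives.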

Results: First, when there is only one theoretical equilibrium, subjects usually conform to it. When cost shocks are large, price adjustment is the unique equilibrium regardless of the presence of menu costs, and subjects almost always adjust prices. When cost shocks are small and there are menu costs, rigidity is the unique equilibrium, and subjects almost never adjust. Conversely, with small cost shocks and no menu costs, subjects almost always adjust.

The more interesting cases are when the parameters allow for either rigidity or flexibility to be selected. We find that groups do not settle at the rigidity equilibrium. Rather, depending on the specific nature of the shock, between half and 80% of subjects adjust in response to a shock. These intermediate levels of adjustment appear in the figure as the red circles that fall between the red and green bands where multiple equilibria are possible.

In the figure above, the red circles are higher when the production cost shock gets further from zero in absolute value: the proportion of subjects adjusting prices increases with the size of the cost shock. This is consistent with the interpretation that the large post-COVID cost shocks acted as an implicit coordination mechanism for firms raising prices. Our results offer several insights on nominal rigidities, and we document more nuance in the paper regarding heterogeneity and asymmetry. Comments and feedback are appreciated! If it’s not clear from the EWED blog how to email me (Joy), find my professional contact info here.

Joy on The Inductive Economy podcast

I got to be a guest of Vignesh Swaminathan, who is based in Mumbai. It’s fun to have a deep conversation with someone on the other side of the world and share it with the whole internet (and the AIs).

Apple podcast link: https://podcasts.apple.com/us/podcast/dr-joy-buchanan-on-understanding-economics-through/id1719744197?i=1000652541934

Blogpost with links and timestamps: https://www.inductive.in/p/dr-joy-buchanan-on-understanding

The first 10 minutes are about Tyler’s GOAT book. Vignesh asked me to name some influential economists who did not make Tyler’s list.

Around minute 12 we talk about the experimental economics methodology.

The middle (minutes 15–42) is a discussion of the pipeline into tech and my Willingness to be Paid paper. He adds his perspective on tech jobs in India.

Around minute 42, Vignesh switches over to the Barbie movie and then Oppenheimer. He observes that Oppenheimer is a “brand.” I speculate on careers in Barbieland. We recorded this before Christmas of ’23, right after everyone had seen these summer movies. Both movies featured in the 2024 Oscars ceremony.

I predicted that people will eventually be able to create a custom movie from a verbal prompt, because of the AI content revolution. Here in Spring of ’24 that has already come true. Sora is shocking everyone and even caused Tyler Perry to halt a physical film studio expansion.

Around minute 55, we pivot to Hayek and competition, which leads to a postmortem on Google Plus (RIP).

1:05–1:16 features intellectual property and my IP experiment with Bart Wilson.

We ended with rapid-fire and personal questions.

Skimming back through this conversation has me thinking about tech work. The market for IT workers and programmers has evolved since I first started the project that became “Willingness to be Paid: Who Trains for Tech Jobs?”

I like pointing people all the way back to this report on jobs from 1958. “Learn to code” has been good advice for a long time, for the people who can tolerate the work. That does not mean it will be true forever, but I would argue that it is still true today.

Silicon Valley as a career might have peaked around 2021. It’s not going away, but it might not be growing anymore in terms of the number of talented people who can be absorbed there. (Might I suggest Huntsville instead?)

The WSJ recently ran a story, “Tech Job Seekers Without AI Skills Face a New Reality: Lower Salaries and Fewer Roles”:

The rise of artificial intelligence is affecting job seekers in tech who, accustomed to high paychecks and robust demand for their skills, are facing a new reality: Learn AI and don’t expect the same pay packages you were getting a few years ago.

Jobs in areas like telecommunications, corporate systems management and entry-level IT have declined in recent months, while roles in cybersecurity, AI and data science continue to rise, according to Janco’s data. The average total compensation for IT workers is about $100,000, making the position a target for continued cost-cutting.

One reason tech jobs are less attractive than some other professional paths is that the skillset keeps changing. We mentioned this as a drawback in our policy paper. Computers are constantly changing. Vignesh and I discuss the issue of risk. I suggested that companies could pay less for talent if they were willing to offer packages that carry less risk of being fired.

Nevertheless, tech still has decent job prospects. An unemployment rate of about 5% is roughly normal for the labor market as a whole, even though tech had seen lower rates at the peak of demand. I do not know what programming as a career will look like in 10 years, but I’d say the same about screenwriting and live sports commentary. The LLMs are coming for everything, or nothing, or something in between.

I’ve been on tour (regionally) with our ChatGPT paper and getting opportunities to query different audiences about their LLM use. Last week I talked to a young man in our business school who is using ChatGPT to write SQL code at his job. I said in the podcast that I would still advise young people in Alabama to learn to code, even if they are not going to move to Silicon Valley. I think coding is more fun in the LLM age, or at least less miserable.

Do People Trust ChatGPT Writing?

My new working paper with Will Hickman is up on SSRN: “Do People Trust Humans More Than ChatGPT?”

We study whether people will pay for a fact-check on AI writing. ChatGPT can be very useful, but human readers should not trust every fact that it reports. Yesterday’s post was about ChatGPT writing false things that look real.

The reason participants in our experiment might pay for a fact-check is that they earn bonus payments based on whether they correctly identify errors in a paragraph. If participants believe that the paragraph does not contain any errors, they should not pay for a fact-check. However, if they have doubts, it can be rational to pay for a fact-check and lock in a smaller bonus with certainty.
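As a back-of-the-envelope illustration of that trade-off, with hypothetical payoff numbers rather than the experiment’s actual amounts:

```python
BONUS = 1.00             # bonus for a correct accuracy judgment (hypothetical)
FACT_CHECK_PRICE = 0.25  # price of the fact-check (hypothetical)

def expected_bonus_unchecked(p_correct):
    """Expected bonus when relying on your own judgment."""
    return p_correct * BONUS

def bonus_checked():
    """Smaller but guaranteed bonus after paying for the fact-check."""
    return BONUS - FACT_CHECK_PRICE

# A risk-neutral participant buys the fact-check whenever confidence
# falls below 1 - price/bonus = 0.75 in this example.
for p in (0.9, 0.75, 0.6):
    print(p, expected_bonus_unchecked(p), bonus_checked())
```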

Abstract: We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. There is an increase in the rate of costly fact-checking by participants who are explicitly informed. These outcomes suggest that trust in AI-generated content is context-dependent.

Our original hypothesis was that people would be more trusting of human writers. That turned out to be only partially true. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT.

We presented information to participants in different ways. Sometimes we explicitly told them about authorship (informed treatment) and sometimes we asked them to guess about authorship (uninformed treatment).

This graph (figure 5 in our paper) shows that the overall rate of fact-checking increased when subjects were given more explicit information. Something about being told that a paragraph was written by a human might have aroused suspicion in our participants. (The kids today would say it is “sus.”) They became less confident in their own ability to rate accuracy and therefore more willing to pay for a fact-check. This effect is independent of whether participants trust humans more than AI.

We generally think of fact-checking as a good thing, in the context of our previous work on ChatGPT hallucinations. So one policy implication is that certain types of labels can prompt readers to think critically. For example, Twitter labels automated accounts so that readers know when content has been chosen or created by a bot.

Our working paper is currently trending on SSRN top ten lists such as this one.

Suggested Citation:
Buchanan, Joy and Hickman, William, Do People Trust Humans More Than ChatGPT? (November 16, 2023). GMU Working Paper in Economics No. 23-38, Available at SSRN: https://ssrn.com/abstract=4635674

Prohibition Reversals

We have all heard of the prohibition era. Popularly, it refers to the period from 1920 to 1933 during which it was illegal to sell, transport, or import alcohol in the US. National prohibition was enacted by the 18th Amendment and repealed by the 21st Amendment. That’s the basic picture.

But did you know that there were state alcohol prohibitions prior to the national one? In fact, there were three major waves of state alcohol prohibitions: the first in the 1850s, the second in the 1880s, and the third just preceding the 18th Amendment. The image below illustrates the number of states that had statewide dry policies. You can see the first two waves and then the tsunami just prior to 1920.


Video of Joy Buchanan on Tech Jobs and Who Will Program

Here are some show notes for a keynote lecture to a general audience in Indiana. This was recorded in April 2023.

Minute – Topic

2:00 – “SMET” vs STEM Education – Does Messaging Matter? (Previous blog post on SMET)
5:00 – Is Computer Programming a “Dirty Job”? Air conditioning, compensating differentials, and the nap pods of Silicon Valley (post on the 1958 BLS report)
7:50 – Wages and employment outlook for computer occupations
10:00 – Presenting my experimental research paper “Willingness to be Paid: Who Trains for Tech Jobs?” in 23 minutes
    Motivation and Background: 10:00–15:30
    Experimental Design: 15:30–22:00
    Results: 22:00–30:00
    Discussion: 30:00–33:30
33:50 – Drawbacks to tech jobs (see also my policy paper published by the CGO on tech jobs and employee satisfaction)
35:30 – The 2022 wave of layoffs in Big Tech and vibing TikTok Product Managers (I borrowed a graph on the Tech-cession from Joey Politano and a blog point from Matt Yglesias, and of course reference the BLS.)
39:00 – Should You Learn to Code? (and the new implications of ChatGPT) (Ethan Mollick brought this Nature article to my attention; tweet credits to @karpathy and @emollick)
48:00 – Q&A with audience

Video: Joy Presents Two Experimental Papers to a Macro Class

Here are some show notes to a talk I gave in April 2023. I had the opportunity to talk to an undergraduate macroeconomics class at Indiana University East.

Minute – Topic

2:00 – Research on Behavioral Economics and Macroeconomics
4:25 – Labor Market Equilibrium Concepts and Incomplete Labor Contracts
6:50 – The Gift Exchange Game and the Fair Wage-Effort Theory
13:00 – Recessions and Downward Wage Rigidity
19:00 – Presenting my experimental study “If Wages Fell During a Recession” in 13 minutes
32:00–33:00 – How a question raised in “If Wages Fell During a Recession” pointed the way to the Reference Point paper
33:00–41:00 – Presenting my experimental study “My Reference Point, Not Yours” in 8 minutes
41:00–44:00 – Conclusion of “My Reference Point, Not Yours” and tying it back to macroeconomics

The “If Wages Fell…” paper directly inspired the “My Reference…” experiment. But I don’t cite “If Wages Fell…” in “My Reference…,” so you would never know how closely they are connected unless you listen to this talk.

New Double Auction Paper

This weekend I am at the Economic Science Association meeting.

Most of the economists in this group use experiments as part of their empirical research. In this post I will highlight some recently published work that is in the tradition of Vernon Smith, who influenced all of us so much.

Martinelli, C., Wang, J. & Zheng, W. Competition with indivisibilities and few traders. Experimental Economics (2022). https://doi.org/10.1007/s10683-022-09772-9

Abstract: We study minimal conditions for competitive behavior with few agents. We adapt a price-quantity strategic market game to the indivisible commodity environment commonly used in double auction experiments, and show that all Nash equilibrium outcomes with active trading are competitive if and only if there are at least two buyers and two sellers willing to trade at every competitive price. Unlike previous formulations, this condition can be verified directly by checking the set of competitive equilibria. In laboratory experiments, the condition we provide turns out to be enough to induce competitive results, and the Nash equilibrium appears to be a good approximation for market outcomes. Subjects, although possessing limited information, are able to act as if complete information were available in the market.
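To unpack the condition in the abstract, here is a toy illustration of my own (not code from the paper) with made-up valuations and costs:

```python
# Each buyer demands, and each seller supplies, one indivisible unit.
buyer_values = [10, 9, 7, 4]
seller_costs = [2, 3, 6, 8]

def demand(p):
    return sum(v >= p for v in buyer_values)

def supply(p):
    return sum(c <= p for c in seller_costs)

# Competitive prices are where demand meets supply; scan a coarse grid.
grid = [x / 10 for x in range(0, 121)]
comp_prices = [p for p in grid if demand(p) == supply(p)]
print(min(comp_prices), max(comp_prices))  # competitive range: 6.0 to 7.0

# The paper's condition: at least two buyers and two sellers strictly
# gain from trading at every competitive price.
condition = all(
    sum(v > p for v in buyer_values) >= 2
    and sum(c < p for c in seller_costs) >= 2
    for p in comp_prices
)
print(condition)  # True in this example
```

In this toy market the condition holds, so, per the theorem, all Nash equilibrium outcomes with active trading would be competitive.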

This small excerpt from their results shows a market converging toward equilibrium over time, under different treatment conditions. With some opportunities for practice and feedback, agents create surplus value by trading.

Figure 4 plots the average efficiency in each round in the four treatments. Efficiency is defined as the percentage of the maximum social surplus realized. … learning takes longer under the clearing house institution; hence, average efficiency under the clearing house institution presents a stronger upward trend over time. Under the clearing house institution, the average efficiencies start at levels lower than under the double auction institution, and remain statistically lower in the second half of the experiment. Nevertheless, we can observe from Fig. 4 that the upward trend of the efficiencies in clearing house treatments persist over time, and at the end of the experiment, the efficiency levels from the two institutions are close.
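For reference, here is a compact sketch of the efficiency measure they describe, in my own notation:

```python
def efficiency(trades, buyer_values, seller_costs):
    """Realized surplus as a percentage of the maximum possible surplus.

    trades: list of (buyer_value, seller_cost) pairs that actually traded.
    """
    realized = sum(v - c for v, c in trades)
    # Maximum surplus: match the highest-value buyers with the
    # lowest-cost sellers for as long as each match creates gains.
    vs = sorted(buyer_values, reverse=True)
    cs = sorted(seller_costs)
    maximum = sum(v - c for v, c in zip(vs, cs) if v > c)
    return 100 * realized / maximum

# With the toy market above, two of the three surplus-creating trades
# happen: 14 of 15 units of surplus, about 93.3% efficiency.
print(efficiency([(10, 2), (9, 3)], [10, 9, 7, 4], [2, 3, 6, 8]))
```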

An intervention for children to change perceptions of STEM

Here is a new paper related to the topic of women getting into technical fields (see my previous post on my paper about programming).

Grosch, Kerstin, Simone Haeckl, and Martin G. Kocher. “Closing the gender STEM gap-A large-scale randomized-controlled trial in elementary schools.” (2022).

These authors were thinking about the same problem at the same time, unbeknownst to me. In their introduction they write, “We currently know surprisingly little about why women still remain underrepresented in STEM fields and which interventions might work to close the gender STEM gap.”

My conclusion from my own paper is that, by college age, subjective attitudes toward tech are very important. This leads to the question of whether those subjective attitudes are shaped at younger ages. Grosch et al. ran an experiment targeting third-graders with a STEM-themed game. I’ll quote their description:

The treatment web application (treatment app) intends to increase interest in STEM directly by increasing knowledge and awareness about STEM professions and indirectly by addressing the underlying behavioral mechanisms that could interfere with the development of interest in STEM. The treatment app presents both fictitious and real STEM professionals, such as engineers and programmers, on fantasy planets. Accompanied by the professionals, the children playfully learn more about various societal challenges, such as threats from climate change and to public health, and how STEM skills can contribute to combating them. The storyline of the app comprises exercises, videos, and texts. The app also informs children about STEM-related content in general. To address the behavioral mechanisms, the app uses tutorials, exercises, and (non-monetary) rewards that teach children a growth mindset and improve their self-confidence and competitive aptitude. Moreover, the app introduces female STEM role models to overcome stereotypical beliefs. To test the app’s effect, we recruited 39 elementary schools in Vienna (an urban area) and Upper Austria (a predominantly rural area).

This is a preview of their results, although I recommend reading their paper to understand how these measurements were made:

Girls’ STEM confidence increases significantly in the treatment group (difference: 0.047 points or 0.28 standard deviations, p = 0.002, Wald test), and the effect for girls is significantly larger than the effect for boys.

Result 2: Children’s competitiveness is positively associated with children’s interest in STEM. We do not find evidence that stereotypical thinking and a growth mindset is associated with STEM interest.

Lastly, my kids play STEM-themed tablet games. PBS Kids has a great suite of games that are free and educational. Unfortunately, I have not tried to treat one kid while giving the other kid a placebo app, so my ability to do causal inference is limited.