Publish or Perish: A Hilarious Card Game Based on Academia

I had the opportunity to play an advance copy of “Publish or Perish,” a new card game that satirizes the world of academia. Created by Max Bai, the game offers a funny take on the often cutthroat world of academic publishing.

Official website for the game: here

My group of eight friends divided into teams to accommodate the game’s six-player limit, which I’d recommend not exceeding. From the moment we started reading the instructions aloud, we were laughing.

The gameplay is engaging. One unexpectedly hilarious rule involves clapping for each other’s achievements. The game’s core revolves around publishing manuscripts, accumulating citations, and navigating the waters of peer review and academic politics.

I was impressed by the calibration of the trivia questions. They struck a great balance – challenging enough that we often couldn’t answer them, yet not so obscure that they felt unreasonable. This aspect added an educational twist to the fun, sparking interesting discussions.

The humor in “Publish or Perish” is spot-on, especially in the details. The manuscript cards had us in stitches, with journal names like “Chronicle of Higher Walls” (a clever play on the real “Chronicle of Higher Education”) and absurd paper titles.

My favorite paper title was “The Great Avocado Toast Crisis: Socioeconomic Impacts of Millennial Breakfast Choices.”
Esteemed friend and economist Vincent Geloso liked “The Economics of Building a Death Star.”

The two other full-time academics in our group were so impressed that they pre-ordered copies on the spot. While the game is probably most enjoyable with at least one academic in the group, our mixed party – including a government statistician and several non-academics – found it entertaining. One of my non-academic friends summed it up as follows: “This game brought several people from different backgrounds and areas of expertise together for a thoroughly enjoyable evening.”

“Publish or Perish” manages to be both easy to learn and refreshingly original. I predict it will carve out its own niche with its unique theme and mechanics. Players can engage in academic shenanigans like plagiarism, P-value hacking, and even sabotaging opponents’ work – all in good fun.


Culture Parenting Chatter

I’ve been traveling. Here are some things I noticed (on the internet, not on my travels). (On my travels I learned that rental golf carts are as fun as they look.)

  1. Jennifer Aniston slams JD Vance over ‘childless cat ladies’ comment from resurfaced interview

2. This is a poastmodern election. “Campaigners use the internet medium to dunk on their opponents instead of offer solutions to problems.”

“deeply online left wing instagram women are meeting, for the first time ever, deeply online right wing twitter guys. both have developed intricate, sacred language foreign to the other. both are waging war they thought already won. fyi in case you’re wondering about the meltdown”

I thought that meeting happened months ago with the “bear in the woods” discourse.

3. If it wasn’t so serious, American politics would be too funny for television.

4. This woman who gave up professional dancing and now has 8 kids.

One does wonder if the skills that get a person into Juilliard relate to the ability to turn a family into an Instagram sensation. Is this Ambitious Parenting?

“My day with the trad wife queen and what it taught me.” This article about Ballerina Farm reads like the anti-“Hannah’s Children” (reviewed by my former student here).

Hannah Neeleman, the mom at Ballerina Farm, has told her story in what appears to be her own words here: https://ballerinafarm.com/pages/about-us Neeleman says that when she was living in Brazil, she would vacation at “farms and ranches. A place where you could eat farm fresh cheeses and meats, learn about animals, watch chores being done, etc. We were hooked.” I’m tempted to say that it’s weird to say she was into watching other people do chores. But maybe the word “weird” has just lost all meaning after this week.

Jeremiah Johnson points out that, “It doesn’t matter that their farm isn’t a very productive farm, because the husband’s family founded JetBlue.” My take is that these are rich people who are taking a reality-show approach to their lives like wholesome Kardashians. The Neelemans are into watching people do farm chores. (Yes, they do chores themselves, too, but clearly a large professional staff runs the place.) Good for them. As I said at the beginning, I’m into renting golf carts now.

Sources on AI use of Information

  1. Consent in Crisis: The Rapid Decline of the AI Data Commons

Abstract: General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how consent preferences to use it are changing over time. We observe a proliferation of AI-specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites’ expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI. Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crisis in data consent, foreclosing much of the open web, not only for commercial AI, but non-commercial AI and academic purposes.

AI is drawing out of a commons information that was provisioned under a different set of rules and technology. See the discussion on Hacker News (Y Combinator).
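The consent mechanics at issue are easy to inspect directly. Below is a minimal Python sketch, using only the standard library, of how a well-behaved crawler checks a robots.txt policy. The file contents here are hypothetical (GPTBot is OpenAI’s crawler user agent), and this is an illustration of the protocol, not code from the paper:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt with an AI-specific clause of the kind the audit
# documents: the AI crawler is blocked while everyone else is allowed.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def allowed(user_agent: str, url: str, robots_txt: str = ROBOTS_TXT) -> bool:
    """Return True if the robots.txt policy permits user_agent to fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

print(allowed("GPTBot", "https://example.com/article"))      # False: AI crawler blocked
print(allowed("SomeBrowser", "https://example.com/article")) # True: others allowed
```

Part of the paper’s point is that this opt-out layer was designed for search indexing, and sites are now overloading it (and their Terms of Service) to refuse AI training specifically.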

2. “ChatGPT-maker braces for fight with New York Times and authors on ‘fair use’ of copyrighted works” (AP, January ’24)

3. Partly handy as a collection of references: “How Generative AI Turns Copyright Upside Down” by a law professor. “While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don’t fit in any of those categories…”

4. New gated NBER paper by Josh Gans “examines this issue from an economics perspective”

Joy: AI companies have money. Could we be headed toward a world where OpenAI has some paid writers on staff? Relative to the money being raised by AI companies, replenishing the commons is relatively cheap if done strategically. Jeff Bezos bought the Washington Post for a fraction of his tech fortune (about $250 million). Elon Musk bought Twitter. Sam Altman is rich enough to help keep the NYT churning out articles. Because there are several competing commercial models, however, the owners of LLM products face a commons problem: if Altman pays the NYT to keep operating, then Anthropic gets the benefit, too. Arguably, good writing is already under-provisioned, even aside from LLMs.

See New York City for Cheap

Two years ago, when we still had a preschooler, I wrote “See New York City for Free.” In the spirit of Do Less for Preschool, we did not actually go into the city. We looked at the Manhattan skyline from Liberty State Park in New Jersey (free parking). The park has points of interest. I do not believe my kids would have benefited from an expensive trip into NYC in 2022 (which isn’t to say that parents should rule it out if they are primarily going for themselves). Remember that a 4-year-old enjoys poking a bucket of rain water about as much as a trip to Disney World. For preschool kids, sticking to the nap schedule is probably better for everyone than a forced march through fancy landmarks in any weather.

Now in 2024, we have graduated to actually going into the city (for now, assume the constraint of spending our nights in New Jersey, you guys). I’ll describe two low-budget day trips that will tire but not exhaust school-age kids.

On the first day, we used NJ Transit trains to get to New York Penn Station. Since my kids do mostly cars and suburbs, the train itself was fun. On weekends and holidays, kids ride free on NJ Transit. From there, we walked all the way to Central Park, which took us through Times Square. People call this “urban hiking” now (previously known as walking). We stopped into a few stores along the way. I’ve taught my kids to “window shop” in a store, meaning they are warned ahead of time that we are not buying anything. We spent money on food and drinks, but it would have been possible to pack a lot more food if desired. Once we had walked all the way to the Upper East Side (about 3 miles), we took a taxi back to the train station.

On the second day, we avoided high parking fees once again by departing on the ferry from the New Jersey terminal to see the Statue of Liberty. There were plenty of families with preschool kids or babies, by the way. Strollers are allowed on the ferries, just not inside the pedestal or statue. The ferry ticket includes access to all of the indoor museums and audio tours. If you want to be allowed to walk up the stairs into the crown of the statue, be aware that you need to book those tickets many months in advance. If you just want to take the ferry to the island, then you don’t have to plan so far ahead.

These plans rely heavily on being outside, so rain would pose a problem. There are plenty of places to escape the rain, but it would not be nearly as fun/cheap.

If you are road tripping anywhere with kids, read Zach on long family car trips. I’ll add that you can fill up a large insulated thermos of ice from the hotel and bring it along to provision drinks from cans throughout the day.

Pictured: Central Park, the view of the Statue from the ferry, and the view of the city from the Statue

Oster on Haidt and Screens

Emily Oster took on the Jonathan Haidt-related debate in her latest post, “Screens & Social Media.”

Do screens harm mental health? Oster joins some other skeptics I know. She doesn’t fully back Haidt, and she does the economist thing by mentioning “tradeoffs.”

Oster, ever practical, makes a point that sometimes gets lost. Maybe social media doesn’t cause suicide. Maybe there is no causal relationship concerning diagnosed mental health conditions, as indicated by the data. That doesn’t mean that parents and teachers should not monitor and curtail screen time. Oster says that it’s obvious that kids should not have their phones in the classroom during school instruction.

Here’s a personal story from this week. My son wants Roblox. The game says 12+, and I’ve told him that I’m sticking to that. No, he can’t have it now, and he can’t start chatting with strangers online. We aren’t going to revisit the conversation until he’s 12. Is he mad at me? Yes. You know what he does when he’s really bored at home? He starts vacuuming. I’ve driven him to madness with these boundaries I set, or to vacuuming. (Recall he likes these books. Since hearing Harry Potter 1 as an audiobook in the car, he’s started tearing through the series himself in hardcover.)

An innocent tablet game I let him play (when he’s allowed to have screen time) is Duck Life. Rated E for everyone.

Previously, I wrote “Video Games: Emily Oster, Are the kids alright?”

And more recently, Tyler had “My Contentious Conversation with Jonathan Haidt.” Maybe Tyler should debate Emily Oster next about limiting phone use.

Meme Generator for Econ Papers

I’m exploring whether the meme generator by Glif could be a way to introduce an econ paper. What if you identify a main character in your research project for Glif to drag? (BTW, I have learned that the Wojak Meme Generator will rewrite the name of the person you put in if your phrase is too long, but the phrase is still used for content. So you can put a longer phrase into the meme generator.)

I’m going to reprint here the prompt I actually used to get the Glif meme. As a warning, this approach is obviously not appropriate for more professional audiences. But sometimes you have a chance to quickly show your paper to a more informal audience, either in a presentation or online. Having a way to wake up the audience in that situation could be helpful.

I’m not sharing all of these because I like them. I’m trying to give readers a chance to decide if they’d want to try it themselves. I think some of these prompts don’t work well and the cartoons either aren’t funny or are not true to life. However, I do find them interesting if the assignment is to scrape the internet for the maximally negative sentiment about a certain thing.

The prompt I used: “Pay Transparency Advocate” / “Effort Transparency and Fairness,” with Elif Demiral and Umit Saglam (under review)

Prompt: “Person Who Trusts ChatGPT” / “Do People Trust Humans More Than ChatGPT?” (2024) with William Hickman. Journal of Behavioral and Experimental Economics, 112: 102239. 

Prompt: “Undergraduate Computer Science Major” / “Willingness to be Paid: Who Trains for Tech Jobs?” (2022) Labour Economics, Vol 79, 102267. 


GLIF Social Media Memes

The Wojak Meme Generator from Glif will build you a funny meme from a short phrase or single-word prompt. Note that it is built to be derogatory, cruel for sport, and may hallucinate falsehoods (see the tweet announcement).

I am fascinated by this from the angle of modern anthropology. The AI has learned all of this by studying what we write online. Someone can build an AI to make jokes and call out hypocrisy.

Here are GLIFs of the different social media user stereotypes as of 2024. Most of our current readers probably don’t need any captions to these memes, but I’ll provide a bit of sincere explanation to help everyone understand the jokes.

Twitter user: Person who posts short messages and follows others on the microblogging platform.

Facebook user: Individual with a profile on the social network for connecting with friends and sharing content.

Bluesky user: Early adopter of a decentralized social media platform focused on user control.


Is the Universe Legible to Intelligence?

I borrowed the following from the posted transcript. Bold emphasis added by me. This starts at about minute 36 of the podcast “Tyler Cowen – Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth” with Dwarkesh Patel from January 2024.

Patel: We are talking about GPT-5 level models. What do you think will happen with GPT-6, GPT-7? Do you still think of it like having a bunch of RAs (research assistants) or does it seem like a different thing at some point?

Cowen: I’m not sure what those numbers going up mean or what a GPT-7 would look like or how much smarter it could get. I think people make too many assumptions there. It could be the real advantages are integrating it into workflows by things that are not better GPTs at all. And once you get to GPT, say 5.5, I’m not sure you can just turn up the dial on smarts and have it, for example, integrate general relativity and quantum mechanics.

Patel: Why not?

Cowen: I don’t think that’s how intelligence works. And this is a Hayekian point. And some of these problems, there just may be no answer. Like maybe the universe isn’t that legible. And if it’s not that legible, the GPT-11 doesn’t really make sense as a creature or whatever.

Patel (37:43) : Isn’t there a Hayekian argument to be made that, listen, you can have billions of copies of these things. Imagine the sort of decentralized order that could result, the amount of decentralized tacit knowledge that billions of copies talking to each other could have. That in and of itself is an argument to be made about the whole thing as an emergent order will be much more powerful than we’re anticipating.

Cowen: Well, I think it will be highly productive. What tacit knowledge means with AIs, I don’t think we understand yet. Is it by definition all non-tacit or does the fact that how GPT-4 works is not legible to us or even its creators so much? Does that mean it’s possessing of tacit knowledge or is it not knowledge? None of those categories are well thought out …

It might be significant that LLMs are no longer legible to their human creators. More significantly, the universe might not be legible to intelligence, at least of the kind that is trained on human writing. I (Joy) gathered a few more notes for myself.

A co-EV-winner has commented on this at Don’t Worry About the Vase:

(37:00) Tyler expresses skepticism that GPT-N can scale up its intelligence that far, that beyond 5.5 maybe integration with other systems matters more, and says ‘maybe the universe is not that legible.’ I essentially read this as Tyler engaging in superintelligence denialism, consistent with his idea that humans with very high intelligence are themselves overrated, and saying that there is no meaningful sense in which intelligence can much exceed generally smart human level other than perhaps literal clock speed.

I (Joy) took it more literally. I don’t see “superintelligence denialism.” I took it to mean that the universe is not legible to our brand of intelligence.

There is one other comment I found, in response to a short clip posted by @DwarkeshPatel, from YouTuber @trucid2:

Intelligence isn’t sufficient to solve this problem, but it isn’t for the reason he stated. We know that GR and QM are inconsistent–it’s in the math. But the universe has no trouble deciding how to behave. It is consistent. That means a consistent theory that combines both is possible. The reason intelligence alone isn’t enough is that we’re missing data. There may be an infinite number of ways to combine QM and GR. Which is the correct one? You need data for that.

I saved myself a little time by writing the following with ChatGPT. If the GPT got something wrong in here, I’m not qualified to notice:

Newtonian physics gave an impression of a predictable, clockwork universe, leading many to believe that deeper exploration with more powerful microscopes would reveal even greater predictability. Contrary to this expectation, the advent of quantum mechanics revealed a bizarre, unpredictable micro-world. The more we learned, the stranger and less intuitive the universe became. This shift highlighted the limits of classical physics and the necessity of new theories to explain the fundamental nature of reality.

General Relativity (GR) and Quantum Mechanics (QM) are inconsistent because they describe the universe in fundamentally different ways and are based on different underlying principles. GR, formulated by Einstein, describes gravity as the curvature of spacetime caused by mass and energy, providing a deterministic framework for understanding large-scale phenomena like the motion of planets and the structure of galaxies. In contrast, QM governs the behavior of particles at the smallest scales, where probabilities and wave-particle duality dominate, and uncertainty is intrinsic.

The inconsistencies arise because:

  1. Mathematical Frameworks: GR is a classical field theory expressed through smooth, continuous spacetime, while QM relies on discrete probabilities and quantized fields. Integrating the continuous nature of GR with the discrete, probabilistic framework of QM has proven mathematically challenging.
  2. Singularities and Infinities: When applied to extreme conditions like black holes or the Big Bang, GR predicts singularities where physical quantities become infinite, which QM cannot handle. Conversely, when trying to apply quantum principles to gravity, the calculations often lead to non-renormalizable infinities, meaning they cannot be easily tamed or made sense of.
  3. Scales and Forces: GR works exceptionally well on macroscopic scales and with strong gravitational fields, while QM accurately describes subatomic scales and the other three fundamental forces (electromagnetic, weak nuclear, and strong nuclear). Merging these scales and forces into a coherent theory that works universally remains an unresolved problem.

Ultimately, the inconsistency suggests that a more fundamental theory, potentially a theory of quantum gravity like string theory or loop quantum gravity, is needed to reconcile the two frameworks.
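As a compact illustration of the mismatch described above (my own addition; these are standard textbook equations): the Einstein field equations make deterministic statements about spacetime curvature, the Schrödinger equation evolves a probability amplitude, and the usual semiclassical stopgap glues them together by sourcing classical curvature with a quantum expectation value.

```latex
% Einstein field equations: classical, deterministic geometry
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

% Schrodinger equation: quantum, probabilistic state evolution
i\hbar\, \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H}\, \lvert \psi(t) \rangle

% Semiclassical stopgap: classical curvature sourced by a quantum
% expectation value; an approximation, not a unified theory
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, \langle \hat{T}_{\mu\nu} \rangle
```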

P.S. I published “AI Doesn’t Mimic God’s Intelligence” at The Gospel Coalition. For now, at least, there is some higher plane of knowledge that we humans are not on. Will AI get there? Take us there? We don’t know.

Do I Trust Claude 3.5 Sonnet?

For the first time this week, I paid for a subscription to an LLM. I know economists who have been on the paid tier of OpenAI’s ChatGPT since 2023, using it for both research and teaching tasks.

I did publish a paper on the mistakes it makes: “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.” In a behavioral paper, I used it as a stand-in for AI: “Do People Trust Humans More Than ChatGPT?”

I have nothing against ChatGPT. For various reasons, I never paid for it, even though I used it occasionally for routine work or for writing drafts. Perhaps if I were already on the paid tier of something else, I would have resisted paying for Claude.

Yesterday, I made an account with Claude to try it out for free. Claude and I started working together on a paper I’m revising. Claude was doing excellent work and then I ran out of free credits. I want to finish the revision this week, so I decided to start paying $20/month.

Here’s a little snapshot of our conversation. Claude is writing R code which I run in RStudio to update graphs in my paper.

This coding work is something I used to do myself (with internet searches for help). Have I been 10x-ed? Maybe I’ve been 2x-ed.

I’ll refer to Zuckerberg via Dwarkesh (which I’ve blogged about before):


Real and Nominal Rigidities Research

This week, I’m doing some review for a macro-related project. In economics, the concepts of real and nominal rigidities help explain why prices and wages do not always adjust quickly in response to shocks. These rigidities create frictions that affect how markets function. A well-known rigidity is downward nominal wage rigidity (I have an experimental paper on that).

“Nominal rigidities” refer to the stickiness of prices and wages in their nominal (monetary) terms. These rigidities prevent immediate adjustment of prices and wages to changes in the overall economic environment.

Examples of Nominal Rigidities

  • Menu Costs: The costs associated with changing prices, such as reprinting menus or reprogramming point-of-sale systems. For instance, a restaurant might avoid changing its menu prices frequently because of the costs involved in printing new menus and the risk of confusing or losing customers.
  • Nominal Wage Contracts: Many workers are employed under contracts that fix their wages for a certain period, such as a year. This means that even if the demand for labor changes, wages cannot adjust immediately. For example, a factory might have a one-year wage contract with its workers, preventing it from lowering wages even during a downturn.
  • Price Stickiness Due to Psychological Factors: Prices may remain rigid because businesses fear that frequent changes might upset customers or erode their trust. A classic example is a retail store keeping prices stable to maintain a reputation for reliability, even when costs fluctuate.
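To make the menu-cost logic concrete, here is a toy (S,s) pricing rule in Python (my own illustration; the numbers are made up). The firm tracks the frictionless optimal price but resets its posted price only when the log gap grows large enough to justify paying the menu cost:

```python
import math

def simulate_sticky_price(optimal_prices, menu_band=0.05):
    """(S,s) rule: reset the posted price to the optimum only when the
    absolute log gap between posted and optimal exceeds menu_band."""
    posted = optimal_prices[0]
    path = []
    for p_star in optimal_prices:
        if abs(math.log(posted / p_star)) > menu_band:
            posted = p_star  # pay the menu cost and reprint the menu
        path.append(posted)
    return path

# Steady 1% inflation in the frictionless optimal price: the posted price
# stays flat for several periods, then jumps, tracing a stepwise "sticky" path.
optimal = [10.0 * 1.01**t for t in range(12)]
print(simulate_sticky_price(optimal))
```

With a 5% band, the posted price stays at 10.00 for six periods and then jumps once, while the optimal price rises smoothly every period: the stickiness described in the bullets above.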

Side note: Lars Christensen predicts less nominal rigidity in our future. Menu costs are getting smaller and customers could become accustomed to, for example, watching the price of milk fluctuate in real time in response to statements by the Fed. Click here for related Twitter joke.
