The End of Boredom: How AI Companions Might Reduce Random Violence

Joy writes: I read Co-Intelligence by Ethan Mollick (thanks to Samford for the free book). Most of it is old news for those of us who follow Ethan on social media and use ChatGPT. However, something that stood out to me was his mention of a study in which humans decide to give themselves a painful shock rather than sit alone in silence for 15 minutes.

Claude comments further based on my prompt:

The End of Boredom: How AI Companions Might Reduce Random Violence

Remember that study where people would rather shock themselves than sit alone with their thoughts? Ethan Mollick references it in Co-Intelligence, and it reveals something unsettling: 67% of men and 25% of women chose electric shocks over sitting quietly for just 15 minutes.

Here’s a strange thought—what if our AI-saturated future accidentally reduces certain types of violence simply by eliminating boredom?

The Violence-Boredom Connection

“Idle hands are the devil’s workshop” exists for a reason. Research has long linked boredom to risk-taking, substance abuse, and impulsive violence—the opportunistic kind that emerges from restlessness rather than planning. Young men starting fights on street corners, vandalism on summer nights, the restless energy that sometimes turns destructive—much of this stems from the unbearable weight of having nothing to do.

Enter Infinite Engagement

We’re rapidly approaching a world where boredom might become extinct. AI companions are becoming always available, infinitely patient, endlessly novel, and perfectly tailored to individual interests. Your future AI won’t just answer questions—it will debate philosophy at 3 AM, create personalized games, generate stories with you as the protagonist, or help explore any curiosity rabbit hole.

The cognitive void that once led people to shock themselves rather than think? It might simply cease to exist.

The Unexpected Benefits

Consider the implications: Young men who might have started fights out of restlessness could instead be deeply engaged in AI-assisted music production or coding projects. The same restless energy that manifests destructively could be channeled into creative collaboration.

AI companions could help process frustration before it builds to a breaking point—imagine having an infinitely patient listener during those dangerous 2 AM spirals of rage. While not replacing human connection, AI interaction might buffer the worst effects of isolation that can lead to radicalization.

The Dark Side

This isn’t utopian. An always-engaged society raises concerns: What happens to human relationships when AI companionship is always easier? Does constant stimulation atrophy our ability to self-reflect? Might we lose the creative insights that emerge from boredom?

Crucially, this would only address impulsive, boredom-driven violence. Systemic violence, ideological extremism, and deeper social problems won’t disappear because people have engaging AI companions.

A Strange New World

For the first time in history, boredom—that uncomfortable void that has driven both creativity and destruction—might become optional. The same species that chose shocks over silence is about to get exactly what it wanted: constant, meaningful stimulation.

Whether this leads to a more peaceful society or new problems we haven’t imagined remains to be seen. The question is whether we’ll recognize what we’ve lost only after it’s gone.

The experiment is already underway—we’re all participating in it.

Joy comments at the end: The AI-written essay is willing to explore downsides of AI engagement.

Joy on the Anthropic Copyright Settlement

I’m at Econlog this week with:

The Anthropic Settlement: A $1.5 Billion Precedent for AI and Copyright

There are two main questions. First, will AI companies need to compensate the authors whose work they are currently training on? Second, how important is it for human writing to remain a paying career in the future, if AI continues to need good new material to train from?

There is more at the link but here are some quotes:

If human writing ceases to be a viable career due to inadequate compensation, will LLMs lose access to fresh, high-quality training data? Could this create a feedback loop where AI models, trained on degraded outputs, stagnate?

This case also blurs the traditional divide between copyright and patents. Copyrighted material, once seen as static, now drives “follow-on” innovation derived from the original work. That is, the copyright protection in this case affects AI content influenced by the copyrighted material in a way that previously applied to new technology that built on patented technical inventions. Thus, “access versus incentives” theory applies to copyright as much as it used to apply to patents. The Anthropic settlement signals that intellectual property law, lagging behind AI’s rapid evolution, must adapt.

Is AI learning just MOOCs again?

I created a provocative title for fun. Tyler pointed me to this podcast:

Joe Liemandt  – Building Alpha School, and The Future of Education (Apple podcast link)

I suppose I’m sold on their claim that most kids can learn basic facts and some academic skills from an iPad app. Listen all the way through if you are going to listen at all, because even some cracks in the tech product are revealed after the big pitch in the beginning.

I have been using Duolingo to review my high school French and Spanish. I think the few minutes a day I spend have helped drag some vocabulary back out of long-term storage. Although, as I recently heard a comedian say, “All my friends who have Duolingo are still speaking English to me.”

Folks should consider whether AI learning apps are just MOOCs again. Essentially, they need to get kids to watch videos of lecture content (shorter ones, this time), whereas MOOCs relied on longer lecture videos. Maybe shorter videos combined with personalized feedback are the key to cheap, effective, comprehensive education that scales. Maybe not.

Last year I wrote Why Podcasts Succeeded in Gaining Influence Where MOOCs Failed

About half an hour in, Liemandt asserts that anyone in America would agree that kids learn life skills through “sports” not school. That’s an oversimplification, but I agree that sports ranks higher than “math class” for developing leadership ability.

Since the people at Alpha School believe they have solved the problem of learning facts quickly, it’s interesting to hear how they do the rest of “education.” The school must fill enough time that parents don’t have to see their kids for half the day, and it must also teach leadership, communication, and character. Alpha School is expensive ($40,000 a year), and there are many paid adults involved who are called “guides and coaches.”

The extracurriculars that Alpha School offers sound a lot like what most kids can do in some form at a good public middle school or high school in America. I wrote about the value of outside-class activities in college here: The Value of Student Organizations and On-Campus Education: Anecdotal Evidence from Tim Keller

My students at Samford are especially good at taking on leadership roles and creating a thriving community. Residential college provides a good testing ground for leadership, and there are real “market tests” of success for things like sorority events, as Alpha School encourages for older kids.

I applaud people trying to innovate. I think we’ll see more educational apps in schools, and that will be great. I’m not trying to dump on Alpha School. I just think the underperformance arc of MOOCs should temper our enthusiasm.

Students still need to learn principles

Sometimes I get weeks in the summer that are more research focused. This past week was very much a teaching- and service-focused week at my university. I haven’t had any time to ponder topics related to research or current events. So, I will share what I’ve been telling my fellow college educators. This will sound backward to some and like common sense to others. Feel free to comment with your thoughts.

College professors who teach 200-level or “principles” classes should not change all that much in response to AI. Students still need to know something. There need to be a few concepts and vocabulary words in their heads. For example, a person cannot use a calculator effectively if they do not know what a square root is at all.

I see highly trained mid-career professionals bragging about how they get ChatGPT to do their work. Can a 20-year-old do that if they don’t know what words to use in a prompt? How does vibe coding go for people who never learned to write out a single line of code? (not a question I have an expert answer to right now)

We should largely be sticking to the “old ways” and at least to some extent still require memorization. Having an exam on paper is a good way to ensure that the students can form coherent thoughts of their own, when possible.

Indeed, students might become AI jockeys when they get to the workplace. A 400-level class would be a good place for them to start heavily integrating AI tools to accomplish tasks and do projects. For anyone unfamiliar with American college categories, that would mean that an undergraduate might heavily use AI tools in their 4th and final year of study.

AI makes a great tutor for learning and reinforcing principles, but it should not serve as a replacement test-taker. A human who cannot read and write will not be able to take full advantage of an intelligent machine in the next decade. Voice recognition is getting very good and the models are getting more agentic, so this might all change if we can keep the data centers on long enough. In the future, you might argue that having students write an exam answer by hand is as superfluous as teaching them to play the violin.

As of 2025, what you might see is some teachers who feel pressured to claim they are integrating AI more than they actually want to. A relative in a corporate job whom I talked to this summer told me that she feels intense pressure at work to be able to claim that she’s using AI. Anyone who doesn’t have the appearance of embracing AI looks behind the times, or expendable!

Will LLMs get us the Missing Data for Solving Physics?

Tyler suggested that a “smarter” LLM could not master the unconquered intellectual territory of integrating general relativity and quantum mechanics.

Forget passing Ph.D.-level qualifying exams. (j/k, James) Are the AIs going to significantly surpass human efforts at generating new knowledge?

What exactly is the barrier to solving the fundamental mysteries of physics? How do we experimentally confirm that all matter breaks down to vibrating strings?

In a podcast episode of Within Reason, Brian Greene says that we can imagine an experiment that would test the proposed unifying String Theory. The Large Hadron Collider is not big enough (17 miles in circumference is too small). We would need a particle accelerator as big as a galaxy.

ChatGPT isn’t going to get us there. However, Brian Greene did suggest that there is a possibility that an advance in mathematics could get us closer to being able to work with the data we have.

Ben Yeoh summarized what he heard from Tyler et al. at a live event about how fast AI will accelerate the growth of our knowledge. They warned that some areas will hit bottlenecks and therefore will not advance very fast. Anything that requires clinical trials, for example, isn’t going to proceed at breakneck speed. Ben warns that “Protein folding was a rare success,” so we shouldn’t get too excited about acceleration in biotech. If advances in physics require bigger and better physical tools to make more advanced experimental observations, then new AI might not get us far.

However, one of the categories that made Yeoh’s list of where new AI might accelerate progress is “mathematics,” because developing new theories does not face the same kind of physical constraints.

So, to the extent that String Theory is a capital-intensive field, we are unlikely to obtain new definitive tests of it. If AI advances are going to resolve this empirical question in my lifetime, the most likely path is through advances in mathematics that reduce our reliance on new observational data.

Related links:
my article for the Gospel Coalition – We are not “building God,” despite some claims.
my article for EconLog – AI will be constrained by the same problem that David Hume faced. AI can predict what is likely to occur in the future based on what it has observed in the past.

“The big upward trend in Generative AI/LLM tool use in 2025 continues but may be slowing.” Have we reached a plateau, at least temporarily? Have we already experienced the big upswing in productivity, and is it going to level out now? Will programming at least be less painful forever after?

“LLM Hallucination of Citations in Economics Persists with Web-Enabled Models” I realize that, as of today, you can pay for yet-better models than what we tested. But if web-enabled 4o can’t cite Krugman properly, you do wonder whether “6o” will be integrating general relativity and quantum mechanics. A slightly longer context window probably isn’t going to do it.

Meta Is Poaching AI Talent With $100 Million Pay Packages; Will This Finally Create AGI?

This month I have run across articles noting that Meta’s Mark Zuckerberg has been making mind-boggling pay offers (like $100 million/year for 3-4 years) to top AI researchers at other companies, plus the promise of huge resources and even (gasp) personal access to Zuck himself. Reports indicate that he is succeeding in hiring around 50 brains from OpenAI (home of ChatGPT), Anthropic, Google, and Apple. Maybe this concentration of human intelligence will result in the long-craved artificial general intelligence (AGI) being realized; there seems to be some recognition that the current Large Language Models will not get us there.

There are, of course, other interpretations being put on this maneuver. Some talking heads on a Bloomberg podcast speculated that Zuckerberg was deliberately using Meta’s mighty cash flow to starve competitors of top AI talent. They also speculated that, since there is a limit to how much money you can possibly (pleasurably) spend, a rational outcome of paying some guy $100 million in a year would be for him to quit and spend the rest of his life hanging out at the beach. (That, of course, is what Bloomberg finance types might think, since they measure worth mainly in terms of money, not in the fun of doing cutting-edge R&D.)

I found a thread on reddit to be insightful and amusing, and so I post chunks of it below. Here is the earnest, optimist OP:

andsi2asi

Zuckerberg’s ‘Pay Them Nine-Figure Salaries’ Stroke of Genius for Building the Most Powerful AI in the World

Frustrated by Yann LeCun’s inability to advance Llama to where it is seriously competing with top AI models, Zuckerberg has decided to employ a strategy that makes consummate sense.

To appreciate the strategy in context, keep in mind that OpenAI expects to generate $10 billion in revenue this year, but will also spend about $28 billion, leaving it in the red by about $18 billion. My main point here is that we’re talking big numbers.

Zuckerberg has decided to bring together 50 ultra-top AI engineers by enticing them with nine-figure salaries. Whether they will be paid $100 million or $300 million per year has not been disclosed, but it seems like they will be making a lot more in salary than they did at their last gig with Google, OpenAI, Anthropic, etc.

If he pays each of them $100 million in salary, that will cost him $5 billion a year. Considering OpenAI’s expenses, suddenly that doesn’t sound so unreasonable.

I’m guessing he will succeed at bringing this AI dream team together. It’s not just the allure of $100 million salaries. It’s the opportunity to build the most powerful AI with the most brilliant minds in AI. Big win for AI. Big win for open source

And here are some wry responses:

kayakdawg

counterpoint 

a. $5B is just for those 50 researchers, loootttaaa other costs to consider

b. zuck has a history of burning big money on r&d with theoretical revenue that doesnt materialize

c. brooks law: creating agi isn’t an easily divisible job – in fact, it seems reasonable to assume that the more high-level experts enter the project the slower it’ll progress given the communication overhead

7FootElvis

Exactly. Also, money alone doesn’t make leadership effective. OpenAI has a relatively single focus. Meta is more diversified, which can lead to a lack of necessary vision in this one department. Passion, if present at the top, is also critical for bleeding edge advancement. Is Zuckerberg more passionate than Altman about AI? Which is more effective at infusing that passion throughout the organization?

….

dbenc

and not a single AI researcher is going to tell Zuck “well, no matter how much you pay us we won’t be able to make AGI”

meltbox

I will make the AI by one year from now if I am paid $100m

I just need total blackout so I can focus. Two years from now I will make it run on a 50w chip.

I promise

Hallucination as a User Error

You don’t use a flat head screwdriver to drill a hole in a board. You should know to use a drill.

I appreciate getting feedback on our manuscript, “LLM Hallucination of Citations in Economics Persists with Web-Enabled Models,” via X/Twitter. @_jannalulu wrote: “that paper only tested 4o (which arguably is a bad enough model that i almost never use it).”

Since the scope and frequency of hallucinations came as a surprise to many LLM users, hallucinations have often been used as a ‘gotcha’ to criticize AI optimists. People, myself included, have sounded the alarm that hallucinations could infiltrate articles, emails, and medical diagnoses.

The feedback I got from power users on Twitter this week made me think that there might be a cultural shift in the medium term. (Yes, we are always looking for someone to blame.) Hallucinations will be considered the fault of the human user who should have:

  1. Used a better model (learn your tools)
  2. Written a better prompt (learn how to use your tools)
  3. Avoided assigning the wrong task to LLMs (it has been known for over two years that general LLM models hallucinate citations). What did you expect from “generative” AI? LLMs are telling you what literature ought to exist, as opposed to what does exist.
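To make points 1 and 2 concrete, here is a minimal sketch of the kind of prompt discipline power users have in mind. It assumes the OpenAI Python client; the model name, topic, and exact prompt wording are illustrative choices on my part, not the setup from our paper.

```python
# A minimal sketch: ask for citations with explicit constraints.
# The model name and prompt wording here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Summarize recent research on minimum wage employment effects. "
    "Include citations only from published journal articles. "
    "Provide the citations in a separate list, with author, year in "
    "parentheses, and journal for each citation. If you are unsure "
    "whether a paper exists, say so instead of inventing one."
)

response = client.chat.completions.create(
    model="gpt-4o",  # point 1: pick the strongest model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Even with a prompt like this, point 3 still applies: whatever list comes back is a set of claims to verify, not a bibliography.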

My Perfunctory Intern

A couple of years ago, my co-blogger Mike described his productive but novice intern. The helper could summarize expert opinion, but they had no real understanding of their own. To boot, they were fast and tireless. Of course, he was talking about ChatGPT. Joy has also written in multiple places about the errors made by ChatGPT, including fake citations.

I use ChatGPT Pro, which has web access, and my experience is that it is not so tireless. Much like Mike, I have used ChatGPT to help me write Python code. I know the basics of Python and how to read a lot of it. However, the multitude of methods and possible arguments are not nestled firmly in my skull. I’m much faster at reading Python code than at writing it. Therefore, ChatGPT has been amazing… mostly.
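Here is a concrete, hypothetical example of the gap that ChatGPT fills for me. I can read a call like the one below at a glance, but I would not remember every keyword argument well enough to write it unaided; pandas and the file name are assumptions for illustration only.

```python
# Hypothetical example: easy to read, harder to write from memory.
import pandas as pd

df = pd.read_csv(
    "gdp_quarterly.csv",                  # illustrative file name
    usecols=["date", "gdp", "deflator"],  # keep only the columns we need
    parse_dates=["date"],                 # convert the date column to datetimes
    index_col="date",                     # and use it as the index
    na_values=["."],                      # treat "." as missing
)
df["real_gdp"] = df["gdp"] / df["deflator"] * 100
```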

I have found that ChatGPT is more like an intern than many suppose:

Continue reading

Counting Hallucinations by Web-Enabled LLMs

In 2023, we gathered the data for what became “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.” Since then, LLM use has increased. A 2025 survey from Elon University estimates that half of Americans now use LLMs. In the Spring of 2025, we used the same prompts, based on the JEL categories, to obtain a comprehensive set of responses from LLMs about topics in economics.

Our new report on the state of citations is available at SSRN: “LLM Hallucination of Citations in Economics Persists with Web-Enabled Models”

What did we find? Would you expect the models to have improved since 2023? LLMs have gotten better and are passing ever more of what used to be considered difficult tests. (Remember the Turing Test? Anyone?) ChatGPT can pass the bar exam for new lawyers. And yet, if you ask ChatGPT to write a document in the capacity of a lawyer, it will keep making the mistake of hallucinating fake references. Hence, we keep seeing headlines like, “A Utah lawyer was punished for filing a brief with ‘fake precedent’ made up by artificial intelligence.”

What we call GPT-4o WS (Web Search) in the figure below was queried in April 2025. This “web-enabled” language model is enhanced with real-time internet access, allowing it to retrieve up-to-date information rather than relying solely on static training data. This means it can answer questions about current events, verify facts, and provide live data—something traditional models, which are limited to their last training cutoff, cannot do. While standard models generate responses based on patterns learned from past data, web-enabled models can supplement that with fresh, sourced content from the web, improving accuracy for time-sensitive or niche topics.

At least one third of the references provided by GPT-4o WS were not real! Performance has not significantly improved to the point where AI can write our papers with properly incorporated attribution of ideas. We also found that the web-enabled model would pull from lower quality sources like Investopedia even when we explicitly stated in the prompt, “include citations from published papers. Provide the citations in a separate list, with author, year in parentheses, and journal for each citation.” Even some of the sources that were not journal articles were cited incorrectly. We provide specific examples in our paper.
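For readers who wonder what checking a reference involves, here is a rough sketch that looks up a citation string against Crossref’s public bibliographic search and flags any with no plausible match. This is not the verification procedure from our paper; the matching rule and score threshold are arbitrary assumptions.

```python
# Rough sketch of a citation sanity check against Crossref.
# Not the procedure from our paper; the threshold is an illustrative assumption.
import requests

def looks_real(citation: str, threshold: float = 60.0) -> bool:
    """Return True if Crossref finds a reasonably close bibliographic match."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return bool(items) and items[0].get("score", 0.0) >= threshold

citation = ("Krugman, P. (1991). Increasing Returns and Economic Geography. "
            "Journal of Political Economy.")
print(looks_real(citation))  # hallucinated references usually match nothing or score poorly
```

A check like this can catch fabricated papers, but not the subtler problem we also describe: real sources cited incorrectly.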

In closing, consider this quote from an interview with Jack Clark, co-founder of Anthropic:

The best they had was a 60 percent success rate. If I have my baby, and I give her a robot butler that has a 60 percent accuracy rate at holding things, including the baby, I’m not buying the butler.

Illusions of Illusions of Reasoning

In just the few days since Scott’s post on Tuesday of this week, a new response has been launched, titled “The Illusion of the Illusion of the Illusion of Thinking.”

Abstract (emphasis added by me): A recent paper by Shojaee et al. (2025), The Illusion of Thinking, presented evidence of an “accuracy collapse” in Large Reasoning Models (LRMs), suggesting fundamental limitations in their reasoning capabilities when faced with planning puzzles of increasing complexity. A compelling critique by Opus and Lawsen (2025), The Illusion of the Illusion of Thinking, argued these findings are not evidence of reasoning failure but rather artifacts of flawed experimental design, such as token limits and the use of unsolvable problems. This paper provides a tertiary analysis, arguing that while Opus and Lawsen correctly identify critical methodological flaws that invalidate the most severe claims of the original paper, their own counter-evidence and conclusions may oversimplify the nature of model limitations. By shifting the evaluation from sequential execution to algorithmic generation, their work illuminates a different, albeit important, capability. We conclude that the original “collapse” was indeed an illusion created by experimental constraints, but that Shojaee et al.’s underlying observations hint at a more subtle, yet real, challenge for LRMs: a brittleness in sustained, high-fidelity, step-by-step execution. The true illusion is the belief that any single evaluation paradigm can definitively distinguish between reasoning, knowledge retrieval, and pattern execution.

Since I am writing a new manuscript about hallucination by web-enabled models, this debate is close to what I am working on. Conjuring up fake academic references might point to a lack of true reasoning ability.

Do Pro and Dantas believe that LLMs can reason? What they are saying, at least, is that evaluating AI reasoning is difficult. In their words, the whole back-and-forth “highlights a key challenge in evaluation: distinguishing true, generalizable reasoning from sophisticated pattern matching of familiar problems…”

The fact that the first sentence of the paper contains the bigram “true reasoning” is interesting in itself. No one doubts that LLMs are reasoning anymore, at least within their own sandboxes. Hence there have been Champagne jokes going around of this sort:

If you’d like to read a response coming from o3 itself, Tyler pointed me to this: