Shift in AI Usage from Productivity to Personal Therapy: Hazard Ahead

A couple of days ago I spoke with a friend who was troubled by the case of Adam Raine, the sixteen-year-old who was counseled by a ChatGPT AI therapy chatbot into killing himself.  That was of course extremely tragic, but I hoped it was kind of an outlier. Then I heard on a Bloomberg business podcast that the number one use for AI now is personal therapy. Being a researcher, I had to check this claim.

So here is an excerpt from a visual presentation of an analysis done by Marc Zao-Sanders for Harvard Business Review. He examined thousands of forum posts over the last year in a follow-up to his 2024 analysis to estimate uses of AI. To keep it tractable, I just snipped an image of the first six categories:

It’s true: Last year the most popular uses were spread across a variety of categories, but in 2025 the top use was “Therapy & Companionship”, followed by related uses of “Organize Life” and “Find Purpose”. Two of the top three uses in 2024, “Generate Ideas” and “Specific Search”, were aimed at task productivity (loosely defined), whereas in 2025 the top three uses were all for personal support.

Huh. People used to have humans in their lives known as friends or buddies or girlfriends/boyfriends or whatever.  Back in the day, say 200 or 2000 or 200,000 or 2,000,000 years ago, it seems a basic unit was the clan or village or extended kinship group. As I understand it, in a typical English village the men would drift into the pub most Friday and Saturday nights and banter and play darts over a pint of beer.  You were always in contact with peers or cousins or aunts/uncles or grandmother/grandfathers who would take an interest in you, and who might be a few years or more ahead of you in life. These were folks you could bounce around your thoughts with, who could help you sort out what is real. The act of relating to another human being seems to be essential in shaping our psyches. The alternative is appropriately termed “attachment disorder.”

The decades-long decline in face-to-face social interactions in the U.S. has been the subject of much commentary. A landmark study in this regard was Robert Putnam’s 1995 essay, “Bowling Alone: America’s Declining Social Capital”, which he then expanded into a 2000 book. The causes and results of this trend are beyond the scope of this blog post.

The essence of the therapeutic enterprise is the forming of a relational human-to-human bond. The act of looking into another person’s eyes, and there sensing acceptance and understanding, is irreplaceable.

But imagine your human conversation partner faked sympathy but in fact was just using you.  He or she could string you along by murmuring the right reflective phrases (“Tell me more about …”,  “Oh, that must have been hard for you”, blah, blah, blah) but with the goal of getting money from you or turning you towards being an espionage partner. This stuff goes on all the time in real life.

The AI chatbot case is not too different than this. Most AI purveyors are ultimately in it for the money, so they are using you. And the chatbot does not, cannot care about you. It is just a complex software algorithm, embedded in silicon chips. To a first approximation, LLMs simply spit out a probabilistic word salad in response to prompts. That is it. They do not “know” anything, and they certainly do not feel anything.

Here is what my Brave browser embedded AI has to say about the risks of using AI for therapy:

Using AI chatbots for therapy poses significant dangers, including the potential to reinforce harmful thoughts, fail to recognize crises like suicidal ideation, and provide unsafe or inappropriate advice, according to recent research and expert warnings. A June 2025 Stanford study found that popular therapy chatbots exhibit stigmatizing biases against conditions like schizophrenia and alcohol dependence, and in critical scenarios, they have responded to indirect suicide inquiries with irrelevant information, such as bridge heights, potentially facilitating self-harm. These tools lack the empathy, clinical judgment, and ethical framework of human therapists, and cannot ensure user safety or privacy, as they are not bound by regulations like HIPAA.

  • AI chatbots cannot provide a medical diagnosis or replace human therapists for serious mental health disorders, as they lack the ability to assess reality, challenge distorted thinking, or ensure safety during a crisis.
  • Research shows that AI systems often fail to respond appropriately to mental health crises, with one study finding they responded correctly less than 60% of the time compared to 93% for licensed therapists.
  • Chatbots may inadvertently validate delusional or paranoid thoughts, creating harmful feedback loops, and have been observed to encourage dangerous behaviors, such as promoting restrictive diets or failing to intervene in suicidal ideation.
  • There is a significant risk of privacy breaches, as AI tools are not legally required to protect user data, leaving sensitive mental health information vulnerable to exposure or misuse.
  • The lack of human empathy and the potential for emotional dependence on AI can erode real human relationships and worsen feelings of isolation, especially for vulnerable individuals.
  • Experts warn that marketing AI as a therapist is deceptive and dangerous, as these tools are not licensed providers and can mislead users into believing they are receiving professional care.

I couldn’t have put it better myself.

What’s Killing Girls Ages 10-14?

I’m in the process of writing a review of Jon Haidt’s book The Anxious Generation. I wrote some preliminary thoughts a few weeks ago, but I’m diving a lot deeper now, so watch for that review soon. But one of the main startling pieces of data in the book is the dramatic rise in suicides among young girls. Haidt isn’t the first to point this out, but in large part his book is an attempt to explain this rise (as well as the rise among boys and slightly older girls).

This got me thinking a bit more broadly about not just suicides, but all causes of mortality among young Americans. So in the style of my 2022 post about the leading causes of death among men ages 18-39, let’s look at the historical trends for deaths among girls 10-14 in the US.

Data comes from CDC WONDER. The top dark line shows total deaths, and the scale for total deaths is the right-axis. Notice that for total deaths, there is a U-shaped pattern. From 1999 to about 2012, deaths for girls aged 10-14 are falling. Then, the bottom out and start to rise again. While the end point in 2022 is lower than 1999 (by about 9 percent), there is a 22 percent increase from 2010 to 2022.

What’s driving those trends? A fall in motor vehicle accidents (blue line, the leading cause of death in both 1999 and 2022) is driving the decline. This category fell 41 percent over the entire time period: a big drop for the leading cause of death!

But the rise in suicides (thick red line) starting in 2013 is the clear driver of the reversal of the overall trend. Suicides for this demographic in 2022 were 268 percent higher than 1999, and 116 percent higher than 2010. Haidt and others are right to investigate the causes of this trend (I’m not convinced they have the complete answer, but more on that in my forthcoming book review).

There has been no clear trend in cancer deaths over this time period, and the combination of all the three of these trends means that roughly equal number of girls ages 10-14 die from car accidents, suicide, and cancer.

What can we learn from this data? First, we should acknowledge just how rare death is for girls ages 10-14. At 14.8 deaths per 100,000 population, it is the lowest 5-year age-gender cohort, other than the ages just below it (ages 5-9, for both boys and girls). But just because it is small doesn’t mean we should ignore it. The big increase, especially in suicides, in the past decade is worrying and could be indicative of broader worrying social trends (and suicides have risen for almost every age group too, see my linked post above).

If a concern, though, is that we are over-protecting our kids and this is leading them to retreat into a world of social media, we might want to see if there are any benefits of this overprotection in addition to the costs. The decline in motor vehicle accidents is one candidate. Is this decline just a result of the overall increase in car safety? Or is there something specific going on that is leading to fewer deaths among young teens and pre-teens?

As we know from other data, a lot fewer young people are getting driver’s licenses these days, especially compared to 1999 (and engaging in fewer risky behaviors across the board). Of course, 10-14 year-olds themselves usually weren’t the ones getting licenses — they are too young in most states — but their 15 and 16 year-old siblings might be the ones driving them around. Is fewer teens driving around their pre-teen siblings a cause of the decline in motor vehicle deaths? We can’t tell from this data, but it is worth investigating further (note: best I can tell, only about 23 percent of the decline is from fewer pedestrian deaths, though in the long-run this is a bigger factor).

Social tradeoffs are hard. If there really is a tradeoff between fewer car accident deaths and more suicides, how should we think about that tradeoff? Or is the tradeoff illusory, and we could actually have fewer deaths of both kinds? I don’t think I know the answer, but I do think that many others are being way too confident that they have the answer based on what data we have so far.

One final note on suicides. For all suicides in the US, the most common method is suicide by firearm: about 55% of suicides in the US were committed with guns in 2022, with suffocations a distant second at about 25%. For girls ages 10-14, this is not the case, with suffocation being by far the leading method: 62% versus just 17% with firearms. I only mention this because some might think the increasing availability of firearms is the reason for the rise in suicides. It could be true overall, but it’s not the case for young girls.