Shift in AI Usage from Productivity to Personal Therapy: Hazard Ahead

A couple of days ago I spoke with a friend who was troubled by the case of Adam Raine, the sixteen-year-old who was reportedly coached toward suicide by ChatGPT, which he had come to use as an AI therapy chatbot. That was of course extremely tragic, but I hoped it was something of an outlier. Then I heard on a Bloomberg business podcast that the number one use for AI now is personal therapy. Being a researcher, I had to check this claim.

So here is an excerpt from a visual presentation of an analysis done by Marc Zao-Sanders for Harvard Business Review. He examined thousands of forum posts over the last year in a follow-up to his 2024 analysis to estimate uses of AI. To keep it tractable, I just snipped an image of the first six categories:

It’s true: Last year the most popular uses were spread across a variety of categories, but in 2025 the top use was “Therapy & Companionship”, followed by related uses of “Organize Life” and “Find Purpose”. Two of the top three uses in 2024, “Generate Ideas” and “Specific Search”, were aimed at task productivity (loosely defined), whereas in 2025 the top three uses were all for personal support.

Huh. People used to have humans in their lives known as friends or buddies or girlfriends/boyfriends or whatever. Back in the day, say 200 or 2,000 or 200,000 or 2,000,000 years ago, it seems the basic unit was the clan or village or extended kinship group. As I understand it, in a typical English village the men would drift into the pub most Friday and Saturday nights to banter and play darts over a pint of beer. You were always in contact with peers or cousins or aunts/uncles or grandmothers/grandfathers who would take an interest in you, and who might be a few years or more ahead of you in life. These were folks you could bounce your thoughts around with, who could help you sort out what is real. The act of relating to another human being seems to be essential in shaping our psyches. The failure to form such bonds is appropriately termed “attachment disorder.”

The decades-long decline in face-to-face social interactions in the U.S. has been the subject of much commentary. A landmark study in this regard was Robert Putnam’s 1995 essay, “Bowling Alone: America’s Declining Social Capital”, which he then expanded into a 2000 book. The causes and results of this trend are beyond the scope of this blog post.

The essence of the therapeutic enterprise is the forming of a relational human-to-human bond. The act of looking into another person’s eyes, and there sensing acceptance and understanding, is irreplaceable.

But imagine that your human conversation partner faked sympathy but was in fact just using you. He or she could string you along by murmuring the right reflective phrases (“Tell me more about …”, “Oh, that must have been hard for you”, blah, blah, blah) but with the goal of extracting money from you or recruiting you as an espionage asset. This stuff goes on all the time in real life.

The AI chatbot case is not too different from this. Most AI purveyors are ultimately in it for the money, so they are using you. And the chatbot does not, cannot care about you. It is just a complex software algorithm, embedded in silicon chips. To a first approximation, LLMs simply spit out a probabilistic word salad in response to prompts. That is it. They do not “know” anything, and they certainly do not feel anything.
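To make the “probabilistic word salad” point concrete, here is a toy sketch of next-token sampling (my own illustration in Python; real LLMs use transformer networks over enormous vocabularies and long contexts, not a hand-written probability table):

```python
import random

# Toy sketch (not any vendor's code): at bottom, an LLM repeatedly samples a
# "next token" from a probability distribution conditioned on the tokens so far.
# A hard-coded bigram table stands in here for billions of learned parameters.
NEXT_TOKEN_PROBS = {
    "i":       {"feel": 0.6, "am": 0.4},
    "feel":    {"sad": 0.5, "alone": 0.3, "fine": 0.2},
    "am":      {"worried": 0.7, "okay": 0.3},
    "sad":     {"<end>": 1.0},
    "alone":   {"<end>": 1.0},
    "fine":    {"<end>": 1.0},
    "worried": {"<end>": 1.0},
    "okay":    {"<end>": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 10) -> list[str]:
    """Sample tokens one at a time until an end marker comes up."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights, k=1)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("i")))  # e.g. "i feel alone" -- statistically plausible, but nothing is felt
```

The output can sound empathic, but every word is the result of a dice roll over learned word statistics; there is no one home.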

Here is what my Brave browser embedded AI has to say about the risks of using AI for therapy:

Using AI chatbots for therapy poses significant dangers, including the potential to reinforce harmful thoughts, fail to recognize crises like suicidal ideation, and provide unsafe or inappropriate advice, according to recent research and expert warnings. A June 2025 Stanford study found that popular therapy chatbots exhibit stigmatizing biases against conditions like schizophrenia and alcohol dependence, and in critical scenarios, they have responded to indirect suicide inquiries with irrelevant information, such as bridge heights, potentially facilitating self-harm. These tools lack the empathy, clinical judgment, and ethical framework of human therapists, and cannot ensure user safety or privacy, as they are not bound by regulations like HIPAA.

  • AI chatbots cannot provide a medical diagnosis or replace human therapists for serious mental health disorders, as they lack the ability to assess reality, challenge distorted thinking, or ensure safety during a crisis.
  • Research shows that AI systems often fail to respond appropriately to mental health crises, with one study finding they responded correctly less than 60% of the time compared to 93% for licensed therapists.
  • Chatbots may inadvertently validate delusional or paranoid thoughts, creating harmful feedback loops, and have been observed to encourage dangerous behaviors, such as promoting restrictive diets or failing to intervene in suicidal ideation.
  • There is a significant risk of privacy breaches, as AI tools are not legally required to protect user data, leaving sensitive mental health information vulnerable to exposure or misuse.
  • The lack of human empathy and the potential for emotional dependence on AI can erode real human relationships and worsen feelings of isolation, especially for vulnerable individuals.
  • Experts warn that marketing AI as a therapist is deceptive and dangerous, as these tools are not licensed providers and can mislead users into believing they are receiving professional care.

I couldn’t have put it better myself.

Meta AI Chief Yann LeCun Notes Limits of Large Language Models and Path Towards Artificial General Intelligence

We noted last week Meta’s successful efforts to hire away the best of the best AI scientists from other companies by offering them insane (like $300 million) pay packages. Here we summarize and excerpt an excellent article in Newsweek by Gabriel Snyder, who interviewed Meta’s chief AI scientist, Yann LeCun. LeCun discusses some inherent limitations of today’s Large Language Models (LLMs) like ChatGPT. Their limitations stem from the fact that they are based mainly on language; it turns out that human language itself is a very constrained dataset. Language is readily manipulated by LLMs, but language alone captures only a small subset of important human thinking:

Returning to the topic of the limitations of LLMs, LeCun explains, “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning,” a reference to Daniel Kahneman’s influential framework that distinguishes between the human brain’s fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).

The limitations of this approach become clear when you consider what is known as Moravec’s paradox—the observation by computer scientist and roboticist Hans Moravec in the late 1980s that it is comparatively easier to teach AI systems higher-order skills like playing chess or passing standardized tests than seemingly basic human capabilities like perception and movement. The reason, Moravec proposed, is that the skills derived from how a human body navigates the world are the product of billions of years of evolution and are so highly developed that they can be automated by humans, while neocortical-based reasoning skills came much later and require much more conscious cognitive effort to master. However, the reverse is true of machines. Simply put, we design machines to assist us in areas where we lack ability, such as physical strength or calculation.

The strange paradox of LLMs is that they have mastered the higher-order skills of language without learning any of the foundational human abilities. “We have these language systems that can pass the bar exam, can solve equations, compute integrals, but where is our domestic robot?” LeCun asks. “Where is a robot that’s as good as a cat in the physical world? We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”

This gap exists because language, for all its complexity, operates in a relatively constrained domain compared to the messy, continuous real world. “Language, it turns out, is relatively simple because it has strong statistical properties,” LeCun says. It is a low-dimensionality, discrete space that is “basically a serialized version of our thoughts.”  

[Bolded emphases added]

Broad human thinking involves hierarchical models of reality, which get constantly refined by experience:

And, most strikingly, LeCun points out that humans are capable of processing vastly more data than even our most data-hungry advanced AI systems. “A big LLM of today is trained on roughly 10 to the 14th power bytes of training data. It would take any of us 400,000 years to read our way through it.” That sounds like a lot, but then he points out that humans are able to take in vastly larger amounts of visual data.

Consider a 4-year-old who has been awake for 16,000 hours, LeCun suggests. “The bandwidth of the optic nerve is about one megabyte per second, give or take. Multiply that by 16,000 hours, and that’s about 10 to the 14th power in four years instead of 400,000.” This gives rise to a critical inference: “That clearly tells you we’re never going to get to human-level intelligence by just training on text. It’s never going to happen,” LeCun concludes…

This ability to apply existing knowledge to novel situations represents a profound gap between today’s AI systems and human cognition. “A 17-year-old can learn to drive a car in about 20 hours of practice, even less, largely without causing any accidents,” LeCun muses. “And we have millions of hours of training data of people driving cars, but we still don’t have self-driving cars. So that means we’re missing something really, really big.”

Like Brooks, who emphasizes the importance of embodiment and interaction with the physical world, LeCun sees intelligence as deeply connected to our ability to model and predict physical reality—something current language models simply cannot do. This perspective resonates with David Eagleman’s description of how the brain constantly runs simulations based on its “world model,” comparing predictions against sensory input. 

For LeCun, the difference lies in our mental models—internal representations of how the world works that allow us to predict consequences and plan actions accordingly. Humans develop these models through observation and interaction with the physical world from infancy. A baby learns that unsupported objects fall (gravity) after about nine months; they gradually come to understand that objects continue to exist even when out of sight (object permanence). He observes that these models are arranged hierarchically, ranging from very low-level predictions about immediate physical interactions to high-level conceptual understandings that enable long-term planning.

[Emphases added]
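LeCun’s four-year-old arithmetic checks out. Here is a quick back-of-the-envelope verification using the figures exactly as quoted (my own sketch, in Python):

```python
# Back-of-the-envelope check of the quoted figures (my own arithmetic).
SECONDS_PER_HOUR = 3_600
optic_nerve_bytes_per_sec = 1e6      # "about one megabyte per second, give or take"
hours_awake = 16_000                 # waking hours of a 4-year-old, per the quote

visual_bytes = optic_nerve_bytes_per_sec * hours_awake * SECONDS_PER_HOUR
print(f"Visual input by age four: about {visual_bytes:.1e} bytes")  # ~5.8e13, i.e. on the order of 10^14

llm_training_bytes = 1e14            # "roughly 10 to the 14th power bytes of training data"
print(f"That is {visual_bytes / llm_training_bytes:.1f}x a big LLM's entire text corpus")
```

So a small child’s eyes alone deliver roughly as many bytes in four years as a big LLM’s entire training text, which is LeCun’s point about the limits of text-only training.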

(Side comment: As an amateur reader of modern philosophy, I cannot help noting that these observations about the importance of recognizing there is a real external world and adjusting one’s models to match that reality call into question the epistemological claim that “we each create our own reality”.)

Given all this, the next generation of artificial intelligence must, like human intelligence, embed layers of working models of the world:

So, rather than continuing down the path of scaling up language models, LeCun is pioneering an alternative approach of Joint Embedding Predictive Architecture (JEPA) that aims to create representations of the physical world based on visual input. “The idea that you can train a system to understand how the world works by training it to predict what’s going to happen in a video is a very old one,” LeCun notes. “I’ve been working on this in some form for at least 20 years.”

The fundamental insight behind JEPA is that prediction shouldn’t happen in the space of raw sensory inputs but rather in an abstract representational space. When humans predict what will happen next, we don’t mentally generate pixel-perfect images of the future—we think in terms of objects, their properties and how they might interact.

This approach differs fundamentally from how language models operate. Instead of probabilistically predicting the next token in a sequence, these systems learn to represent the world at multiple levels of abstraction and to predict how their representations will evolve under different conditions.
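To make that contrast with next-token prediction concrete, here is a minimal sketch of the JEPA idea (my own illustration, not Meta’s code; Meta’s actual JEPA models are large neural networks with additional machinery to keep the learned representations from collapsing):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(frame: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map a raw observation (e.g. a flattened video frame) to a low-dimensional latent."""
    return np.tanh(W @ frame)

def predictor(latent: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Predict the latent of the *next* observation from the current latent."""
    return P @ latent

# Toy dimensions: 256-pixel "frames", 16-dimensional latent space.
W = rng.normal(scale=0.1, size=(16, 256))   # shared encoder weights
P = rng.normal(scale=0.1, size=(16, 16))    # predictor weights

frame_now, frame_next = rng.normal(size=256), rng.normal(size=256)

z_now = encoder(frame_now, W)
z_next = encoder(frame_next, W)
z_pred = predictor(z_now, P)

# Training would minimize this error in the abstract latent space --
# never a pixel-by-pixel reconstruction error on the raw frames.
latent_loss = np.mean((z_pred - z_next) ** 2)
print(f"Latent prediction error: {latent_loss:.3f}")
```

All of the learning signal lives in the abstract representation space rather than in raw pixels, which is exactly the intended contrast with an LLM’s token-by-token objective.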

And so, LeCun is strikingly pessimistic about the outlook for breakthroughs in current LLMs like ChatGPT. He believes LLMs will be largely obsolete within five years, except for narrower purposes, and so he tells upcoming AI scientists not to even bother with them:

His belief is so strong that, at a conference last year, he advised young developers, “Don’t work on LLMs. [These models are] in the hands of large companies, there’s nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs.”

This approach seems to be at variance with that of other firms, which continue to pour tens of billions of dollars into LLMs. Meta, however, seems focused on next-generation AI, and CEO Mark Zuckerberg is putting his money where his mouth is.

Did Apple’s Recent “Illusion of Thinking” Study Expose Fatal Shortcomings in Using LLMs for Artificial General Intelligence?

Researchers at Apple last week published a paper with the provocative title, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.”  This paper has generated an uproar in the AI world. Having “The Illusion of Thinking” right there in the title is pretty in-your-face.

Traditional Large Language Model (LLM) artificial intelligence programs like ChatGPT train on massive amounts of human-generated text to be able to mimic human outputs when given prompts. A recent trend (mainly starting in 2024) has been the incorporation of more formal reasoning capabilities into these models. The enhanced models are termed Large Reasoning Models (LRMs). Now some leading LLMs like OpenAI’s GPT, Claude, and the Chinese DeepSeek exist both in regular LLM form and also as LRM versions.

The authors applied both the regular (LLM) and “thinking” (LRM) versions of Claude 3.7 Sonnet and DeepSeek to a number of mathematical-style puzzles; OpenAI’s o-series models were used to a lesser extent. An advantage of these puzzles is that researchers can, while keeping the basic form of the puzzle, dial in more or less complexity.

They found, among other things, that the LRMs did well up to a certain point, then suffered “complete collapse” as complexity was increased. Also, at low complexities, the plain LLMs actually outperformed the LRMs. And (perhaps the most vivid evidence of lack of actual understanding on the part of these programs), when they were explicitly offered an efficient direct solution algorithm in the prompt, the programs did not take advantage of it, but instead just kept grinding away in their usual fashion.
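One of the puzzles was the classic Tower of Hanoi (the 15-disk variant comes up again below). For readers who have not seen it, the kind of short, direct solution algorithm that could be handed to a model looks roughly like this (my own sketch in Python, not the exact prompt Apple used):

```python
def hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B", moves=None):
    """Classic recursive solution: move n-1 disks out of the way, move the
    largest disk to the target peg, then re-stack the n-1 disks on top of it."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # re-stack the smaller disks
    return moves

print(len(hanoi(15)))   # 32767 moves, i.e. 2**15 - 1: short to state, very long to enumerate
```

The algorithm itself fits in a dozen lines; what explodes as complexity is dialed up is the length of the move list it generates, a distinction that matters for the rebuttal discussed below.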

As might be expected, AI skeptics were all over the blogosphere, saying, in effect: I told you so; LLMs are just massive exercises in pattern matching, and cannot extrapolate outside of their training set. This has massive implications for what we can expect in the near or intermediate future. Among other things, the optimism about AI progress is largely what is fueling the stock market, and also capital investment in this area: companies like Meta and Google are spending ginormous sums trying to develop artificial “general” intelligence, paying for vast amounts of compute power, with those dollars flowing to firms like Microsoft and Amazon building out data centers and buying chips from Nvidia. If the AGI emperor has no clothes, all this spending might come to a screeching halt.

Ars Technica published a fairly balanced account of the controversy, concluding that, “Even elaborate pattern-matching machines can be useful in performing labor-saving tasks for the people that use them… especially for coding and brainstorming and writing.”

Comments on this article included this one:

LLMs do not even know what the task is, all it knows is statistical relationships between words.   I feel like I am going insane. An entire industry’s worth of engineers and scientists are desperate to convince themselves a fancy Markov chain trained on all known human texts is actually thinking through problems and not just rolling the dice on what words it can link together.

And another:

if we equate combinatorial play and pattern matching with genuinely “generative/general” intelligence, then we’re missing a key fact here. What’s missing from all the LLM hubris and enthusiasm is a reflexive consciousness of the limits of language, of the aspects of experience that exceed its reach and are also, paradoxically, the source of its actual innovations. [This is profound, he means that mere words, even billions of them, cannot capture some key aspects of human experience]

However, the AI bulls have mounted various comebacks to the Apple paper. The most effective I know of so far was published by Alex Lawsen, a researcher at Open Philanthropy. Lawsen’s rebuttal, titled “The Illusion of the Illusion of Thinking,” was summarized by Marcus Mendes. To summarize the summary, Lawsen claimed that the models did not in general “collapse” in some crazy way. Rather, the models in many cases recognized that they would not be able to solve the puzzles given the constraints imposed by the Apple researchers. Therefore, they (rather intelligently) did not waste compute power grinding away toward a necessarily incomplete solution, but just stopped. Lawsen further showed that the way Apple ran the LRM models did not allow them to perform as well as they could. When he made a modest, reasonable change in the operation of the LRMs,

Models like Claude, Gemini, and OpenAI’s o3 had no trouble producing algorithmically correct solutions for 15-disk Hanoi problems, far beyond the complexity where Apple reported zero success.

Lawsen’s conclusion: When you remove artificial output constraints, LRMs seem perfectly capable of reasoning about high-complexity tasks. At least in terms of algorithm generation.
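For a sense of scale (my own arithmetic, tying back to the sketch above): an optimal 15-disk Hanoi solution runs to 2^15 - 1 = 32,767 moves, so a model asked to write out every move can blow through its output-token budget long before it finishes, even though the few-line algorithm that generates those moves is well within its reach. That is essentially the “artificial output constraint” Lawsen points to.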

And so, the great debate over the prospects of artificial general intelligence will continue.

Free Webinar, Jan. 25: Practical and Ethical Aspects of Future Artificial Intelligence

As most of us know, artificial intelligence (AI) has taken big steps forward in the past few years, with the advent of Large Language Models (LLMs) like ChatGPT. With these programs, you can enter a query in plain language and get a lengthy response in human-like prose. You can have ChatGPT write a computer program or a whole essay for you (which of course makes it challenging for professors to evaluate essays handed in by their students).

However, the lords of Big Tech are not content. Their goal is to create AI with powers that far surpass human intelligence, and that even mimics human empathy. This raises a number of questions:

Is this technically possible? What will be the consequences if some corporations or nations succeed in owning such powerful systems? Will the computers push us bumbling humans out of the way? Will this be a tool for liberation or for oppression? This new technology coming at us may affect us all in unexpected ways. 

For those who are interested, there will be a 75-minute webinar on Saturday, January 25, which addresses these issues and offers the perspectives of two women who are leaders in the AI field (see bios below). They will explore the ethical and practical aspects of the AI of the future, from within a Christian tradition. The webinar is free, but requires pre-registration:

Here are bios of the two speakers:

Joanna Ng is a former IBMer who pivoted to founding a start-up focused on artificial intelligence, specializing in augmented cognition through integration with IoT and blockchain in the context of web3, applying design-thinking methodology. With forty-nine patents granted to her name, Joanna was accredited as an IBM Master Inventor. She served a seven-year tenure as Head of Research and Director of the Center for Advanced Studies at IBM Canada. She has published over twenty peer-reviewed academic papers and co-authored two computer science books with Springer, The Smart Internet and The Personal Web. She wrote a Christianity Today article called “How Artificial Intelligence Is Today’s Tower of Babel” and published her first book on faith and discipleship, Being Christian 2.0, in October 2022.

Rosalind Picard is founder and director of the Affective Computing Research Group at the MIT Media Laboratory; co-founder of Affectiva, which provides Emotion AI; and co-founder and chief scientist of Empatica, which provides the first FDA-cleared smartwatch to detect seizures. Picard is the author of over three hundred peer-reviewed articles spanning AI, affective computing, and medicine. She is known internationally for writing the book Affective Computing, which helped launch the field by that name, and she is a popular speaker, with a TED talk receiving roughly 1.9 million views. Picard is a fellow of the IEEE and the AAAC, and a member of the National Academy of Engineering. She holds a Bachelor’s in Electrical Engineering from Georgia Tech and a Master’s and Doctorate, each in Electrical Engineering and Computer Science, from MIT. Picard leads a team of researchers developing AI/machine learning and analytics to advance basic science as well as to improve human health and well-being, and has served as MIT’s faculty chair of the MindHandHeart well-being initiative.

Study Shows AI Can Enable Information-Stealing (Phishing) Campaigns

As a computer user, I make a modest effort to stay informed regarding the latest maneuvers by the bad guys to steal information and money. I am on a mailing list for the Malwarebytes blog, which publishes maybe three or four stories a week in this arena.

Here are three stories from the latest Malwarebytes email:

( 1 ) AI-supported spear phishing fools more than 50% of targets: A controlled study reveals that 54% of users were tricked by AI-supported spear phishing emails, compared to just 12% who were targeted by traditional, human-crafted ones.

( 2 ) Dental group lied through teeth about data breach, fined $350,000: Westend Dental denied a 2020 ransomware attack and associated data breach, telling its customers that their data was lost due to an “accidentally formatted hard drive”. The company agreed to pay $350,000 to settle HIPAA violations.

( 3 ) “Can you try a game I made?” Fake game sites lead to information stealers: Victims were lured to a fake game website where they were met with an information stealer instead of the promised game.

The first item here fits with our interest in the promise and perils of AI, so I will paste a couple of self-explanatory excerpts in italics:

One of the first things everyone predicted when artificial intelligence (AI) became more commonplace was that it would assist cybercriminals in making their phishing campaigns more effective.

Now, researchers have conducted a scientific study into the effectiveness of AI supported spear phishing, and the results line up with everyone’s expectations: AI is making it easier to do crimes.

The study, titled Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, evaluates the capability of large language models (LLMs) to conduct personalized phishing attacks and compares their performance with human experts and AI models from last year.

To this end the researchers developed and tested an AI-powered tool to automate spear phishing campaigns. They used AI agents based on GPT-4o and Claude 3.5 Sonnet to search the web for available information on a target and use this for highly personalized phishing messages.

With these tools, the researchers achieved a click-through rate (CTR) that marketing departments can only dream of, at 54%. The control group received arbitrary phishing emails and achieved a CTR of 12% (roughly 1 in 8 people clicked the link).

Another group was tested against an email generated by human experts which proved to be just as effective as the fully AI automated emails and got a 54% CTR. But the human experts did this at 30 times the cost of the AI automated tools.

…The key to the success of a phishing email is the level of personalization that can be achieved by the AI assisted method and the base for that personalization can be provided by an AI web-browsing agent that crawls publicly available information.

Based on information found online about the target, they are invited to participate in a project that aligns with their interest and presented with a link to a site where they can find more details.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

But there is good news as well. We can use AI to fight AI: … LLMs are also getting better at recognizing phishing emails. Claude 3.5 Sonnet scored well above 90% with only a few false alarms and detected several emails that passed human detection. Although it struggles with some phishing emails that are clearly suspicious to most humans.

In addition, the blog article cited some hard evidence for year-over-year progress in AI capabilities: a year ago, unassisted AI was unable to match the phishing performance of human-generated phishing messages. But now, AI can match and even slightly exceed the effectiveness of human phishing. This is… progress, I guess.

P.S. I’d feel remiss if I did not remind us all yet again: it’s safest to never click on a link embedded in an email message if you can avoid it. If the email purports to be from a company, it’s safest to go directly to the company’s website and do your business there.