Study Shows AI Can Enable Information-Stealing (Phishing) Campaigns

As a computer user, I make a modest effort to stay informed regarding the latest maneuvers by the bad guys to steal information and money. I am on a mailing list for the Malwarebytes blog, which publishes maybe three or four stories a week in this arena.

Here are three stories from the latest Malwarebytes email:

1. AI-supported spear phishing fools more than 50% of targets. A controlled study reveals that 54% of users were tricked by AI-supported spear phishing emails, compared to just 12% of those who received arbitrary, non-personalized ones.

2. Dental group lied through teeth about data breach, fined $350,000. Westend Dental denied a 2020 ransomware attack and the associated data breach, telling its customers that their data was lost due to an “accidentally formatted hard drive.” The company agreed to pay $350,000 to settle HIPAA violations.

3. “Can you try a game I made?” Fake game sites lead to information stealers. Victims were lured to a fake game website, where they were met with an information stealer instead of the promised game.

The first item here fits with our interest in the promise and perils of AI, so I will paste a couple of self-explanatory excerpts in italics:

One of the first things everyone predicted when artificial intelligence (AI) became more commonplace was that it would assist cybercriminals in making their phishing campaigns more effective.

Now, researchers have conducted a scientific study into the effectiveness of AI supported spear phishing, and the results line up with everyone’s expectations: AI is making it easier to do crimes.

The study, titled Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, evaluates the capability of large language models (LLMs) to conduct personalized phishing attacks and compares their performance with human experts and AI models from last year.

To this end the researchers developed and tested an AI-powered tool to automate spear phishing campaigns. They used AI agents based on GPT-4o and Claude 3.5 Sonnet to search the web for available information on a target and use this for highly personalized phishing messages.

With these tools, the researchers achieved a click-through rate (CTR) that marketing departments can only dream of, at 54%. The control group received arbitrary phishing emails and achieved a CTR of 12% (roughly 1 in 8 people clicked the link).

Another group was tested against an email generated by human experts which proved to be just as effective as the fully AI automated emails and got a 54% CTR. But the human experts did this at 30 times the cost of the AI automated tools.
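
The cost comparison in the excerpt above is easy to make concrete. The sketch below is mine, not from the study: the $0.10 per-email cost for the AI pipeline is a hypothetical number, while the 30x cost multiplier and the click-through rates come from the article.

```python
# Back-of-envelope cost per successful click, using the article's CTRs
# and 30x cost ratio. The $0.10 AI cost per email is an assumption.
ai_cost_per_email = 0.10                       # hypothetical, in dollars
human_cost_per_email = 30 * ai_cost_per_email  # article: ~30x the AI cost
ai_ctr = 0.54       # AI-personalized spear phishing
human_ctr = 0.54    # human-expert emails performed the same
generic_ctr = 0.12  # arbitrary (control) phishing emails

def cost_per_click(cost_per_email, ctr):
    """Expected spend to get one victim to click."""
    return cost_per_email / ctr

print(round(cost_per_click(ai_cost_per_email, ai_ctr), 2))        # ~0.19
print(round(cost_per_click(human_cost_per_email, human_ctr), 2))  # ~5.56
print(round(cost_per_click(ai_cost_per_email, generic_ctr), 2))   # ~0.83
```

Equal click-through rates at roughly a thirtieth of the cost is why the study reads as bad news: personalization that used to require expensive human effort is now nearly free.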

…The key to the success of a phishing email is the level of personalization that can be achieved by the AI assisted method and the base for that personalization can be provided by an AI web-browsing agent that crawls publicly available information.

Based on information found online about the target, they are invited to participate in a project that aligns with their interest and presented with a link to a site where they can find more details.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

But there is good news as well. We can use AI to fight AI: … LLMs are also getting better at recognizing phishing emails. Claude 3.5 Sonnet scored well above 90% with only a few false alarms and detected several emails that passed human detection. Although it struggles with some phishing emails that are clearly suspicious to most humans.

In addition, the blog article cited some hard evidence for year-over-year progress in AI capabilities: a year ago, unassisted AI was unable to match the phishing performance of human-generated phishing messages. Now, AI can match and even slightly exceed the effectiveness of human phishing. This is… progress, I guess.

P.S. I would feel remiss if I did not remind us all yet again: it is safest never to click a link embedded in an email message, if you can avoid it. If the email purports to be from a company, go directly to the company’s website and do your business there.

Humans are struggling to understand LLM Progress

Ajeya Cotra writes the following in “Language models surprised us” (recommended, with more details on benchmarks):

In 2021, most people were systematically and severely underestimating progress in language models. After a big leap forward in 2022, it looks like ML experts improved in their predictions of benchmarks like MMLU and MATH — but many still failed to anticipate the qualitative milestones achieved by ChatGPT and then GPT-4, especially in reasoning and programming.

Joy’s thoughts: A possible reason for underestimating the rate of progress is not just a misunderstanding of the technology but a missed estimate of how much money would be poured in. When Americans want to buy progress, they can (see also SpaceX).

I compare this to the Manhattan project. People said it couldn’t be done, not because it was physically impossible but because it would be too expensive.

After a briefing regarding the Manhattan Project, Nobel Laureate Niels Bohr said to physicist Edward Teller, “I told you it couldn’t be done without turning the whole country into a factory.” (https://www.energy.gov/lm/articles/ohio-and-manhattan-project)

We are doing it again. We are turning the country into a factory for AI. Without all that investment, the progress wouldn’t be so fast.

Joy on AI in Higher Education

I was interviewed for an article “Navigating AI in Christian Higher Education“. Here’s an excerpt:

Rosenberg: What impact do you foresee in your field due to the increasing sophistication of AI, and what kind of skills do you think your students will need to be successful?

Buchanan: AI will reshape economic analysis and modeling, making complex data processing and predictive analytics more accessible. This will lead to more sophisticated economic forecasting and policy design. Economists will become more productive, and expectations will rise accordingly. While some fields might resist change, economics will be at the forefront of AI integration.

For students aiming to succeed, it’s crucial to embrace AI tools without relying on them excessively during college. Strong fundamentals in economic theory and critical thinking remain essential, coupled with data science and programming skills.

Interdisciplinary knowledge, especially in tech and social sciences, will be valuable. Adaptability and lifelong learning are key in this evolving field. Human skills like creativity, communication, and ethical reasoning will remain crucial.

While AI will alter economics, it will also present opportunities for those who can adapt and effectively combine economic thinking with technological proficiency.

Can researchers recruit human subjects online to take surveys anymore?

The experimental economics world is currently still doing data collection in traditional physical labs with human subjects who show up in person. This is still the gold standard, but it is expensive per observation. Many researchers, including myself, also do projects with subjects that are recruited online because the cost per observation is much lower.

As I remember it, the first platform that got widely used was Mechanical Turk (MTurk). Sometime before 2022, the attitude toward MTurk changed: it became known in the behavioral research community that MTurk had too many bots and bad actors. MTurk was not designed for researchers, so perhaps it is not surprising that it did not serve our purposes.

The Prolific platform has had a good reputation for a few years. You have to pay to use Prolific but the cost per observation is still much lower than what it costs to use a traditional physical laboratory or to pay Americans to show up for an appointment. Prolific is especially attractive if the experiment is short and does not require a long span of attention from human subjects.

Here is a new paper on whether supposedly human subjects will remain reliably human in the future: Detecting the corruption of online questionnaires by artificial intelligence

Continue reading

Literature Review is a Difficult Intellectual Task

As I was reading through What is Real?, it occurred to me that I’d like to find a review of an issue. I thought, “Experimental physics is like experimental economics. You can sometimes predict what groups or ‘markets’ will do. However, it’s hard to predict exactly what an individual human will do.” I would like to know who has written a short article on this topic.

I decided to feed the following prompt into several LLMs: “What economist has written about the following issue: Economics is like physics in the sense that predictions about large groups are easier to make than predictions about the smallest, atomic if you will, components of the whole.”

First, ChatGPT (free version) (I think I’m at “GPT-4o mini (July 18, 2024)”):

I get the sense from my experience that ChatGPT often references Keynes. Based on my research, I think that’s because there are a lot of mentions of Keynes’s books in the model’s training data. (See “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics”)

Next, I asked ChatGPT, “What is the best article for me to read to learn more?” It gave me 5 items. Item 2 was “Foundations of Economic Analysis” by Paul Samuelson, which likely would be helpful but it’s from 1947. I’d like something more recent to address the rise of empirical and experimental economics.

Item 5 was: “‘Physics Envy in Economics’ (various authors): You can search for articles or papers on this topic, which often discuss the parallels between economic modeling and physics.” Interestingly, ChatGPT is telling me to Google my question. That’s not bad advice, but I find it funny given the new competition between LLMs and “classic” search engines.

When I pressed it further for a current article, ChatGPT gave me a link to an NBER paper that was not very relevant. I could have tried harder to refine my prompts, but I was not immediately impressed. It seems like ChatGPT had a heavy bias toward starting with famous books and papers as opposed to finding something for me to read that would answer my specific question.

I gave Claude (paid) a try. Claude recommended, “If you’re interested in exploring this idea further, you might want to look into Hayek’s works, particularly “The Use of Knowledge in Society” (1945) and “The Pretense of Knowledge” (1974), his Nobel Prize lecture.” Again, I might have been able to get a better response if I kept refining my prompt, but Claude also seemed to initially respond by tossing out famous old books.

Continue reading

Writing with ChatGPT: Buchanan Seminar on YouTube

I was pleased to be a (virtual) guest speaker for Plateau State University in Nigeria. My host was (Emergent Ventures winner) Nnaemeka Emmanuel Nnadi. The talk is up on YouTube with the following timestamp breakdown:

During the first ten minutes of the video, Ashen Ruth Musa gives an overview called “The Bace People: Location, Culture, Tourist Attraction.”

Then I introduce LLMs and my topic.

Minute 19:00 – 29:00 is a presentation of the paper “ChatGPT Hallucinates Nonexistent Citations: Evidence from Economics.”

Minute 23:30 – 34:00 is a summary of my paper “Do People Trust Humans More Than ChatGPT?”

Continue reading

Human Capital is Technologically Contingent

The seminal paper in the theory of human capital is by Paul Romer. In it, he recognizes different types of human capital, such as physical skills, educational skills, and work experience. Subsequent macro papers in the literature often clumped together various measures of human capital as if it were a single substance. There were many cross-country RGDP-per-capita comparison papers that included determinants like ‘years of schooling’, ‘IQ’, and the like.

But more recent papers have been more detailed. For example, the average biological difference between men and women in brawn has been shown to be a determinant of occupational choice. If comparative advantage holds, then occupational sorting by human capital is the predicted outcome. That is exactly what we see in the data.
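
The sorting logic can be sketched in a few lines. This is a toy illustration I am adding, not a model from any paper cited here: the productivity numbers are hypothetical, chosen so that one worker type is absolutely more productive at both jobs yet the two types still sort into different occupations.

```python
# Toy model of occupational sorting by comparative advantage.
# Worker types differ in productivity across two jobs; each takes
# the job with the higher earnings. All numbers are hypothetical.

wage_per_unit = {"brawn": 1.0, "desk": 1.0}  # same output price in both jobs

productivity = {
    "type_A": {"brawn": 10.0, "desk": 8.0},  # better at both jobs...
    "type_B": {"brawn": 4.0, "desk": 5.0},   # ...but B's edge is in desk work
}

def chosen_job(outputs):
    """Pick the job with the highest earnings for this worker."""
    return max(outputs, key=lambda job: outputs[job] * wage_per_unit[job])

for worker, outputs in productivity.items():
    print(worker, "->", chosen_job(outputs))  # A sorts to brawn, B to desk
```

A technological change that raises the price of desk output (say, setting `wage_per_unit["desk"] = 1.5`) flips type_A into desk work, which is the post's larger point: the value of a given bundle of human capital is contingent on technology.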

Similarly, my own forthcoming paper on the 19th-century US deaf population shows that people with reduced or absent hearing were, on average, less likely to work in management and commercial occupations, or in industries that required strong verbal skills.

Clearly, there are different types of human capital, and they matter differently for different jobs. Technology, in turn, changes which skills are necessary. This post shares some thoughts about how to think about human capital and technology. The easiest way to illustrate the points is with a simplified example.

Continue reading

Sources on AI use of Information

  1. Consent in Crisis: The Rapid Decline of the AI Data Commons

Abstract: General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how consent preferences to use it are changing over time. We observe a proliferation of AI-specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites’ expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI. Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crisis in data consent, foreclosing much of the open web, not only for commercial AI, but non-commercial AI and academic purposes.

AI is taking information out of a commons that was provisioned under a different set of rules and technologies. See the discussion on Y Combinator.

2. “ChatGPT-maker braces for fight with New York Times and authors on ‘fair use’ of copyrighted works” (AP, January ’24)

3. Partly handy as a collection of references: “How Generative AI Turns Copyright Upside Down” by a law professor. “While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don’t fit in any of those categories…”

4. New gated NBER paper by Josh Gans “examines this issue from an economics perspective”

Joy: AI companies have money. Could we be headed toward a world where OpenAI has some paid writers on staff? Replenishing the commons is relatively cheap if done strategically, compared with the money being raised by AI companies. Jeff Bezos bought the Washington Post; it cost a fraction of his tech fortune (about $250 million). Elon Musk bought Twitter. Sam Altman is rich enough to help keep the NYT churning out articles. Because there are several competing commercial models, however, the owners of LLM products face a commons problem: if Altman pays the NYT to keep operating, then Anthropic gets the benefit, too. Arguably, good writing is already under-provisioned, even aside from LLMs.
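
The robots.txt side of the audit in item 1 can be checked mechanically with Python's standard library. The robots.txt text below is a made-up example of the AI-specific clauses the paper describes; GPTBot is the user-agent token that OpenAI's web crawler announces.

```python
# A minimal sketch of the kind of check the audit performs: does a site's
# robots.txt carve out AI crawlers specifically? The robots.txt content
# here is a hypothetical example, not taken from any real site.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

url = "https://example.com/article"
print(rp.can_fetch("GPTBot", url))       # AI crawler is blocked -> False
print(rp.can_fetch("Mozilla/5.0", url))  # ordinary browser UA -> True
```

Run at scale across thousands of domains and compared year over year, checks like this are how the paper measures the "rapid crescendo of data restrictions" it reports.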

GLIF Social Media Memes

Wojak Meme Generator from Glif will build you a funny meme from a short phrase or a single-word prompt. Note that it is built to be derogatory and cruel for sport, and it may hallucinate falsehoods. (See the tweet announcement.)

I am fascinated by this from the angle of modern anthropology. The AI has learned all of this by studying what we write online. Someone can build an AI to make jokes and call out hypocrisy.

Here are GLIFs of the different social media user stereotypes as of 2024. Most of our current readers probably don’t need any captions to these memes, but I’ll provide a bit of sincere explanation to help everyone understand the jokes.

Twitter user: Person who posts short messages and follows others on the microblogging platform.

Facebook user: Individual with a profile on the social network for connecting with friends and sharing content.

Bluesky user: Early adopter of a decentralized social media platform focused on user control.

Continue reading

Is the Universe Legible to Intelligence?

I borrowed the following from the posted transcript. Bold emphasis added by me. This starts at about minute 36 of the podcast “Tyler Cowen – Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth” with Dwarkesh Patel from January 2024.

Patel: We are talking about GPT-5 level models. What do you think will happen with GPT-6, GPT-7? Do you still think of it like having a bunch of RAs (research assistants) or does it seem like a different thing at some point?

Cowen: I’m not sure what those numbers going up mean or what a GPT-7 would look like or how much smarter it could get. I think people make too many assumptions there. It could be the real advantages are integrating it into workflows by things that are not better GPTs at all. And once you get to GPT, say 5.5, I’m not sure you can just turn up the dial on smarts and have it, for example, integrate general relativity and quantum mechanics.

Patel: Why not?

Cowen: I don’t think that’s how intelligence works. And this is a Hayekian point. And some of these problems, there just may be no answer. Like maybe the universe isn’t that legible. And if it’s not that legible, the GPT-11 doesn’t really make sense as a creature or whatever.

Patel (37:43) : Isn’t there a Hayekian argument to be made that, listen, you can have billions of copies of these things. Imagine the sort of decentralized order that could result, the amount of decentralized tacit knowledge that billions of copies talking to each other could have. That in and of itself is an argument to be made about the whole thing as an emergent order will be much more powerful than we’re anticipating.

Cowen: Well, I think it will be highly productive. What tacit knowledge means with AIs, I don’t think we understand yet. Is it by definition all non-tacit or does the fact that how GPT-4 works is not legible to us or even its creators so much? Does that mean it’s possessing of tacit knowledge or is it not knowledge? None of those categories are well thought out …

It might be significant that LLMs are no longer legible to their human creators. More significantly, the universe might not be legible to intelligence, at least of the kind that is trained on human writing. I (Joy) gathered a few more notes for myself.

A co-EV-winner has commented on this at Don’t Worry About the Vase:

(37:00) Tyler expresses skepticism that GPT-N can scale up its intelligence that far, that beyond 5.5 maybe integration with other systems matters more, and says ‘maybe the universe is not that legible.’ I essentially read this as Tyler engaging in superintelligence denialism, consistent with his idea that humans with very high intelligence are themselves overrated, and saying that there is no meaningful sense in which intelligence can much exceed generally smart human level other than perhaps literal clock speed.

I (Joy) took it more literally. I don’t see “superintelligence denialism.” I took it to mean that the universe is not legible to our brand of intelligence.

There is one other comment I found, in response to a short clip posted by @DwarkeshPatel, from YouTuber @trucid2:

Intelligence isn’t sufficient to solve this problem, but isn’t for the reason he stated. We know that GR and QM are inconsistent–it’s in the math. But the universe has no trouble deciding how to behave. It is consistent. That means a consistent theory that combines both is possible. The reason intelligence alone isn’t enough is that we’re missing data. There may be an infinite number of ways to combine QM and GR. Which is the correct one? You need data for that.

I saved myself a little time by writing the following with ChatGPT. If the GPT got something wrong in here, I’m not qualified to notice:

Newtonian physics gave an impression of a predictable, clockwork universe, leading many to believe that deeper exploration with more powerful microscopes would reveal even greater predictability. Contrary to this expectation, the advent of quantum mechanics revealed a bizarre, unpredictable micro-world. The more we learned, the stranger and less intuitive the universe became. This shift highlighted the limits of classical physics and the necessity of new theories to explain the fundamental nature of reality.

General Relativity (GR) and Quantum Mechanics (QM) are inconsistent because they describe the universe in fundamentally different ways and are based on different underlying principles. GR, formulated by Einstein, describes gravity as the curvature of spacetime caused by mass and energy, providing a deterministic framework for understanding large-scale phenomena like the motion of planets and the structure of galaxies. In contrast, QM governs the behavior of particles at the smallest scales, where probabilities and wave-particle duality dominate, and uncertainty is intrinsic.

The inconsistencies arise because:

  1. Mathematical Frameworks: GR is a classical field theory expressed through smooth, continuous spacetime, while QM relies on discrete probabilities and quantized fields. Integrating the continuous nature of GR with the discrete, probabilistic framework of QM has proven mathematically challenging.
  2. Singularities and Infinities: When applied to extreme conditions like black holes or the Big Bang, GR predicts singularities where physical quantities become infinite, which QM cannot handle. Conversely, when trying to apply quantum principles to gravity, the calculations often lead to non-renormalizable infinities, meaning they cannot be easily tamed or made sense of.
  3. Scales and Forces: GR works exceptionally well on macroscopic scales and with strong gravitational fields, while QM accurately describes subatomic scales and the other three fundamental forces (electromagnetic, weak nuclear, and strong nuclear). Merging these scales and forces into a coherent theory that works universally remains an unresolved problem.

Ultimately, the inconsistency suggests that a more fundamental theory, potentially a theory of quantum gravity like string theory or loop quantum gravity, is needed to reconcile the two frameworks.

P.S. I published “AI Doesn’t Mimic God’s Intelligence” at The Gospel Coalition. For now, at least, there is some higher plane of knowledge that we humans are not on. Will AI get there? Take us there? We don’t know.