Study Shows AI Can Enable Information-Stealing (Phishing) Campaigns

As a computer user, I make a modest effort to stay informed regarding the latest maneuvers by the bad guys to steal information and money. I am on a mailing list for the Malwarebytes blog, which publishes maybe three or four stories a week in this arena.

Here are three stories from the latest Malwarebytes email:

(1) AI-supported spear phishing fools more than 50% of targets. A controlled study reveals that 54% of users were tricked by AI-supported spear phishing emails, compared with just 12% of those who received traditional, generic phishing emails.

(2) Dental group lied through teeth about data breach, fined $350,000. Westend Dental denied a 2020 ransomware attack and associated data breach, telling its customers that their data was lost due to an “accidentally formatted hard drive”. The company agreed to pay $350,000 to settle HIPAA violations.

(3) “Can you try a game I made?” Fake game sites lead to information stealers. Victims were lured to a fake game website, where they were met with an information stealer instead of the promised game.

The first item here fits with our interest in the promise and perils of AI, so I will paste several self-explanatory excerpts below:

One of the first things everyone predicted when artificial intelligence (AI) became more commonplace was that it would assist cybercriminals in making their phishing campaigns more effective.

Now, researchers have conducted a scientific study into the effectiveness of AI supported spear phishing, and the results line up with everyone’s expectations: AI is making it easier to do crimes.

The study, titled Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, evaluates the capability of large language models (LLMs) to conduct personalized phishing attacks and compares their performance with human experts and AI models from last year.

To this end the researchers developed and tested an AI-powered tool to automate spear phishing campaigns. They used AI agents based on GPT-4o and Claude 3.5 Sonnet to search the web for available information on a target and use this for highly personalized phishing messages.

With these tools, the researchers achieved a click-through rate (CTR) that marketing departments can only dream of, at 54%. The control group received arbitrary phishing emails and achieved a CTR of 12% (roughly 1 in 8 people clicked the link).

Another group was tested against an email generated by human experts which proved to be just as effective as the fully AI automated emails and got a 54% CTR. But the human experts did this at 30 times the cost of the AI automated tools.

…The key to the success of a phishing email is the level of personalization that can be achieved by the AI assisted method and the base for that personalization can be provided by an AI web-browsing agent that crawls publicly available information.

Based on information found online about the target, they are invited to participate in a project that aligns with their interest and presented with a link to a site where they can find more details.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

But there is good news as well. We can use AI to fight AI: … LLMs are also getting better at recognizing phishing emails. Claude 3.5 Sonnet scored well above 90% with only a few false alarms and detected several emails that passed human detection, although it struggles with some phishing emails that are clearly suspicious to most humans.
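The article doesn’t show how that kind of LLM-based screening would actually be wired up, but the idea is simple enough to sketch. The snippet below is a minimal illustration, assuming the Anthropic Python SDK and an API key in the environment; the model identifier, prompt, and one-word output format are my own choices for the example, not the study’s actual methodology.

```python
# Minimal sketch of LLM-assisted phishing triage, assuming the Anthropic
# Python SDK ("pip install anthropic") and an ANTHROPIC_API_KEY in the
# environment. The prompt and model name are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def classify_email(subject: str, body: str) -> str:
    """Ask the model whether an email looks like phishing.

    Returns the model's one-word verdict, e.g. 'PHISHING' or 'LEGITIMATE'.
    """
    prompt = (
        "You are an email security filter. Reply with exactly one word, "
        "PHISHING or LEGITIMATE, for the following message.\n\n"
        f"Subject: {subject}\n\n{body}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model identifier
        max_tokens=5,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text.strip().upper()


if __name__ == "__main__":
    verdict = classify_email(
        "Urgent: verify your payroll details",
        "Hi, your direct deposit is on hold. Click http://payroll-update.example to confirm.",
    )
    print(verdict)
```

In practice, a verdict from a language model would be just one signal alongside conventional checks such as sender authentication (SPF/DKIM/DMARC) and URL reputation, not a filter on its own.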

In addition, the blog article cited some hard evidence of year-over-year progress in AI capabilities: a year ago, unassisted AI was unable to match the performance of human-generated phishing messages. But now, AI can match and even slightly exceed the effectiveness of human phishing. This is…progress, I guess.

P.S. I’d feel remiss if I did not remind us all yet again: it’s safest never to click on a link embedded in an email message if you can avoid it. If the email purports to be from a company, go directly to the company’s website and do your business there.
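To make that advice concrete: the text you see on a link and the address it actually opens are completely independent, which is exactly the gap phishing emails exploit. Here is a small, self-contained Python sketch (entirely my own illustration, not from the article) that pulls the anchors out of an email’s HTML body and flags any whose visible text names a different domain than the href really points to.

```python
# Illustrative sketch (not from the article): extract links from an email's
# HTML body and flag ones whose visible text names a different domain than
# the real destination in the href attribute.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collects (href, visible_text) pairs for every <a> tag."""

    def __init__(self):
        super().__init__()
        self.links = []    # finished (href, text) pairs
        self._href = None  # href of the <a> tag currently open
        self._text = []    # text fragments seen inside that tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def looks_deceptive(href: str, text: str) -> bool:
    """True if the visible text mentions a domain that differs from the href's domain."""
    real_domain = urlparse(href).netloc.lower()
    claimed = urlparse(text if "://" in text else "http://" + text).netloc.lower()
    return bool(claimed) and "." in claimed and claimed != real_domain


# Hypothetical phishing snippet for demonstration purposes only.
html_body = '<p>Your account is locked. <a href="http://login.example-evil.ru">www.yourbank.com</a></p>'
parser = LinkExtractor()
parser.feed(html_body)
for href, text in parser.links:
    status = "SUSPICIOUS" if looks_deceptive(href, text) else "ok"
    print(f"{status}: text shows {text!r} but link goes to {href}")
```

Most mail clients will show you the same information if you hover over a link before clicking, which is the low-tech version of this check.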

Malwarebytes Poll: Public Fascination with ChatGPT Has Turned to Suspicion

ChatGPT and related AI have been all the rage these past few months. Among other things, “AI” became the shiny object that companies dangled before investors, sending the shares of the “Magnificent Seven” large tech stocks rocketing upward.

However, a recent poll by computer security firm Malwarebytes notes a marked turn in the public’s attitude towards these products:

It seems the lustre of the chatbot-that’s-going-to-change-everything is starting to fade….

When people explored its capabilities in the days and weeks after its launch, it seemed almost miraculous—a wonder tool that could do everything from creating computer programs and replacing search engines, to writing students’ essays and penning punk rock songs. Its release kick-started a race to disrupt everything with AI, and integrate ChatGPT-like interfaces into every conceivable tech product.

But those that know the hype cycle know that the Peak of Inflated Expectations is quickly followed by the Trough of Disillusionment. Predictably, ChatGPT’s rapid ascent was met by an equally rapid backlash as its shortcomings became apparent….

A new survey by Malwarebytes exposes deep reservations about ChatGPT, with optimism in startlingly short supply. Of the respondents familiar with ChatGPT:

  • 81% were concerned about possible security and safety risks.
  • 63% don’t trust the information it produces.
  • 51% would like to see work on it paused so regulations can catch up.

The concerns expressed in the survey mirror the trajectory of the news about ChatGPT since its introduction in November 2022.

As EWED’s own Joy Buchanan has been pointing out specifically with regard to citations for research papers (here, here, and in the Wall Street Journal), ChatGPT tends to “hallucinate”, i.e., to report things that are simply not true. In the recent working paper “GPT-3.5 Hallucinates Nonexistent Citations: Evidence from Economics”, she warns of the possibility of a vicious spiral of burgeoning falsehoods, in which AI-generated errors introduced into internet content such as research papers are then picked up as “learning” input for the next generation of AI training.

Real-world consequences of ChatGPT’s hallucinations are starting to crop up. A lawyer has found himself in deep trouble after filing an error-ridden submission in an active court case. Evidently his assistant, also an attorney, relied on ChatGPT, which came up with a raft of “citations to non-existent cases.” Oops.

And now we have what is believed to be “the first defamation lawsuit against artificial intelligence.” Talk show host Mark Walters filed a complaint in Georgia which is:

…asking for a jury trial to assess damages after ChatGPT produced a false complaint to a journalist about the radio host.  The faux lawsuit claimed that Mr. Walters, the CEO of CCW Broadcast Media, worked for a gun rights group as treasurer and embezzled funds.

…Legal scholars have split on whether the bots should be sued for defamation or under product liability, given it’s a machine — not a person — spreading the false, hurtful information about people.

The issue arose when an Australian mayor threatened to sue the AI company this year over providing false news reports that he was guilty of a foreign bribery scandal.

Wow.

Thousands of AI experts and others have signed an open letter asking: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter states that “Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” It therefore urges “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4…If such a pause cannot be enacted quickly, governments should step in and institute a moratorium…AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

Oh, the irony. There is reason to believe that our Stone Age ancestors were creating unreal images like this:

whilst stoned on peyote, shrooms, or oxygen deprivation. And here we are in 2023, with the cutting edge in information technology, running on the fastest specially-fabricated computing devices, and we get…hallucinations.