AI Can Help Catch a Bigger Phish: How Attackers Are Winning with AI

Each year, industry leaders work to uncover trends and insights about cybercrime. The latest crop of reports shows that the human element remains a top issue in data breaches.

What’s different in 2024 is that large language models (LLMs) have transformed how people work, enhancing our ability to brainstorm effectively and quickly, automate routine tasks, analyze data, and much more. And, like any powerful tool, they can be exploited. Cybercriminals increasingly harness artificial intelligence (AI) to commit crimes more efficiently. AI-driven attacks, particularly those involving phishing and social engineering, are becoming more advanced and challenging to identify.

Read on to learn about today’s top cybersecurity attack vectors, how attackers use AI to their advantage, and what steps organizations can take to defend against these threats.

Hooked by Mistakes: The Human Element

In cybersecurity, the human element refers to individuals’ role in incidents and breaches. Whether intentional or accidental, human actions can significantly impact an organization’s security.

The 2024 Data Breach Investigations Report (DBIR) by Verizon highlights the persistent role people play in cybercrime. The report, which analyzed 30,458 security incidents in 2023, 10,626 of them confirmed breaches, found that more than two-thirds (68%) of breaches involved a human element, whether insider errors or social engineering schemes.1

Other recent data points to a parallel rise in identity attacks (phishing, social engineering, and access brokers). According to the Zscaler ThreatLabz Phishing Report, phishing attacks surged 58.2% in 2023 compared to the previous year.2 What’s more, voice phishing (vishing) and deepfake phishing attacks are increasing as attackers leverage generative AI tools to enhance their social engineering strategies.2 And according to the FBI, business email compromise (BEC) accounted for $2.9 billion in losses in 2023, second only to investment scams.3

There are no signs of criminals letting up: deceptive links remain the most common tactic in malicious emails, and phishing is behind 9 out of 10 successful cyberattacks.4 Even seasoned professionals are at risk. In our own organization, one person received an email that appeared to come from an employee requesting a change to their direct deposit account. It was a socially engineered phishing scam that included very specific details. Fortunately, the recipient responded correctly, contacting the alleged sender through another verified channel and discovering that the request was fraudulent.

Casting a Wider Net: The Dual Nature of AI in Cybersecurity

Like many technologies built to help people solve problems, LLMs are also being turned to malicious ends. Attackers increasingly use AI for reconnaissance, gathering the background information about a target needed to craft a convincing phishing email. LLMs then mimic human writing and personalize messages, increasing the likelihood of success. They also let attackers expand their reach by generating phishing messages in other languages.

Some dark LLMs include:

  • WormGPT: Based on GPT-J, it can generate malware, evade security software, and create fake content like fraudulent invoices.
  • FraudGPT: It can create phishing emails and social engineering scripts to trick victims into revealing sensitive information.
  • DarkGPT: It can conduct reconnaissance, identify vulnerabilities, and automate complex attack sequences.

Reeling in Protection: Defenses Against AI Phishing

  1. Education and awareness: Regular training sessions help employees recognize phishing attempts and understand social engineering tactics, building a resilient first line of defense.
  2. Robust email security: Deploy AI- and machine learning-based email security solutions to identify and block phishing attempts before they reach employees’ inboxes. These solutions analyze email content, sender reputation, and user behavior to detect potential threats (a simplified content-analysis sketch follows this list).
  3. Zero-trust security: Implement a zero-trust model to ensure that all users, whether inside or outside the organization, are continuously validated before gaining access to systems and data. This approach minimizes the risk of unauthorized access through stolen credentials and protects sensitive information (see the token-validation sketch after this list).
  4. Regular assessments and updates: Organizations should regularly assess their security posture, update defenses based on the latest threat intelligence, and ensure all systems are patched and up to date.
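
To make the email security point concrete, here is a minimal sketch of the content-analysis piece, using scikit-learn and a tiny, hypothetical labeled dataset. Production email security products combine many more signals (sender reputation, URL and attachment analysis, user behavior); this only illustrates how text classification can flag suspicious wording.

```python
# A toy phishing-text classifier: TF-IDF features + logistic regression.
# The emails and labels below are hypothetical examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Please update your direct deposit details using this link",
    "Attached is the agenda for Thursday's project sync",
    "Reminder: the quarterly report is due next Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

# Word and bigram TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; mail above a tuned threshold would be quarantined.
incoming = ["Action required: confirm your payroll account information"]
print(model.predict_proba(incoming)[0][1])  # estimated probability of phishing
```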

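For the zero-trust point, the sketch below shows per-request token validation, the kind of check a zero-trust gateway applies on every call rather than once at login. It assumes the PyJWT library; the secret, claims, and scope names are hypothetical.

```python
# Validate a signed token and its scope on every request (PyJWT assumed).
import datetime

import jwt  # pip install PyJWT

SECRET = "replace-with-a-managed-key"  # hypothetical; use a key service in practice

def authorize_request(token: str, required_scope: str) -> bool:
    """Deny by default: the token must verify and carry the exact scope."""
    try:
        # Checks the signature and the "exp" expiry claim.
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False  # expired, tampered, or malformed token
    return required_scope in claims.get("scopes", [])

# Example: issue a short-lived token, then authorize two requests with it.
token = jwt.encode(
    {"sub": "alice", "scopes": ["reports:read"],
     "exp": datetime.datetime.now(datetime.timezone.utc)
            + datetime.timedelta(minutes=5)},
    SECRET, algorithm="HS256",
)
print(authorize_request(token, "reports:read"))   # True
print(authorize_request(token, "payroll:write"))  # False
```
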
As AI continues to advance, so will cyber attackers’ tactics. Organizations can stay one step ahead by understanding these threats and taking proactive measures. Investing in education, leveraging AI for defense, and adopting a zero-trust approach are critical steps in safeguarding against the growing menace of AI-driven cyberattacks. Even the best defenses require constant vigilance and adaptation to keep pace with innovation.

  1. Verizon Business, 2024 Data Breach Investigations Report, 2024
  2. Zscaler, ThreatLabz 2024 Phishing Report, 2024
  3. Federal Bureau of Investigation, Internet Crime Report, 2023
  4. Cloudflare, Cloudflare Security Brief, 2024