How Well Can You Identify the Many Masks of Social Engineering?

Cybersecurity Awareness Month coincides with Halloween, but that’s no joke. Cybercriminals are refining their techniques, devising more sophisticated tricks to get at your treats (access to your systems and data) and trash your metaphorical front yard by harming your business reputation.

The weakest link in any cybersecurity perimeter is the human element. Even the best AI-enabled defenses can only go so far, such as stopping malware from executing when a well-intentioned team member clicks a compromised link in a phishing email. However, that’s the problem: malware is becoming less of a concern.

In 2024, 79% of detections involved malware-free attacks, in which cybercriminals played a hands-on role in infiltrating systems.1 They’re wearing social engineering techniques like costumes to boost their credibility and trick privileged users into opening the door for them. That raises the question: do you know how to identify these types of attacks?

A Phish by Any Other Name Is Just as Slimy

Knowledge workers at any level should already be familiar with basic social engineering tactics. Phishing is the practice of sending fraudulent emails to get victims to click on a compromised link. Then there’s spear phishing, which targets specific individuals using content that’s personalized to them, designed to create a false sense of trust. A recent report highlighted that spear phishing accounted for less than 0.1% of attacks but led to 66% of successful breaches.2

But generative AI (GenAI) is changing the game, and not for the better. Imagine if your typical trick-or-treater wasn’t just donning a store-bought mask but had an entire professional makeup and prosthetics team at their disposal. When you answer the door and are greeted not by a costumed neighbor asking for candy but by someone who looks and sounds convincingly like your co-worker, you’re more inclined to invite them inside. That’s the new reality made possible by GenAI.

For example, attackers who engage in voice phishing, or vishing (the same phishing tactic carried out over a phone call instead of an email), are now using GenAI to leave legitimate-sounding voice messages on their victims’ phones. They can also use GenAI to impersonate whoever they want on a voice or video call in real time. The rate of vishing attacks increased by 442% between the first and second half of 2024.1

The Call Is Coming from Inside the House

A perhaps lesser-known tactic called business email compromise (BEC) occurs when attackers impersonate a high-level executive over email and attempt to convince team members to transfer money or sensitive data. As you might imagine, GenAI is taking this tactic to a whole new level. In 2024, a group of attackers posing as high-ranking execs at a multinational firm jumped on a video call with a finance worker and convinced him to wire out US$25.6M.3 The attackers used live deepfake recreations to impersonate everyone on the call.

It’s not just the bigwigs getting the doppelganger treatment. In early 2025, CrowdStrike reported on the DPRK-originated Famous Chollima threat, in which attackers use GenAI to create fake LinkedIn profiles of job applicants for remote IT positions at companies all over the globe.1 They even use real-time deepfakes during job interviews to secure these positions and then funnel money and access to the North Korean regime.

But wait, there’s more. Another emerging tactic is help desk social engineering. Threat actors call an organization’s IT help desk posing as legitimate employees, usually asking for a password reset or a reset of whatever multifactor authentication (MFA) mechanism the company uses. Once those are reset, the attackers can set up their own backdoors and apply filters to the victim’s inbox that hide their activity, letting them keep actively using the account while the victim has no idea anything is happening. Now your trick-or-treater is camped out in your basement without your knowledge, dipping into your pantry and letting others in the back door, and all you notice are missing leftovers and a slight uptick in the monthly utility bill.
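
One practical countermeasure is to audit mailboxes for exactly the kinds of rules attackers plant after a takeover. The snippet below is a minimal sketch, assuming a Microsoft 365 environment: it lists a mailbox’s inbox rules through the Microsoft Graph API and flags any that forward, redirect, or silently delete mail. The GRAPH_TOKEN environment variable and the permission setup are illustrative assumptions, not a turnkey tool.

```python
"""Sketch: flag inbox rules of the sort planted after a help desk reset.
Assumes a Microsoft 365 mailbox and a Graph API access token with the
appropriate mail/mailbox-settings read permission."""
import os

import requests

GRAPH_RULES_URL = "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messageRules"


def audit_inbox_rules(access_token: str) -> list[dict]:
    resp = requests.get(
        GRAPH_RULES_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    findings = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        # Forwarding, redirecting, or auto-deleting mail are the classic
        # signs of a compromised inbox being quietly monitored.
        if actions.get("forwardTo") or actions.get("redirectTo") or actions.get("delete"):
            findings.append({"name": rule.get("displayName"), "actions": actions})
    return findings


if __name__ == "__main__":
    # GRAPH_TOKEN is a placeholder; acquire a real token via your identity setup.
    for finding in audit_inbox_rules(os.environ["GRAPH_TOKEN"]):
        print("Suspicious rule:", finding)
```

Running a check like this on a schedule, alongside callback verification for MFA resets, gives the scenario above a much smaller window to go unnoticed.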

What’s Blood Got to Do with It?

This goes beyond getting savvy about not clicking links in suspicious emails. How do you know the person you’re talking to is real, whether on the phone or on a video call? Not only does everyone need to be persistently alert and a pro at identifying fraudulent communication, but they also need more advanced tools and safeguards to help spot what the naked eye can’t see.

Deepfake detection is an evolving technology. Cybersecurity companies are developing AI that actively scans media for unnatural speech or visual glitches, like mismatched lip syncing, to flag potential deepfakes. The attention to detail gets deeply granular and a little macabre. For example, the semiconductor manufacturer Intel has been working on techniques to evaluate video footage by measuring the rate of blood flow under a person’s facial skin.4 But the process proved somewhat ineffective when applied to real-world footage, flagging several authentic videos as fakes.
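
To make the blood flow idea concrete, here’s a toy sketch of the underlying signal, known as remote photoplethysmography: skin color in genuine video fluctuates faintly with the pulse, so the green channel of a face region should carry a dominant frequency in the human heart-rate band. This illustrates the concept only and is not Intel’s actual method; the fixed face box and video path are placeholder assumptions.

```python
"""Toy remote-photoplethysmography check: how much of the color signal's
spectral power sits in the human heart-rate band (~0.7-4 Hz)?"""
import cv2
import numpy as np


def pulse_band_strength(path: str, face_box=(100, 100, 200, 200)) -> float:
    x, y, w, h = face_box  # assumed static face region, for simplicity
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean green-channel intensity over the face region, one value per frame
        samples.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    if len(samples) < 2:
        return 0.0
    sig = np.asarray(samples) - np.mean(samples)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    total = power.sum()
    if total == 0:
        return 0.0
    band = (freqs >= 0.7) & (freqs <= 4.0)  # roughly 42-240 beats per minute
    # Under idealized conditions, real faces concentrate more power here
    # than synthetic ones; production detectors are far more sophisticated.
    return float(power[band].sum() / total)
```

A production system would track the face frame by frame, denoise the signal, and fuse many such cues, which is part of why real-world footage trips up simpler versions of the technique.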

Self-Defense Against Social Engineering

Unfortunately, until the technology gets better, we lack foolproof mechanisms to keep from getting fooled. The best defense is several defenses layered together: awareness of the tactics threat actors use, a cultivated suspicion toward every form of communication, and advanced AI tools that give you a sixth sense for everything you can’t see or hear in a deepfake. Organizations can start ramping up their countermeasures and employee training with these pillars in mind. Or you can get really good at reading the blood circulation in your boss’s cheek muscles.

  1. CrowdStrike, CrowdStrike 2025 Global Threat Report, March 2025.
  2. Barracuda, 2023 spear-phishing trends, May 2023.
  3. CNN, Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, February 2024.
  4. BBC News, Inside the system using blood flow to detect deepfake video, July 2023.