From Data Lakes to Deepfakes: AI Security Trends at Black Hat USA 2024

Blackhat Stage

Photo credit Black Hat

Over 20,000 attendees braved the Las Vegas summer heat for Black Hat USA 2024 to learn new skills, hear from security researchers, and see the latest solutions from hundreds of cybersecurity vendors. For the second year in a row, AI was a major topic of conversation—in the keynote, sessions, and business hall. With another year of generative AI maturity, this year’s discussions focused on three major themes: protecting your AI usage, rising AI-powered threats, and using AI to augment security tools and staff.

Protecting Your AI Usage

More than one Black Hat speaker pointed out that as companies rush to integrate AI into every facet of their operations, some have overlooked the security risks involved. By granting AI systems access to vast data lakes, businesses inadvertently open themselves up to potential data exposure. The rise of AI developer services has made it easier than ever to train models and build AI-powered applications, but this convenience comes at a cost. Nation-state threat actors heavily invest in AI, create their own models, and engage in activities like poisoning datasets and manipulating algorithms to sabotage their adversaries.

Attackers are increasingly publishing AI models with embedded backdoors across the internet, hoping to gain access to corporate networks and data via unwitting employee downloads. Even when adopting vetted, well-known AI platforms, organizations are inviting risk by not hardening these systems that frequently connect back to the internet. Many AI platforms also lack adequate logging, API controls, or authentication measures necessary for secure usage.

To protect internal AI usage, organizations must adopt a proactive security approach. IBM’s session speaker recommended implementing red teaming exercises tailored to the specific threats your environment faces, educating security teams about AI-specific vulnerabilities by referring to resources like the OWASP Top 10 for Large Language Model Applications and OWASP Machine Learning Security Top 10 threats. In addition, traditional application security practices must be adhered to, as the apps and APIs used for AI are also vulnerable.

Rising AI-Powered Threats

According to Accenture security researchers, this futuristic emulation technology used for deepfakes has evolved significantly over the past two years, creating more believable attacks with a lower barrier to entry. From turnkey solutions to deepfake-as-a-service offerings, all an attacker needs is money and good source material to make a convincing deepfake.

Advances in the AI models that power deepfakes have lowered costs. Face swap technology that supports head movement is available for around $2,500. Only minutes of high-quality video or audio are needed to create a decent deepfake. Accenture presenters reported that executives are frequently targeted, and their recordings can be easily found online.

What are attackers doing with deepfakes? They’re commonly used as part of social engineering campaigns. Simple voice deepfakes have been used to get multi-factor authentication tokens from the deepfake victim’s coworkers to gain access to systems.

More sophisticated deepfakes are being used by nation-state actors to get IT or software engineering jobs that grant them access to intellectual property. Security vendor KnowBe4 shared just weeks ago that they were tricked into hiring a North Korean threat actor using the stolen identity of a U.S. citizen. CrowdStrike reported on the increase in this type of attack, sharing that threat actors receive company laptops and credentials, allowing them to exfiltrate data using remote monitoring and management software.

In addition, multiple security vendor speakers mentioned seeing an increase in identity-based attacks like credential stuffing, which may or may not be assisted by AI reconnaissance.

Augmenting Security Tools with AI

While security vendors have cited AI and machine learning as part of their threat detection capabilities for years, generative AI is now taking center stage. From suggesting fixes to vulnerable code and chatbots that help security analysts be more effective, generative AI was on display at many Black Hat booths.

Better defenses and rapid threat response are the primary goals of AI augmentation. Real-time asset detection makes sure nothing goes unprotected. AI pattern detection can quickly identify zero-day threats or increasingly convincing phishing emails to stop attacks before they inflict damage. AI chatbots that support natural language queries make it easier for security analysts to make the most of security tools that can sometimes be complicated to operate, helping them work faster.

Expect all three facets of AI to continue to evolve, especially as attackers find new ways to wield and exploit them. In the meantime, education is a powerful defense. Teach your employees and customers how to spot social engineering attempts, use AI tools responsibly, and be mindful of what they share online.

While Black Hat USA 2025 dates haven’t yet been announced, you can find other upcoming events on their website.

  1. 1. EY, The European Union Artificial Intelligence Act, Feb. 2024