AI-Powered Social Engineering Is Here, and Most People Aren't Ready

In early 2024, a finance employee at a multinational firm in Hong Kong transferred roughly $25 million to fraudsters after a video call with what appeared to be the company’s CFO and several colleagues. Everyone else on that call was a deepfake. The attackers had used publicly available video footage to create real-time AI-generated avatars convincing enough to hold up in a live conversation.

This isn’t a hypothetical scenario from a security conference talk. It happened. And the technology to pull it off is becoming cheaper and more accessible every month.

How AI Has Changed Social Engineering

Traditional social engineering relied on human skill — a persuasive voice, a well-crafted email, understanding of psychology. The attacker had to be good at pretending to be someone they weren’t. That limited the scale and sophistication of attacks.

AI removes those limitations in several ways:

Deepfake Voice Cloning

Modern voice cloning requires as little as 3-5 seconds of source audio to create a convincing replica. That audio is readily available for most people — voicemail greetings, social media videos, podcast appearances, conference talks. Once cloned, the voice can be used in real-time phone calls that sound virtually identical to the target.

The implications for vishing (voice phishing) are severe. “Hi, this is David from the head office” hits differently when it actually sounds like David.

AI-Generated Phishing at Scale

Large language models can generate personalised phishing emails that are grammatically perfect, contextually relevant, and tailored to individual recipients. Gone are the days when phishing emails were identifiable by broken English and generic greetings.

An attacker can scrape a target’s LinkedIn profile, recent social media posts, and published work, feed that context to an LLM, and generate a highly targeted spear-phishing email in seconds. In controlled testing, IBM’s X-Force found that AI-generated phishing emails achieved click rates close to those of emails crafted by experienced human social engineers, while taking minutes rather than hours to produce.

Automated OSINT

Open-source intelligence gathering — collecting publicly available information about targets — used to be a manual, time-consuming process. AI agents can now automate it, scraping and correlating data across social media platforms, company websites, public records, and data breach databases to build comprehensive target profiles in minutes.

This means attackers can scale reconnaissance across thousands of potential targets simultaneously, identifying the most vulnerable and most valuable ones for focused attacks.

What This Means for Personal Privacy

The intersection of AI social engineering and personal data exposure is where the real risk lies. Every piece of information you’ve made publicly available is now potential ammunition for AI-powered attacks:

  • Your voice from any public recording
  • Your face from social media photos and videos
  • Your writing style from blog posts, social media, and professional publications
  • Your relationships from tagged photos, public interactions, and organisational memberships
  • Your schedule from public calendars, event registrations, and travel posts

None of this information was particularly dangerous five years ago. In the age of AI-powered social engineering, all of it becomes attack surface.

Defending Yourself

Verify Through Independent Channels

The most effective defence against AI social engineering is simple: verify requests through a channel the attacker doesn’t control. If you receive an unexpected call from your “CEO” asking for a wire transfer, hang up and call them back on a number you look up independently. If you get an email from a colleague requesting sensitive data, walk to their desk or send a separate message through a different platform.

This breaks the attacker’s control of the communication channel and is effective regardless of how sophisticated the deepfake technology becomes.

Establish Verification Protocols

For businesses, this means creating formal verification procedures for sensitive requests:

  • Verbal code words that change periodically for high-value transactions
  • Dual authorisation for financial transfers above a threshold
  • Callback verification through pre-established phone numbers for any unusual requests
  • Video call scepticism — if someone requests an unusual action on a video call, verify through a separate channel
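The “verbal code words that change periodically” idea can be implemented without anyone memorising a schedule: both parties derive the current word from a pre-shared secret. A minimal sketch, assuming a hypothetical shared secret and word list, using a TOTP-style HMAC construction from the Python standard library:

```python
import hmac
import hashlib
import struct
import time

# Illustrative word list; a real deployment would use a much larger one.
WORDLIST = ["amber", "birch", "cedar", "delta", "ember", "fjord", "grove", "harbor"]

def code_word(shared_secret: bytes, period: int = 86400, at=None) -> str:
    """Derive the current verbal code word from a shared secret.

    The secret is combined with a time counter (one step per `period`
    seconds, default one day) via HMAC-SHA256, and the digest selects
    a word. Both parties holding the secret compute the same word.
    """
    now = time.time() if at is None else at
    counter = int(now // period)
    digest = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(WORDLIST)
    return WORDLIST[index]
```

Because the word rotates automatically, a phrase captured from one intercepted call is useless the next day; the attacker would need the underlying secret, which is never spoken aloud.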

Organisations working on AI security frameworks increasingly recommend that businesses treat any single communication channel as potentially compromised and build verification redundancy into their processes.

Reduce Your Attack Surface

  • Audit your public information. Search for yourself online and consider what an attacker could piece together. Remove information that doesn’t need to be public.
  • Limit voice and video exposure. This isn’t practical for everyone, but be aware that every public recording of your voice is a potential cloning source.
  • Use privacy settings aggressively. Social media platforms have granular privacy controls. Use them. Default to friends-only for personal posts.
  • Be cautious with social media quizzes and surveys. “What was your first car?” and “What street did you grow up on?” are common security questions, and people happily post the answers.

Technical Measures

  • Email authentication (SPF, DKIM, DMARC) on your domain prevents email spoofing
  • Hardware security keys for critical accounts (as discussed in our previous guide) prevent credential phishing
  • Zero-trust networking assumes every access request is potentially compromised
  • AI-powered email filtering can detect machine-generated content and anomalous communication patterns
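One concrete filtering pattern: phishers often register a lookalike of a trusted domain (“examp1e.com” for “example.com”), so a mail pipeline can flag senders whose domain closely resembles, but does not exactly match, a known-good one. A minimal sketch using the Python standard library; the trusted-domain set and threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Illustrative allow-list; in practice this comes from your mail config.
TRUSTED_DOMAINS = {"example.com", "example.co.uk"}

def lookalike_score(sender_domain: str) -> float:
    """Highest string similarity between the sender's domain and any trusted domain."""
    return max(
        SequenceMatcher(None, sender_domain, trusted).ratio()
        for trusted in TRUSTED_DOMAINS
    )

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not exactly match, a trusted domain."""
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: legitimate
    return lookalike_score(sender_domain) >= threshold
```

This catches only one narrow trick, and production filters combine many such signals, but it illustrates the principle: the check is cheap, deterministic, and independent of how convincing the email body is.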

The Arms Race

Here’s the uncomfortable truth: this is an arms race, and the attackers currently have the advantage. Creating a deepfake is easier than detecting one. Generating a personalised phishing email is easier than reliably filtering one. Automating reconnaissance is easier than protecting against it.

The gap will narrow as detection technology improves, but for the foreseeable future, the best defence is human behaviour: slow down, verify independently, and treat unexpected requests with healthy scepticism regardless of how legitimate they appear.

The technology has changed. The fundamental defence hasn’t: if something feels off, trust that instinct and verify before acting.