How AI Is Changing the Phishing Threat Landscape for Australian Businesses


Phishing has been around since the mid-1990s. For most of that history, it’s been relatively crude — mass-sent emails with obvious grammatical errors, generic greetings, and suspicious links that anyone paying attention could spot. Security awareness training could reasonably focus on teaching people to look for those red flags.

That era is over. AI-generated phishing is a fundamentally different threat, and Australian businesses need to adjust their defences accordingly.

What AI Phishing Actually Looks Like

The old stereotype of phishing — a Nigerian prince email riddled with spelling mistakes — has been replaced by something far more dangerous. Large language models can produce polished, contextually appropriate messages that are virtually indistinguishable from legitimate business communication.

More concerning is the personalisation. AI tools can scrape publicly available information about a target — LinkedIn profiles, company websites, social media posts, published articles — and craft messages that reference real projects, real colleagues, and real events. A phishing email that mentions your actual client by name, references a project you posted about on LinkedIn last week, and appears to come from a colleague’s email address is extraordinarily difficult to detect.

The Australian Cyber Security Centre (ACSC) reported a 67% increase in reported phishing incidents targeting Australian businesses in 2025, with a notable shift toward highly targeted, AI-enhanced attacks on medium-sized businesses — companies large enough to have valuable data but often too small to have dedicated security teams.

Spear Phishing at Scale

The traditional distinction between phishing (mass attacks) and spear phishing (targeted attacks) is dissolving. Previously, crafting a convincing spear phishing email required significant manual research and effort. It was typically reserved for high-value targets — executives, finance teams, system administrators.

AI has collapsed the cost of personalisation. What used to take an attacker hours of research and careful email crafting now takes minutes. The result is spear phishing at scale — highly personalised attacks sent to dozens or hundreds of employees simultaneously, each email tailored to the individual recipient.

For Australian SMBs, this changes the threat model entirely. You no longer need to be specifically targeted to receive a sophisticated phishing attempt. The economics of attack have shifted enough that your accounts payable clerk is a viable target for a personalised attack.

Voice and Video Phishing

Text-based phishing is just the beginning. AI-generated voice (vishing) and video (deepfake) attacks represent the next wave.

Voice cloning technology has reached the point where a few seconds of sample audio — easily obtained from a conference presentation, podcast appearance, or corporate video — can generate convincing voice replicas. Attackers are using these to impersonate executives in phone calls requesting urgent wire transfers.

Several Australian businesses have already been victimised by AI voice phishing. The attacks typically follow a pattern: a finance team member receives a call that sounds exactly like the CEO, requesting an urgent payment to a new supplier. The urgency, the familiar voice, and the authority of the supposed caller combine to override normal verification procedures.

Some organisations have begun deploying AI-based detection tools that analyse incoming communications for synthetic voice markers and anomalous patterns, essentially using AI to fight AI. It's an arms race, and staying ahead requires continuous investment.

Why Traditional Defences Aren’t Enough

Email filtering, while still essential, catches a decreasing percentage of AI-generated phishing. Traditional filters rely on known indicators — blacklisted domains, suspicious attachments, keyword patterns, sender reputation scores. AI-generated phishing deliberately avoids these markers.

The emails come from compromised legitimate accounts or convincing lookalike domains. They don’t contain malware attachments — instead, they direct recipients to credential harvesting pages that mirror legitimate login portals. The language is clean and professional. The standard red flags simply aren’t there.
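One pattern that can still be caught mechanically is the lookalike domain. The following is a toy illustration, not a production detector: the trusted-domain list, the threshold of 0.85, and the domain examplecorp.com.au are all invented for the example, and real tools use more robust techniques (homoglyph tables, registration-age checks) than simple string similarity.

```python
from difflib import SequenceMatcher

# Hypothetical list of domains this organisation trusts.
TRUSTED_DOMAINS = {"examplecorp.com.au"}

def lookalike_score(sender_domain: str, trusted: set) -> float:
    """Return the highest string similarity between the sender's
    domain and any trusted domain (0.0 to 1.0)."""
    return max(SequenceMatcher(None, sender_domain, t).ratio() for t in trusted)

def is_suspicious(sender_domain: str, trusted: set = TRUSTED_DOMAINS,
                  threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not match, a trusted domain."""
    if sender_domain in trusted:
        return False  # exact match to a trusted domain: not a lookalike
    return lookalike_score(sender_domain, trusted) >= threshold
```

A domain like examplec0rp.com.au (zero substituted for the letter o) scores very close to the trusted domain without matching it, which is exactly the signature of a lookalike registration.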

Security awareness training faces a similar problem. Teaching people to look for grammatical errors, generic greetings, and a manufactured sense of urgency was effective against old-style phishing. Against AI-generated attacks that use perfect grammar, personal details, and plausible scenarios, those traditional training markers fail.

What Actually Works

Defending against AI-enhanced phishing requires a layered approach that doesn’t rely primarily on human detection.

Multi-factor authentication (MFA) remains the single most effective defence. Even if an employee clicks a phishing link and enters their credentials, MFA prevents the attacker from accessing the account. Hardware security keys (like YubiKeys) are significantly more resistant to phishing than SMS or app-based codes.
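App-based codes are typically time-based one-time passwords (TOTP, RFC 6238). A minimal sketch of how a verifier computes one, using only the Python standard library (the base32 secret below is the standard RFC test secret, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, at=None) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of timesteps since the Unix epoch.
    counter = int((time.time() if at is None else at) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)
```

The code illustrates why TOTP is phishable: the six digits are valid for anyone who holds them during the timestep, so an attacker who captures a code on a fake login page can replay it in real time. Hardware keys resist this because the browser binds the authentication to the genuine site's origin, which a lookalike page cannot forge.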

Email authentication protocols — SPF, DKIM, and DMARC — help prevent domain spoofing. They don’t stop all phishing, but they make it harder for attackers to impersonate your domain when targeting your clients, partners, or employees.
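In practice these protocols are DNS TXT records published for your domain. A hedged illustration, with a hypothetical domain, DKIM selector, and mail provider; the right values depend entirely on who sends mail on your behalf:

```text
; Example DNS TXT records for a hypothetical example.com.au
; SPF: only the listed provider may send mail for this domain
example.com.au.                      IN TXT "v=spf1 include:spf.protection.outlook.com -all"
; DKIM: public key for verifying message signatures (key elided)
selector1._domainkey.example.com.au. IN TXT "v=DKIM1; k=rsa; p=<public-key>"
; DMARC: quarantine failures and send aggregate reports
_dmarc.example.com.au.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com.au"
```

A common rollout path is to start DMARC at p=none to collect reports, then tighten to quarantine or reject once legitimate senders are accounted for.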

Out-of-band verification should be mandatory for financial transactions. Any request involving money — wire transfers, invoice payments, changes to payment details — should be verified through a separate communication channel. If you receive an email requesting a payment, call the requester on a known number (not one provided in the email) to confirm.

AI-powered email security tools like Abnormal Security or Proofpoint’s adaptive analysis use machine learning to detect anomalous communication patterns rather than relying on known threat signatures. They analyse writing style, sender behaviour, and request patterns to flag suspicious messages.

The Australian Regulatory Context

Australian businesses have obligations under the Privacy Act 1988 to protect personal information, and the Notifiable Data Breaches scheme requires reporting breaches that are likely to cause serious harm. A successful phishing attack that exposes customer data triggers these obligations.

The ACSC’s Essential Eight mitigation strategies provide a practical framework for baseline cyber security. MFA, application patching, and restricting administrative privileges — three of the Essential Eight — directly reduce phishing risk.

For businesses in regulated industries — financial services, healthcare, critical infrastructure — additional regulatory requirements may apply. The Australian Prudential Regulation Authority (APRA) and the Australian Securities and Investments Commission (ASIC) have both issued guidance on cyber resilience expectations.

The Uncomfortable Truth

No combination of technology and training will reduce phishing risk to zero. AI is improving attacks faster than defences can adapt, and the economics of cybercrime ensure that attackers remain well-resourced and motivated.

The goal isn’t perfection. It’s making your business hard enough to compromise that attackers move on to easier targets. In cybersecurity, being harder to attack than the business next door is often the most practical form of protection.

Invest in the basics. MFA everywhere. Email authentication. Verification procedures for financial transactions. Regular training that reflects the actual threat landscape, not the one from five years ago. These aren’t exciting measures, but they’re the ones that actually reduce risk.