Long time, no blog! My day job and life have been keeping me busier than ever lately – which I’m so thankful and excited for – but I am glad to be back with a new post today.
Phishing has always been the low-hanging fruit of cybercrime – a carefully worded email, a fake login page, and an unsuspecting click. For years, one of the main defences has been user skepticism: “Would a CEO really send me an email full of typos asking for gift cards?” But in 2025, that old rule of thumb doesn’t hold up anymore. Thanks to artificial intelligence (AI), phishing emails are now clean, convincing, and alarmingly personal.
From Obvious Scams to Polished Professionalism
The days of poorly written scams are over. Attackers now have access to large language models (LLMs) that can draft flawless, context-aware emails in seconds. These tools mimic corporate tone, reference recent events, and even adapt to industry jargon. Instead of raising suspicion, phishing emails now blend seamlessly into inboxes.
Business Email Compromise Meets AI
Business Email Compromise (BEC) was already one of the costliest forms of cybercrime. With AI in the mix, attackers can generate entire threads of professional back-and-forth that look authentic. Imagine receiving a follow-up on a contract negotiation you’ve actually been working on – except the “colleague” is an attacker’s bot. That’s no longer science fiction.
Deepfakes Expand the Attack Surface
AI doesn’t stop at text. We’re now seeing the rise of voice cloning and video deepfakes used in phishing campaigns. If you’ve ever watched The Capture, you’ll know how unsettling it is to see video footage manipulated so convincingly that even trained investigators question reality. What was once a dystopian TV plot is now inching into reality – attackers can fake a Zoom call with your CEO and use it to authorise payments or share sensitive data. In a world where “seeing is believing,” that belief is being weaponised.
Defending Against AI-Enhanced Phishing
The defence playbook needs an upgrade:
- Technical Controls: Implement strong email authentication (DMARC, SPF, DKIM) and anomaly detection systems that can flag unusual communication patterns.
- Awareness Training: Employees need examples of what AI-enhanced phishing looks like – polished, credible, and urgent.
- Verification Protocols: Always confirm unusual requests via a second channel, whether that’s a phone call or in-person check.
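To make the first of those controls concrete, here’s a minimal sketch in Python of the kind of check an anomaly-detection or mail-filtering layer might run: parsing the Authentication-Results header (RFC 8601) that a receiving mail server stamps on inbound messages, and flagging anything where DMARC did not explicitly pass. The sample message and the exact flagging policy are illustrative assumptions – real mail servers may add multiple such headers with richer syntax.

```python
from email import message_from_string

# Illustrative inbound message. In practice this header is added by
# your receiving mail server, not supplied by the sender.
RAW_EMAIL = """\
From: ceo@example.com
To: finance@example.com
Subject: Urgent: contract payment
Authentication-Results: mx.example.com; spf=pass; dkim=fail; dmarc=fail

Please wire the funds today.
"""

def auth_results(raw: str) -> dict:
    """Return {mechanism: verdict} parsed from Authentication-Results."""
    msg = message_from_string(raw)
    header = msg.get("Authentication-Results", "")
    results = {}
    for part in header.split(";")[1:]:  # skip the authserv-id
        part = part.strip()
        if "=" in part:
            mech, _, verdict = part.partition("=")
            results[mech.strip()] = verdict.split()[0]
    return results

def should_flag(raw: str) -> bool:
    """Flag any message where DMARC did not explicitly pass."""
    return auth_results(raw).get("dmarc") != "pass"

print(auth_results(RAW_EMAIL))  # {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}
print(should_flag(RAW_EMAIL))   # True
```

The point isn’t the parsing itself – it’s that authentication failures are machine-readable, so a failed DMARC check on a message claiming to be from your own domain can be quarantined or escalated automatically rather than left to a busy employee’s judgement.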
AI has levelled up phishing, but organisations can still fight back by combining smarter technology with sharper human judgement. If your company hasn’t run a phishing simulation in the last 12 months, now is the time. The scams look different in 2025 – your defences should too.