AI at the Gate: How Attackers & Defenders Are Using AI in 2025

AI has moved from hype to daily reality in security operations, and attackers are adopting it just as quickly as defenders, in some cases faster. For security leaders, the question is no longer “if” AI matters, but how to harness it without introducing new risks.

How Attackers Are Using AI

Attackers are already using AI models to generate highly convincing phishing content, deepfake audio and video, and tailored lures that bypass traditional awareness training. These tools lower the barrier to entry, enabling less skilled actors to run campaigns that previously required native-language fluency and a serious time investment.

GenAI also helps adversaries accelerate reconnaissance and exploit development. Large language and code models can summarise documentation, search for common misconfigurations and even suggest ways to chain vulnerabilities together, shortening the gap between disclosure and exploitation.

How Defenders Are Fighting Back

On the defensive side, AI is increasingly embedded in Endpoint Detection & Response (EDR), Security Information & Event Management (SIEM), and Network Detection & Response (NDR) platforms to sift through huge volumes of telemetry and surface anomalies that humans would miss. Used well, this gives Security Operations Centres (SOCs) earlier indications of compromise and lets analysts spend more time investigating and less time trawling logs.
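
To make that concrete, here is a minimal sketch of the kind of unsupervised anomaly scoring such platforms apply to telemetry, using scikit-learn's IsolationForest over a handful of made-up authentication features. The feature set, sample values and contamination threshold are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch: unsupervised anomaly scoring over authentication telemetry.
# Feature names and sample values are illustrative; real platforms use far
# richer features and streaming pipelines.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, distinct_hosts_accessed, failed_login_ratio, mb_uploaded]
baseline = np.array([
    [4, 2, 0.05, 10],
    [6, 3, 0.02, 15],
    [5, 2, 0.04, 12],
    [7, 3, 0.03, 20],
    [5, 1, 0.06, 8],
])

# Train on "normal" activity, then score new events.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

new_events = np.array([
    [6, 2, 0.04, 14],     # looks like normal behaviour
    [55, 40, 0.60, 900],  # burst of failures plus bulk upload: likely anomalous
])

for event, label in zip(new_events, model.predict(new_events)):
    verdict = "ANOMALY - raise for analyst review" if label == -1 else "normal"
    print(event, verdict)
```

The value is not the model itself but the routing: low scores stay in the noise, high scores reach an analyst with context attached.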

AI is also being used to automate repetitive tasks, from triaging low-severity alerts to enriching indicators with context from threat intelligence feeds. That automation is essential as attack volumes rise faster than most organisations can hire skilled analysts.
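
As a rough illustration of that kind of automation, the sketch below enriches the indicators on a low-severity alert and applies a simple triage rule. The TI_API_URL endpoint, its JSON response shape and the verdict field are hypothetical placeholders for whichever threat intelligence platform you actually use.

```python
# Minimal sketch of automated indicator enrichment and triage.
# TI_API_URL and the response fields are hypothetical placeholders.
import json
import urllib.request

TI_API_URL = "https://ti.example.internal/api/v1/indicators/"  # placeholder endpoint

def enrich_alert(alert: dict) -> dict:
    """Attach threat-intel context to each indicator on a SIEM alert."""
    enriched = dict(alert, intel=[])
    for indicator in alert.get("indicators", []):
        try:
            with urllib.request.urlopen(TI_API_URL + indicator, timeout=5) as resp:
                context = json.load(resp)
        except OSError:
            context = {"indicator": indicator, "error": "lookup failed"}
        enriched["intel"].append(context)
    # Simple triage rule: only auto-close if nothing came back malicious.
    if all(c.get("verdict") != "malicious" for c in enriched["intel"]):
        enriched["triage"] = "auto-close candidate"
    else:
        enriched["triage"] = "escalate to analyst"
    return enriched

# Example: a low-severity alert carrying one suspicious IP.
print(enrich_alert({"id": "ALRT-1001", "indicators": ["203.0.113.7"]}))
```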

New Risks AI Introduces

The same models that help defenders can leak sensitive data if staff paste internal information, logs or code into third-party tools with unclear retention policies. There is also the danger of over-trusting AI-generated analysis or summaries, especially when models hallucinate or reflect bias in the data they were trained on.

Regulators and insurers are watching closely, and boards are starting to ask who owns AI risk inside the organisation. Security leaders should expect to be asked for a view on AI governance even when the technology is being driven by operational or development teams.

What Security Leaders Should Do Next

For 2026 planning, explicitly add AI-enabled threats to your risk register and threat modelling discussions, especially around social engineering and business email compromise (BEC). Treat deepfake-enabled fraud and automated credential stuffing as realistic tabletop scenarios rather than edge cases.

At the same time, create simple guardrails for staff using GenAI tools (what data can be shared, which tools are approved, how outputs should be verified) and ensure your SOC and incident responders are well trained to use AI-assisted features safely.
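
One lightweight way to enforce the data-sharing guardrail is a pre-submission check that flags obvious secrets or internal identifiers before a prompt leaves the organisation. The sketch below is a minimal example; the regex patterns and the corp.example domain are illustrative assumptions, not a complete data loss prevention control.

```python
# Minimal sketch of a pre-submission guardrail for GenAI tools: flag prompts
# that appear to contain secrets or internal identifiers. Patterns are
# illustrative, not exhaustive.
import re

BLOCK_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Internal hostname": re.compile(r"\b[\w.-]+\.corp\.example\b"),  # placeholder domain
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in BLOCK_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this log from build01.corp.example, token AKIAABCDEFGHIJKLMNOP"
findings = check_prompt(prompt)
if findings:
    print("Blocked: remove sensitive data before using an approved GenAI tool:", findings)
else:
    print("No obvious sensitive data detected; outputs must still be verified.")
```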

Done well, AI becomes another force multiplier for your defenders, not just a new superpower for attackers.

