  • May 4, 2026

Recently, we came across a thought-provoking article about how cyber attacks are changing as we move into 2026. What stood out wasn’t a new software vulnerability or a complex technical exploit — it was the idea that the most successful breaches are now more likely to exploit trust than technology.

With artificial intelligence advancing rapidly, attackers no longer need to “break in” the traditional way. Instead, they’re learning how to sound convincing, look familiar, and behave in ways that feel normal. And that changes everything.

From Human Error to AI-Driven Deception

Social engineering has always relied on human behaviour. Emails that create urgency, phone calls pretending to be from IT, or messages that look like they came from a colleague have all been used for years. What’s changing now is not the idea — it’s the quality, speed, and scale.

AI allows attackers to generate convincing messages instantly. Emails are well written, context-aware, and tailored to the recipient. Messages can reference real projects, job roles, or recent activity. Voice calls sound natural and confident. None of this requires deep technical skill anymore — it can be automated.

The result is that attacks feel less like “scams” and more like everyday business communication.

When Phishing Becomes Personal — at Scale

In the past, highly targeted attacks took time. Attackers had to manually research individuals, which limited how many people they could realistically target. AI removes that limitation.

Today, attackers can automatically gather information from public sources such as social media, company websites, and online profiles. That information is then used to craft messages that feel personal — but delivered at the scale of traditional mass phishing.

This is why phishing no longer feels generic. An email can be sent to thousands of people, yet still feel like it was written just for you.

The Rise of Autonomous Attacks

A major shift underway is the emergence of agentic AI — systems that can operate with minimal human oversight.

In practical terms, this means AI tools that can:

  • Identify suitable targets
  • Choose the most effective communication method
  • Adapt language and tone mid-interaction
  • Follow up if the first attempt doesn’t work

Instead of a single email, a target might experience a sequence: an email, then a message, then a call — all reinforcing the same story. This turns social engineering into an ongoing interaction rather than a one-off attempt.

Deepfakes Change the Game

AI-generated voice and video are no longer experimental. They are already being used in real-world attacks, with significant financial consequences.

In 2024, attackers attempted to impersonate the CEO of WPP. The attempt combined a cloned voice, a fake WhatsApp account, and publicly available video footage to make the interaction feel legitimate. While the attack was unsuccessful, it demonstrated how quickly multiple AI techniques could be combined to mimic a real executive interaction.

Around the same time, a similar approach was used successfully against the Hong Kong branch of a multinational company. Staff believed they were participating in a genuine video call with senior leadership. The faces looked right, the voices sounded familiar, and the instructions appeared reasonable. As a result, the company reportedly lost around $25 million after employees followed what they believed were legitimate directions. No malware was involved. No systems were hacked. Trust alone was exploited.

This risk is increasing as AI tools improve. In late 2025, OpenAI released Sora 2, a video generation system described as more realistic and more controllable than earlier versions. While tools like this have many positive and creative uses, they also make it easier to produce convincing fake videos. As realism improves, it becomes increasingly difficult for people to rely on sight or sound alone to confirm identity.

What makes deepfake attacks particularly dangerous is that they align with normal workplace behaviour. People trust familiar faces, respond to authority, and act quickly when a request sounds urgent or reasonable. When those instincts are exploited at scale, even well-trained staff can be caught off guard.

Why Psychology Matters More Than Ever

Social engineering has always worked because humans are wired to trust. We respond to familiarity, authority, urgency, and emotional connection. AI doesn’t change that — it enhances it.

Instead of a single pressure tactic, AI-driven attacks can build rapport over time. They can mirror communication styles, reinforce emotional connection, and gradually steer decisions. In some cases, victims may even defend the fake persona because the interaction feels consistent and real.

This marks a shift from “take it or leave it” scams to relationship-based manipulation.

The Browser Becomes the New Front Line

Email has long been the main entry point for phishing, but that’s changing. Increasingly, attacks are happening directly in the browser.

We’re seeing:

  • Fake login pages that closely match real services
  • Poisoned search results that lead to malicious sites
  • Fake verification steps or CAPTCHAs that trick users into running code

Because these attacks happen in a familiar browser window, they often feel safer than suspicious emails — which makes them more effective.

What This Means for Businesses

The uncomfortable truth is that cybersecurity can no longer rely on people spotting obvious scams.

In a world of AI-assisted deception:

  • Attacks will look professional
  • Messages will feel relevant
  • Voices and videos will sound authentic

Defence needs to assume that trust can be manipulated.

A CSB Perspective

At CSB, we believe the future of cybersecurity is not about expecting people to be perfect. It’s about designing systems that reduce the impact when trust is exploited.

That means:

  • Strong identity and access controls
  • Multi-factor authentication wherever possible
  • Device-level protection
  • Awareness that reflects how modern attacks actually look

AI is changing how attackers operate — but it also gives defenders new tools. Understanding this shift is the first step toward building resilience in a world where trust itself has become the attack surface.
