10 December 2025


By Beth Miller, Field CISO, Mimecast 

For years, the cybersecurity industry has warned that threats are becoming ‘more sophisticated.’ In reality, attackers aren’t getting smarter; they’re simply leveraging new technology to exploit the same long-standing vulnerabilities that remain unpatched and overlooked.

In 2026, those gaps will widen. AI-powered phishing will become nearly impossible to spot, driving up to 90% of breaches through hyper-personalisation. Overwhelmed employees will keep turning to unauthorised AI tools to cope, inadvertently creating a whole new category of insider risk. And security analysts will be buried ever deeper under a relentless flood of alerts, each one a potential threat, as they manually triage, investigate, and close out false positives.

Here’s a breakdown of what's actually changing in 2026, and what organisations should do about it.

Prediction #1: Email will drive up to 90% of cyberattacks

Phishing is evolving, not fading. In 2026, email will remain the primary entry point for cyberattacks and drive up to 90% of breaches as AI further evolves and attackers double down on what works. Phishing incidents have already climbed from 60% to 77% of attacks in the last year, fuelled by AI that makes lures more personalised, fluent, and believable.

As collaboration tools tighten access and monitoring, more day-to-day work is pushed back into email, raising both volume and exposure. Attackers are shifting from spray-and-pray campaigns to highly targeted strikes, impersonating executives and key employees and layering in deepfakes to increase pressure and urgency. Even well-trained, vigilant employees can struggle to spot these attacks, creating a threat landscape where one convincing message can still open the door.

What security leaders should do now

  1. Invest in next-generation email filtering and threat detection that leverages AI to spot subtle, context-aware attacks.

  2. Move beyond annual phishing tests. Deliver ongoing, adaptive training that reflects the latest attack techniques and real-world scenarios.

  3. Strengthen protection of high-value targets like executives and finance teams with enhanced monitoring and safeguards.

  4. Conduct regular incident simulations and tabletop exercises to test and refine response and recovery plans.

  5. Integrate security across all communication platforms, including collaboration tools, ensuring protections extend beyond email and preventing data leakage across channels.

  6. Build a rapid reporting culture that rewards employees for flagging suspicious messages through clear, simple escalation processes.

Prediction #2: Shadow AI supercharges insider threats

As organisations cut headcount and raise productivity expectations, employees are stretched to breaking point. This is a breeding ground for mistakes. Layer on shadow AI, and the risk compounds. In search of shortcuts, employees are adopting unsanctioned AI tools, pasting proprietary data into consumer apps or even training personal models on company information they can take with them when they leave.

The attack surface is expanding faster than most teams can track. By mid-2026, organisations could see ten times as many rogue AI agents as unauthorised cloud apps. Simultaneously, attackers are actively courting insiders and probing outsourced operations in lower-cost regions where controls may be weaker. Organisations need to treat people, AI agents, and access decisions as a single, connected risk surface rather than separate problems.

What security leaders should do now

  1. Move from "trust but verify" to "assume breach and hunt," using AI to proactively detect anomalous behaviour, data exfiltration, and shadow AI activity.

  2. Treat every AI agent as a first-class digital identity that is authenticated, monitored, and governed, and regularly audit for unauthorised tools and abandoned agents.

  3. Address chronic stress with workload management, mental health resources, and flexible work policies. Recognise that well-being is a security imperative.

  4. Deliver ongoing, role-specific security awareness that builds confidence and competence, making employees your strongest line of defence.

  5. Educate every employee on the risks and responsibilities of using AI tools and require explicit authorisation for any new deployments.

  6. Develop "AI orchestrators" - security professionals skilled at auditing, interpreting, and managing autonomous AI agents rather than relying on traditional technical skills alone.

  7. Foster a culture of partnership between humans and AI, where oversight, judgment, and accountability are shared.

Prediction #3: AI is taking over triage

For years, security operations centres (SOCs) have been buried under a relentless flood of alerts - each one a potential threat. Analysts spend hours triaging, investigating, and closing out false positives, only to watch the queue refill. It’s a recipe for burnout, missed signals, and a backlog that never really goes away.

In 2026, this Sisyphean grind finally meets its match. SOCs are leaning on AI agents to pull in context, correlate data across tools, and even resolve routine incidents before a human ever sees a notification. These systems continually learn from each alert, adjusting to emerging threats in real time. What once required days can now be resolved in minutes, allowing security teams to move away from constant firefighting to focus on more strategic, high-value work.
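To make this concrete, here is a minimal, illustrative Python sketch of the kind of triage logic such agents automate: enriching alerts with threat intelligence, correlating related events, scoring risk, and auto-closing routine noise while escalating the rest. Every class, field name, and threshold below is hypothetical, a sketch of the pattern rather than a description of any vendor's product.

# Illustrative only: a minimal triage loop that enriches alerts,
# groups related ones, assigns a simple risk score, and auto-closes
# low-risk noise. All names and thresholds are hypothetical.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Alert:
    alert_id: str
    source: str           # e.g. "email_gateway", "edr", "proxy"
    user: str
    indicator: str        # hash, URL, or sender address
    severity: int         # 1 (low) to 5 (critical)
    status: str = "open"
    context: dict = field(default_factory=dict)

def enrich(alert: Alert, threat_intel: dict) -> Alert:
    """Attach threat-intelligence context to the alert's indicator."""
    alert.context["intel"] = threat_intel.get(alert.indicator, {"known_bad": False})
    return alert

def correlate(alerts: list[Alert]) -> dict[tuple, list[Alert]]:
    """Group alerts that share the same user and indicator."""
    groups = defaultdict(list)
    for a in alerts:
        groups[(a.user, a.indicator)].append(a)
    return groups

def risk_score(group: list[Alert]) -> int:
    """Max severity, boosted if intel flags the indicator or the group is large."""
    score = max(a.severity for a in group)
    if any(a.context.get("intel", {}).get("known_bad") for a in group):
        score += 3
    if len(group) > 3:
        score += 1
    return score

def triage(alerts: list[Alert], threat_intel: dict, auto_close_below: int = 3) -> list[Alert]:
    enriched = [enrich(a, threat_intel) for a in alerts]
    for group in correlate(enriched).values():
        score = risk_score(group)
        for a in group:
            if score < auto_close_below:
                a.status = "closed (auto: low risk)"      # routine noise resolved without an analyst
            else:
                a.status = f"escalated (score {score})"   # human review for high-risk or novel activity
    return enriched

if __name__ == "__main__":
    intel = {"bad-domain.example": {"known_bad": True}}
    demo = [
        Alert("A1", "email_gateway", "alice", "bad-domain.example", 2),
        Alert("A2", "proxy", "alice", "bad-domain.example", 3),
        Alert("A3", "edr", "bob", "benign-tool.example", 1),
    ]
    for a in triage(demo, intel):
        print(a.alert_id, a.status)

In production SOC tooling the same pattern would be backed by machine-learning models and live threat-intelligence feeds rather than a static dictionary, but the division of labour is the same: software clears the routine queue, and analysts review only what scores high or looks novel.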

What security leaders should do now

  1. Deploy AI tools that automatically enrich, correlate, and group alerts, assign risk scores, and trigger responses on their own.

  2. Maintain clear oversight protocols and empower analysts to review and validate AI decisions, especially for high-risk or novel incidents.

  3. Leverage AI to proactively identify risky behaviours and deliver immediate, personalised feedback and training that corrects them before they escalate.

  4. Use AI to generate real-time incident reports and audit trails, simplifying compliance and enriching post-incident analysis.

  5. Train security staff to move beyond technical troubleshooting to become experts at auditing, interpreting, and managing AI-driven tools for strategic risk management.

  6. Stay alert to gaps in AI coverage (like unmonitored endpoints, abandoned tools, or rogue AI agents) and regularly audit systems to ensure critical assets are protected and AI processes work as intended.

  7. Encourage ongoing learning by regularly reviewing and updating AI systems to ensure they remain effective against emerging threats.


Leading through 2026

The challenges are real, but not insurmountable. The organisations that thrive in 2026 will be the ones that embrace the human-AI partnership, protect their people, govern their AI agents, automate what's drowning their teams, and build programmes agile enough to keep pace. 
