21 August 2024 | Categories: feature articles


AI is everywhere, making it more important than ever to ensure your company’s security posture is equal to today’s challenging threat landscape. Assad Arabi, Regional Managing Director Africa and Venture Markets at Trend Micro, discusses the latest AI security developments and how these are being used to protect digital environments.

The adoption of artificial intelligence (AI) tools is happening at a rapid pace. This new wave of technology started with web applications but has more recently evolved to AI-powered devices. The AI PC is set to become an integral part of the work environment, with IDC predicting that by 2027 these devices will account for 60% of all PC shipments worldwide.

The manufacturers of these devices are aiming to tap into the increased use of AI by organisations and their employees. Thanks to the development leaps in generative AI, we’re seeing everyday users harnessing this technology for productivity gains. A recent survey of 16 000 white-collar workers globally found that 96% said AI would be of benefit to their jobs and more than 50% said they used AI weekly at work. Whether it’s shifting repetitive tasks to AI, analysing huge volumes of data or simply drafting an email, AI has proven itself to be a powerful tool in the workplace and employees are most certainly using it.

However, as is the case with any significantly powerful technology, AI also introduces new vulnerabilities and opens up new attack pathways for cybercriminals to leverage. With this in mind, it’s important to approach AI security from two angles: the first being AI for security, the second being security for AI.

AI-enabled security evolves beyond email

The first approach is in no way new to the cybersecurity industry and sees security platforms using AI to mitigate attacks and protect businesses. One example of this is in email security where AI-powered tools have been integrated for more than 10 years. Extended detection and response (XDR) is another example where AI has been present for more than seven years.

However, in the last year, cybersecurity infrastructure has been harnessing the power of generative AI to transform protection. Notable developments have been in the XDR space where generative AI is being used to optimise risk assessments and predict the path of an attack.

Risk assessments are infused with a company’s context, such as the industry it operates in and its historical security data. This helps to build a risk assessment score and also provides a financial estimate of the impact of a potential attack.
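To make the idea concrete, here is a minimal sketch of how a context-weighted risk score with a financial impact estimate might be computed. The factors, weights and cost figures below are entirely hypothetical illustrations, not Trend Micro’s actual model:

```python
# Hypothetical risk-scoring sketch: combine an industry risk baseline with
# signals from a company's historical security data to produce a 0-100 score
# and a rough financial impact estimate. All weights are illustrative.

INDUSTRY_BASELINE = {"finance": 0.8, "healthcare": 0.7, "retail": 0.5}

def risk_score(industry: str, incidents_last_year: int, unpatched_criticals: int) -> float:
    base = INDUSTRY_BASELINE.get(industry, 0.5)
    # Each past incident and unpatched critical vulnerability nudges the score up.
    signal = min(1.0, 0.05 * incidents_last_year + 0.1 * unpatched_criticals)
    return round(100 * (0.6 * base + 0.4 * signal), 1)

def estimated_impact(score: float, annual_revenue: float) -> float:
    # Crude estimate: potential loss scales with the risk score and revenue.
    return round(annual_revenue * (score / 100) * 0.05, 2)

score = risk_score("finance", incidents_last_year=3, unpatched_criticals=2)
print(score)
print(estimated_impact(score, 10_000_000))
```

A production system would draw these inputs from telemetry and actuarial data rather than fixed constants, but the shape of the calculation is the same: context sets the baseline, observed signals adjust it.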

More recently, attack path predictions are supporting companies in improving their security posture. This method uses pattern and behaviour analysis to identify trends and threats that human analysts may miss. These predictions can help streamline incident response, generate detailed reports and suggest optimal remedial actions.
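One simple way to picture attack path prediction is as a graph problem: model assets as nodes, estimate the likelihood of each hop being compromised, and rank the paths an attacker could take to a critical asset. The topology and probabilities below are illustrative assumptions, not a real product’s model:

```python
# Hypothetical attack-path sketch: rank paths from an entry point to a
# critical asset by the product of per-hop compromise likelihoods.
# Edge structure and probabilities are illustrative only.

EDGES = {
    "phishing-email": [("workstation", 0.6)],
    "workstation": [("file-server", 0.4), ("admin-laptop", 0.2)],
    "admin-laptop": [("domain-controller", 0.7)],
    "file-server": [("domain-controller", 0.3)],
}

def paths(node, target, prob=1.0, trail=None):
    """Yield (probability, path) for every acyclic route to the target."""
    trail = (trail or []) + [node]
    if node == target:
        yield prob, trail
        return
    for nxt, p in EDGES.get(node, []):
        if nxt not in trail:  # avoid revisiting assets (cycles)
            yield from paths(nxt, target, prob * p, trail)

ranked = sorted(paths("phishing-email", "domain-controller"), reverse=True)
for prob, trail in ranked:
    print(f"{prob:.3f}  " + " -> ".join(trail))
```

Even in this toy example, the highest-ranked path is not the most obvious one, which is exactly the kind of insight the article says human analysts may miss.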

While AI tools are being embedded and deployed within security platforms to provide organisations with a more resilient and robust security posture, businesses and their security teams need to look within to avoid gaps. This is where the second approach, security for AI, plays an important role.

Achieving security for AI systems

November 2022 marked the start of a monumental shift in AI with the launch of OpenAI’s large language model (LLM) ChatGPT. Since then, a technology once inaccessible to the masses has become ubiquitous. Beyond ChatGPT, more LLMs have been launched to the public, with Google’s Gemini and Meta’s Llama 2 offering easy access to generative AI capabilities.

It is these publicly accessible generative AI tools that can pose a security risk to a company’s private data. Knowing this, it might be tempting for business leaders to enact bans similar to the restrictions placed on social media all those years ago, but this is easier said than done. Many users can access these tools on their personal smartphones, with the likes of Llama 2 already embedded in everyday apps like Instagram and WhatsApp.

To secure these platforms, it’s better to work with employees and provide them with the tools they need to use these LLMs safely and securely. Regular security training should incorporate a segment on AI best practice in the workplace. By establishing clear policies on AI’s use and governance, businesses can build the right foundations for a secure AI-powered workforce.

And while employees may use generative AI tools for their own productivity gains, there’s no doubt criminals are using them too.

LLMs are already aiding cybercriminals and improving the success rates of their fraud tactics. Business email compromise (BEC) is one area where we are seeing vast improvements in phishing emails. With the help of generative AI, cybercriminals are able to draft convincing emails that contain few spelling and grammatical errors.

To combat this, security vendors are using AI to develop and deploy writing style analysis tools. Within the Trend Vision One platform, for example, security teams can train the Writing Style DNA tool on the particular writing style of an executive. Using this information, the platform is able to compare incoming and outgoing communications and flag any suspicious emails.
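The underlying idea can be sketched in a few lines: build a statistical profile of a person’s known writing, then flag new messages whose profile diverges too far. The sketch below uses character-trigram frequencies and cosine similarity as one simple stylometric approach; it is an illustrative assumption, not how Trend Micro’s Writing Style DNA actually works, and the threshold is invented:

```python
# Illustrative writing-style comparison (NOT Trend Micro's actual
# implementation): profile text as character-trigram frequencies and
# flag messages whose cosine similarity to the known profile is low.
from collections import Counter
from math import sqrt

def profile(text: str) -> Counter:
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# In practice the profile would be trained on many of the executive's emails.
known = profile("Please review the Q3 figures and send me your comments by Friday.")
THRESHOLD = 0.3  # hypothetical; a real system would tune this on historical mail

def looks_suspicious(email: str) -> bool:
    # Low similarity to the known writing style suggests possible impersonation.
    return cosine(profile(email), known) < THRESHOLD

print(looks_suspicious("Please review the Q2 figures and send your comments by Monday."))
print(looks_suspicious("URGENT!!! wire $50k now acct 99887!!"))
```

A single reference email makes for a brittle profile; the point is only to show the compare-and-flag mechanism the article describes.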

Beyond BEC, cybercriminals have been able to improve deepfake videos and audio in an attempt to impersonate executives. In response, deepfake detection and protection has become a priority for security vendors. With the help of AI, deepfake detection technology is able to analyse facial movements at the pixel level, as well as voice patterns and other biometric markers, to determine whether a video or audio file is fake.

With the widespread adoption of AI in the workplace, organisations cannot stand still when it comes to their security posture. This latest technological wave continues to change and evolve at an intense pace. Businesses will need to work closely with their security partners to ensure the safety of these AI systems, while also fending off bad actors attempting to use this powerful technology against them.
