Reshaping the Threat Landscape: Deepfake Cyberattacks
15 November 2022
By Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa
There’s no doubt that the cyber threat landscape is constantly evolving. With new technologies and trends emerging all the time, it can be hard to keep up. But one growing trend that organisations need to be increasingly aware of is deepfake technology and the potential for deepfake cyberattacks.
Deepfakes are realistic computer-generated images, audio or videos that are designed to mimic real people. And while they may seem harmless at first glance, they can actually be used for malicious purposes. For example, a deepfake video could be created with the intention of spreading false information or causing damage to someone’s reputation. They are also used by criminals to create “employees” for remote-work positions in order to gain access to corporate information.
Deepfakes are now a real and present danger to businesses and individuals alike. As deepfake technology becomes more sophisticated, the risk of deepfake cyberattacks will only increase.
Types of deepfake threats
There are three primary types of deepfake threats: misinformation, cyberbullying, and fraud.
Misinformation is the most common type of deepfake threat. It occurs when someone creates and disseminates false information with the intention of misleading others. This can be done for political gain, to sow discord, or simply to cause chaos. In some cases, deepfakes have been used to create fake news stories that spread quickly and widely on social media.
Cyberbullying is another type of deepfake threat. It occurs when someone creates a deepfake of someone else in order to humiliate or harass them. This can be done by superimposing someone’s face onto an embarrassing or explicit scene.
Fraud is the third type of deepfake threat. It occurs when someone uses a deepfake to commit financial fraud or identity theft. Within organisations, deepfake videos of senior executives could be the video equivalent of phishing and whaling attacks, and can be used to trick victims into divulging sensitive corporate and personal details or to make direct money transfers to scammers.
Readily available tools to create deepfakes
Advances in technology mean that creating deepfakes has become almost effortless. The software is available to the mainstream public – often as free, easy-to-use apps on their phones – resulting in more and more deepfakes appearing globally.
Beyond the free apps, there are paid-for platforms that allow for even more advanced deepfake creation, enabling threat actors to pull off increasingly daring deepfake attacks.
Deepfake detection will become much harder
Detecting deepfakes is also becoming much harder, because the technology used to create them has grown more sophisticated.
There are some ways to detect deepfakes, but none of them are foolproof. For example, you may be able to spot a fake video if the person’s mouth movements do not match up with the audio, or if the footage simply looks “too good to be true”. However, as deepfake technology continues to advance, these telltale signs will become harder and harder to spot.
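To make the lip-sync cue concrete, the sketch below shows one naive way such a check could be automated. It is purely illustrative: the feature names (mouth_openness, audio_energy) are hypothetical stand-ins for signals a real pipeline would extract with a face-landmark detector and an audio analyser, and a production deepfake detector would be far more sophisticated.

# Minimal illustrative sketch: a naive lip-sync consistency check.
# Assumes you already have two aligned per-frame signals (hypothetical,
# not described in the article): mouth_openness[i] from a face-landmark
# detector and audio_energy[i] from the audio track.
import numpy as np

def lip_sync_score(mouth_openness: np.ndarray, audio_energy: np.ndarray) -> float:
    """Return the correlation between mouth movement and speech energy.

    Genuine talking-head footage tends to show a clear positive correlation;
    a very low or negative score is one (weak) signal that the video and
    audio may not belong together.
    """
    mouth = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    audio = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    return float(np.mean(mouth * audio))

# Toy example with synthetic data standing in for real extracted features.
rng = np.random.default_rng(0)
speech = np.abs(np.sin(np.linspace(0, 20, 300)))        # pretend audio energy
genuine_mouth = speech + 0.2 * rng.normal(size=300)     # mouth follows speech
mismatched_mouth = rng.random(300)                      # unrelated movement

print("genuine   :", round(lip_sync_score(genuine_mouth, speech), 2))
print("mismatched:", round(lip_sync_score(mismatched_mouth, speech), 2))

On real footage the two input signals would of course have to be extracted and time-aligned first, and a single score like this would only ever be one weak signal among many.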
Preparing your organisation against deepfake threats
Deepfakes alone, according to Forrester, cost businesses over a quarter of a billion dollars in 2020, and the strategies to defend against deepfake cybercrime are still evolving. Researchers at the University of California, Berkeley, have built an AI detection system that can determine whether a video is a deepfake based on cues such as facial movements, tics and expressions. But commercial applications of these systems are still far off.
In the interim, until better systems are in place to detect and prevent these attacks, organisations and their employees need to remain aware of the dangers of deepfakes and take steps to protect themselves. Keeping data and systems secure, training employees on accounting controls that require multiple stakeholders to sign off before any payment is made, and reminding them to remain sceptical of online content can all mitigate some of the risks posed by deepfake attacks upfront. One of the best defences is teaching people not to trust anything they were not expecting, even if it looks and sounds legitimate – particularly if it has to do with payments or money transfers.
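As a purely illustrative sketch of the multi-stakeholder sign-off idea mentioned above, the snippet below shows a toy “four-eyes” payment check. The class name, field names and approver addresses are hypothetical and are not drawn from any real accounting system.

# Illustrative sketch only: a toy "four-eyes" payment approval control.
# All names here are hypothetical examples, not a real accounting system.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        self.approvals.add(approver)

    def can_execute(self, required_approvers: int = 2) -> bool:
        # Require sign-off from several distinct people, so a single
        # convincing deepfake call or video cannot authorise a transfer.
        return len(self.approvals) >= required_approvers

request = PaymentRequest(payee="ACME Supplies", amount=250_000)
request.approve("cfo@example.com")
print(request.can_execute())   # False – one approval is not enough
request.approve("controller@example.com")
print(request.can_execute())   # True – two distinct approvers

The design point is simply that no single person, and therefore no single impersonated person, should be able to release funds on their own.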