Bad actors (a polite name for online scam artists) are cleverer than we might want to admit. As consumers of content, we have become much more visual: we recognise brands by the colours they use, their logos, even their typical typography. We consume content at a glance, absorbing videos, images and infographics in seconds. We also live and work at a more frenetic pace than ever before, so attention spans are short. Bad actors know this and, of course, they exploit it.
They do so because they understand that these elements build trust in their targets. When victims see a bank's logo and a padlock symbol, it creates an impression of legitimacy and security. If someone gets an email carrying the Netflix logo, telling them there's a problem with their credit card, just as they were looking forward to watching a few episodes of their favourite show that night, you can bet many will click the link and 'log in' to try to fix the issue.
The end goal for bad actors is not just the victim's Netflix login. Once scammers have those details, they can harvest other critical information to use in social engineering attacks. Scammers know that many people reuse the same email address and password across multiple accounts, so the credentials for a relatively minor account may well open a more valuable one, such as a bank or payment system. These graphical phishing attacks are often difficult to spot, and many phishing detection systems are not yet equipped to catch the subtler efforts.
So whether you are an internet user who wants to be aware enough to protect yourself and your workplace, or you run an anti-phishing platform that protects entire organisations from compromise, this list will be of interest.
The most exploited visual element is the logo. Bad actors use logos in emails, documents and websites, not only for their confidence-inspiring effect but because most phishing detection systems simply don't do a good job of detecting them. Bad actors make detection even harder by using an outdated version of the logo, or by modifying it slightly to confuse detection systems.
It works so well that, according to Ironscales, attackers have spoofed the world's top 200 brands to create 50,000 fake login pages. Even more worrying, nearly 5% (around 2,500) of those 50,000 fake login pages were polymorphic, with one brand spinning out more than 300 permutations. According to Vade Secure, the brands most often imitated in graphical phishing attacks include PayPal, Facebook, Microsoft, Netflix and WhatsApp. So the next time you get an unexpected notification or email from one of these brands, look twice.
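One way a detection system can catch these slightly modified logos is perceptual hashing, which tolerates small pixel-level changes that defeat exact matching. The sketch below is illustrative only: images are represented as plain 8x8 grayscale grids so the idea stays self-contained, whereas a real system would first decode and downscale the actual image.

```python
def average_hash(pixels):
    """Build a 64-bit perceptual hash: each bit records whether a pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means 'visually similar'."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy "official logo": a bright square on a dark background.
logo = [[200 if 2 <= r <= 5 and 2 <= c <= 5 else 20 for c in range(8)]
        for r in range(8)]

# The attacker's tweaked copy: a few pixels nudged to dodge exact matching.
tweaked = [row[:] for row in logo]
tweaked[2][2] = 35
tweaked[5][5] = 180

dist = hamming_distance(average_hash(logo), average_hash(tweaked))
print(dist)  # a low distance flags the image as a near-duplicate of the real logo
```

Because the hash summarises overall brightness structure rather than exact pixels, the tweaked copy still lands within a small Hamming distance of the genuine logo, which is exactly the property a detector needs against minor modifications.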
Many industries have marks associated with quality and trust. Here again, bad actors are ingenious: they often incorporate these marks, which denote safe and secure online environments, into their graphical phishing attacks to lull users into a false sense of security. For instance, in a combined Harvard/UC Berkeley study, researchers found that, among other things, 23% of participants did not have a good understanding of the padlock icon, with some considering it more trustworthy when shown within the content area than in the browser header.
And in the same way that logos are difficult to identify, these industry marks are hard to verify: not only are there many variants, but they can be arbitrary in design, and most people have no way of knowing whether a 'Hacker Safe' or 'Verified By…' mark is genuine or fake.
The favicon, the little icon that appears in a website's browser tab, has become as significant as the main company logo when it comes to a user's trust in a website. With that in mind, it should come as no surprise that it is yet another visual element exploited in online phishing.
The same study showed that some participants thought that the presence of favicons indicated authenticity because they “cannot be copied”.
Favicons are also harder to detect than logos: they are very small and low resolution, so there are fewer pixels to analyse.
The context in which a logo or brand mark is used is often what gets flagged first, so bad actors will often make an effort to place the logo or mark in an unexpected location. They do this in the hope of evading phishing detection systems that only raise a red flag when the mark appears in a specific location, while still building false trust with the end user.
So, what does all this mean? Since these are the most common visual brand elements exploited in online phishing, anti-phishing platforms need to be able to spot them. Phishing detection platforms have very successfully combined supervised and unsupervised AI with more traditional techniques such as 'signatures' and heuristic rules to identify the highest possible volume of phishing attacks, but attackers know this and even use their own AI to develop countermeasures.
Identifying these visual cues could provide an excellent combined early-warning and filtering system, with high-risk emails or sites then sent for further analysis by non-visual AI systems.
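A minimal sketch of that two-stage idea: a cheap visual screen assigns each item a risk score from the cues it detects, and only high-scoring items are queued for the slower, non-visual deep analysis. The cue names, weights and threshold below are illustrative assumptions, not a real product API.

```python
# Illustrative weights for visual cues; real systems would learn these.
VISUAL_CUE_WEIGHTS = {
    "known_brand_logo": 0.5,   # logo of a brand the sender doesn't own
    "trust_mark": 0.2,         # padlock / 'Verified By'-style imagery
    "lookalike_favicon": 0.3,  # favicon resembling a known brand's
}

def visual_risk_score(detected_cues):
    """Sum the weights of the visual cues detected in an email or page."""
    return sum(VISUAL_CUE_WEIGHTS.get(cue, 0.0) for cue in detected_cues)

def triage(items, threshold=0.4):
    """Split items into those queued for deep analysis and those passed through."""
    deep, passed = [], []
    for item_id, cues in items:
        (deep if visual_risk_score(cues) >= threshold else passed).append(item_id)
    return deep, passed

# Hypothetical inbox: (message id, visual cues found by the screening stage).
emails = [
    ("msg-001", ["known_brand_logo", "trust_mark"]),  # high score: deep analysis
    ("msg-002", []),                                  # no visual cues
    ("msg-003", ["lookalike_favicon"]),               # below threshold
]
deep, passed = triage(emails)
print(deep, passed)
```

The point of the split is cost: the visual screen runs on everything, while the expensive non-visual AI analysis only runs on the small, high-risk fraction it surfaces.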
If you believe that your phishing detection system could benefit from the introduction of Visual-AI (Computer Vision), read our white paper to learn more.
Seamlessly integrating our API is quick and easy, and if you have questions, there are real people here to help. So start today; complete the contact form and our team will get straight back to you.