TLDR: Phishing attacks have reached the highest levels ever recorded. Bad actors are abusing convenient, well-known platforms to craft emails, web pages and surveys and send them to millions of target victims. Cybersecurity companies such as Inky and Cyren have successfully detected phishing campaigns that use these platforms (examples shown below). These platforms have little oversight and traceability of their subscribers, allowing anyone to set up an account and start a phishing campaign spoofing one or more brands. The question is: should they bear responsibility? The European Union says yes. The new Digital Services Act makes these ‘intermediary’ companies as responsible for the content they host and transmit as companies like Amazon are being made responsible for listing and selling fake goods. The challenge in much of this is detecting brand spoofing attempts, which is not easy. But there is an answer…Computer Vision.
It’s fair to say that phishing attacks have reached epidemic levels. In fact, the APWG (Anti-Phishing Working Group) reported 1,025,968 phishing attacks in Q1 2022 – the highest quarterly number in its reporting history and the first time it has seen phishing attacks exceed one million. It is also notable that this figure is 67% higher than for the same period in 2021 (611,877).
But how can it be that, in an era of unprecedented development in phishing detection technology, phishing attacks continue to grow?
Depending on the source you read, anywhere from 85% to 95% of all phishing attacks start with an email in the form of a social engineering attack. These types of attacks don’t need to be technically sophisticated. In fact, it’s better if they are as simple as possible. Bad actors know this, so rather than focusing on technically complex attacks they look to leverage a common human weakness – trust. If they can convince a recipient that the email is genuine, that recipient quickly becomes a victim by taking the intended action.
In this previous video, we highlight how bad actors use brand spoofing techniques, as well as visual detection-evasion techniques, to land in target inboxes and confuse victims:
But in recent times they have also learned that they don’t need to build hosting and delivery systems to distribute their attacks; they can simply leverage readily available commercial systems.
Group-IB highlighted just how big and widespread some of these phishing campaigns can be. One campaign they tracked targeted users in 90 countries utilising fake surveys and giveaways purporting to be from popular brands to steal users’ personal and payment data.
In the article they noted that:
“There has been a sharp increase in the number of brands impersonated and domains involved since we started to observe the scams involving the use of the targeted links technology.
Whereas in the past [when] the scam actors used dozens of well-known brands in their schemes, there are now more than 120 brands impersonated by scammers operating targeted links and at least 60 different domain networks as part of the ongoing scam campaign observed by the Group-IB DRP (Digital Risk Protection) unit.”
By using this broad network of 60 domains it is estimated that they were able to target almost 28 million people.
These advanced platforms are designed to handle massive volumes of emails at very little or zero cost, and there are enough of them, with a low barrier to entry, that utilising multiple systems to achieve your goals is not too onerous.
If you were to write a recipe for a phishing attack, the key ingredients would be an email, a form and a web page, either to drive people to the form or to host it. Bad actors could build platforms for each of these, but why do so when every one of them is readily available, and often for free?
These platforms all have very legitimate users in the area of digital marketing and research, but have found themselves hijacked for these nefarious uses.
It started with an email crafted and sent through Campaign Monitor. This email related to a fake COVID-19 funding assistance program for employees in specific target companies:
If the recipient clicked the blue link they were taken to this page on Campaign Monitor where they were asked to login using their company credentials:
In this example, again identified by Inky, employees in target companies received this email from an abused Hotmail account:
When the link was clicked they were sent to a Mailchimp survey page that was designed to harvest employee credentials:
This next example was detected by Cyren. It starts with an email informing the target that they have pending email messages that need to be unblocked:
If they click the link they are taken to this page created and hosted on the Wix platform:
It seems unfathomable that anyone seeing this page, with the Wix banner at the top, could believe in its authenticity, but enough do to make this simplistic approach worthwhile.
In this interesting case, found by Cyren, the bad actor makes use of multiple systems in a multistage attack. First, the user receives an email about a newly shared encrypted document. To view the document, the user should click on the “Click Here to view” link.
Once the link is clicked they are taken to a fake SharePoint page hosted on the Zyro web CMS system:
The final step activates when the user clicks the “Preview Document Here” link and is taken to a fake Office 365 login page hosted on Weebly.
One could argue that a web CMS, email or survey platform can hardly be held responsible for bad actors exploiting it. But that could not be further from the truth.
How would you react if your bank allowed anyone to abuse its systems, and you received letters asking you to transfer funds? What if you received an email from your local hospital asking you for information because anyone could simply log in to its database and start sending emails?
I understand that these institutions don’t work that way and only a compromise would allow such an event to occur, but doesn’t that make it worse? These typical marketing systems can be accessed easily by anyone who sets up an account using a generic email service. From there they can pretend to be any global company and make use of its sophisticated design and transmission systems to impact the welfare and lives of millions of companies and individuals. Don’t these platforms have an obligation to enforce some form of traceability for users? And if a user creates something that purports to be from a company with which they have no affiliation, shouldn’t the system detect this and block it?
Well, the European Union at least thinks so, and that’s why the new Digital Services Act includes provisions that put responsibility on these intermediary companies to do this and more, making them as responsible for the content they host and transmit as companies like Amazon are being made responsible for listing and selling fake goods.
Much of it will require them to be able to positively identify their customers rather than allowing an account to be created with a simple Gmail address. But there is also a content moderation task to be considered in order to stop brand spoofing.
Most anti-phishing systems rely on a programmatic approach to detect and block phishing attacks, but they can often be fooled, specifically because of the trust these platforms have built. After all, it’s not unusual for an employee of a company to receive an email or survey that originated from one of these platforms; the platforms work very hard to build their sender and overall reputation. The key, therefore, is to understand the context of the message and the authority of the sender to send it.
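To make the sender-context idea concrete, here is a minimal, hypothetical sketch of one such check: flagging a message that mentions a brand while being sent from a domain that brand is not known to use. The brand/domain table and flag logic are invented for illustration and are not any vendor’s actual detection pipeline.

```python
# Domains each brand is known to send from (illustrative sample data only).
KNOWN_BRAND_DOMAINS = {
    "microsoft": {"microsoft.com", "microsoftonline.com"},
    "paypal": {"paypal.com"},
}

def sender_domain(from_address: str) -> str:
    """Extract the domain part of a From: address."""
    return from_address.rsplit("@", 1)[-1].lower()

def brand_mismatch(from_address: str, body_text: str) -> list[str]:
    """Return brands mentioned in the body whose known sending
    domains do not include the actual sending domain."""
    domain = sender_domain(from_address)
    lowered = body_text.lower()
    flags = []
    for brand, domains in KNOWN_BRAND_DOMAINS.items():
        if brand in lowered and domain not in domains:
            flags.append(brand)
    return flags

# A phish sent through a generic marketing platform may pass domain
# reputation checks, yet still trips the brand/sender mismatch:
print(brand_mismatch(
    "alerts@mail.example-campaigns.com",
    "Your Microsoft account requires verification.",
))  # ['microsoft']
```

The weakness the article describes is exactly the gap this sketch leaves open: the mismatch check needs a curated brand list, and a reputable marketing platform’s domain passes every reputation check, so context signals like this are easy to miss without deeper content analysis.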
The only answer in this case is to use Computer Vision (Visual-AI) to analyse every aspect of the content and provide valuable signals as to its veracity. This process doesn’t supersede the traditional programmatic analysis that already happens; it enhances it with additional signals. But even this discussion should be moot, because if these platforms implemented computer vision analysis at source, they could be alerted every time a high-risk brand or government logo or icon was used. The system would also flag any uploaded image containing text, extracting that text so it can be read by computer systems and scanned for trigger words.
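As a toy illustration of that last step – scanning text extracted from an image for trigger words and high-risk brand names – consider the sketch below. The trigger-word list, brand list and risk scoring are invented for illustration; in a real pipeline, logo detection and OCR (the computer-vision components) sit upstream and supply the `ocr_text` input.

```python
import re

# Hypothetical screening lists a platform might apply once OCR has
# pulled text out of an uploaded image (both lists are illustrative).
TRIGGER_WORDS = {"password", "verify", "suspended", "urgent", "login"}
HIGH_RISK_BRANDS = {"office 365", "sharepoint", "paypal"}

def score_ocr_text(ocr_text: str) -> dict:
    """Score OCR-extracted image text for phishing indicators."""
    text = ocr_text.lower()
    words = set(re.findall(r"[a-z0-9]+", text))
    triggers = sorted(TRIGGER_WORDS & words)          # word-level hits
    brands = sorted(b for b in HIGH_RISK_BRANDS if b in text)  # phrase hits
    return {
        "triggers": triggers,
        "brands": brands,
        # Only the combination of a high-risk brand AND trigger
        # language is treated as high risk in this toy scorer.
        "high_risk": bool(triggers and brands),
    }

result = score_ocr_text(
    "Office 365: your account is suspended. Verify your password."
)
```

Even a crude scorer like this shows why moving the analysis to the point of upload matters: the platform sees the image before any email is sent, so a flagged combination of brand imagery and credential-harvesting language can be blocked at source.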
This video explains it well:
If your platform suffers from this form of abuse and you’re looking to protect it, make contact today. Simply click the Book A Demo link below to get started.