Almost two thousand years ago, the Greek philosopher Epictetus penned this important observation:
“Not every difficult and dangerous thing is suitable for training, but only that which is conducive to success in achieving the object of our effort.”
This, in my view, has never been more true than in the area of phishing, and specifically phishing training. You see, I believe that much of the training that staff in organisations receive on this subject is simply setting them up to fail. We cannot, after all, expect a group of people, often with little or no technical expertise, to act as a last-resort barrier against phishing attacks on an organisation. Yet many companies do!
I guess it’s understandable to some degree. Up to this point, phishing detection technology, although getting more powerful and reliable by the day thanks to advancements in AI, has not been foolproof, and certainly hasn’t yet reached the success level achieved in anti-spam (when was the last time you got more than a handful of spam emails in a week?).
The natural reaction, therefore, is to use employees as de facto anti-phishing barriers.
But there are a number of issues with this:
Much of the time, effort and money spent on staff training is focused on educating them about the typical attack vectors that bad actors use, and which red flags to look out for. However, in a joint research study by Harvard and UC Berkeley, the results showed that even highly computer-literate individuals were unable to reliably identify fake emails and websites, even when they were expecting to see them!
The best-produced fake site was able to fool more than 90% of participants, while the average was just under 40% of participants thinking that a fake site was genuine. This means that of every ten employees in a company, four of them would be likely to take an action on a phishing website, compromising themselves and the company!
In their dark world, a phishing campaign that garnered four engagements for every ten views would be a wild success, delivering anything from email login credentials to bank account details or even just small ‘breadcrumb’ type details that help the bad actors prepare for a major spear-phishing attack or ‘whale’ attack against a CFO or VP of an organisation.
It seems clear to me that without very intense, lengthy, and specialised training, (which is unrealistic), you could not expect a reasonable level of protection from staff.
OK, so is the answer purely technology? Sadly, no. As the leader of a technology company at the very cutting edge of phishing detection, I’d love to say it is. But I believe there will always be a place for staff training, albeit based on a different approach to that of today.
However, implementing the wrong system can further exacerbate your issue. It was Douglas Adams who came up with the concept of the ‘Somebody Else’s Problem Field’: a technological marvel that made people simply not see what was obviously right in front of their noses, because they relied on someone else to deal with it, so it was ignored. In this case it’s not a big shiny spaceship in the middle of a field, but a phishing email or website.
This is highlighted in an interview with security experts by ZDNet, who illustrated that because an anti-phishing system is in place, staff can actually become more likely to trust the communications that do reach their inbox!
Oh, for the good old days of email phishing, where you were told, in mangled English, that you’d inherited a fortune from a far-flung prince but needed to transfer $1,000 to get your millions. No, these days the communications can be very, very believable, and can come not just through email but via voice, SMS or chat app.
Remember those breadcrumbs? Over time, an employee can participate in fake online surveys, polls and seemingly innocent questions. But add all that data together and it can be weaponised to create a very convincing communication/request. How can you train for all that?
The solution to this issue is in three parts:
Look at every possible vulnerability in your business and create a process for dealing with it. For instance, what’s the process that a mid-level accounts person should follow if they receive a request from the ‘CFO’ to transfer funds to a bank account they’ve never seen before? Let’s unpack that…
Email comes in – super-urgent:
“Doris, I need you to transfer $50,000 to XYZ bank account urgently. Please reply as soon as it’s done because I’m waiting here with Bob [the CEO’s name – they’re sneaky right?] to close a deal, but they need these funds.”
OK, so the process for this could be really simple: no matter how urgent the request, Doris picks up the phone to the CFO and calls to confirm the request, and then also calls the CEO to double-confirm it. That’s a simple example, but you get the idea. Instead of Doris trying to figure out whether the email is genuine or not, she assumes it is, but follows the process to confirm, using authorised phone numbers for the key staff in question.
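The process above can be sketched in code. This is purely illustrative (all names, numbers and functions here are hypothetical, not from any real system), but it shows the key property: the transfer is blocked unless every out-of-band phone confirmation succeeds, and the numbers come from the company directory, never from the email itself.

```python
# Hypothetical sketch of an out-of-band confirmation rule for payment requests.
# AUTHORISED_NUMBERS would come from the company directory, NEVER from the
# email making the request.
AUTHORISED_NUMBERS = {
    "cfo": "+1-555-0100",  # illustrative placeholder numbers
    "ceo": "+1-555-0101",
}

def confirm_transfer(amount, requester_role, confirm_call):
    """Approve a transfer only after phone confirmation with the requester
    (e.g. the CFO) and a second approver (e.g. the CEO).

    confirm_call(role, number, amount) represents the human phone call and
    returns True only if that person verbally confirms the request."""
    approvers = ["cfo", "ceo"] if requester_role == "cfo" else [requester_role, "ceo"]
    for role in approvers:
        if not confirm_call(role, AUTHORISED_NUMBERS[role], amount):
            return False  # any single failed confirmation blocks the transfer
    return True
```

Used on Doris’s scenario: `confirm_transfer(50_000, "cfo", make_the_calls)` only returns `True` if both the CFO and the CEO confirm by phone; if either call fails, the money stays put, no matter how urgent the email sounded.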
Contrary to what you might believe having read this article so far, I am actually a big advocate of training. But it must be the right type of training. So, once you have developed your processes, they need to be drilled into your staff. Make it clear in the training how requests must come in, what wording must be used and the process for confirmation that MUST be followed – even if it does come from the ‘CEO’ and no matter the ‘urgency’. Make sure those process documents are available in print to whoever needs them, say in a ‘processes folder’.
Finally, I would be negligent, as the CEO of a technology company, not to highlight the importance of the right technology. Protecting your staff and business means choosing a solution that blocks and dumps the very highest percentage of fake communications, especially those that use visual cues to build trust.
That email that carries the bank’s logo, but whose link points somewhere else and whose copy leans on words like ‘validate’ and ‘login’? Yep, that needs extra special attention. But as we highlighted in our recent anti-phishing whitepaper, bad actors are extremely skilled at using evasion techniques, even using blackhat AI to evade the whitehat AI systems used by cybersecurity companies.
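To make the red flags concrete, here is a minimal sketch (my own illustration, not any product’s actual detection logic) of the two signals just described: the link’s real destination not matching the brand it claims, and pressure words like ‘validate’ and ‘login’ in the visible text.

```python
import re
from urllib.parse import urlparse

# Illustrative heuristic only; real detection systems combine many more signals.
PRESSURE_WORDS = {"validate", "login", "verify", "urgent"}

def looks_suspicious(display_text: str, href: str, claimed_domain: str) -> bool:
    """Flag a link whose visible text claims one brand but whose actual
    destination is a different domain, or whose copy stacks up pressure words."""
    host = urlparse(href).netloc.lower()
    domain_mismatch = claimed_domain.lower() not in host
    pressure_hits = sum(
        1 for w in PRESSURE_WORDS if re.search(rf"\b{w}\b", display_text.lower())
    )
    return domain_mismatch or pressure_hits >= 2

looks_suspicious("Please validate your login",
                 "http://secure-bank.example.net/x", "mybank.com")  # → True
```

A legitimate statement notification linking to `https://mybank.com/statements` with neutral wording would pass; the point is that the check is mechanical, which is exactly why it belongs in software rather than in Doris’s head.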
But there are solutions that take a more holistic and unique approach, embracing Visual-AI, to deliver the highest levels of protection. And leading cybersecurity/anti-phishing platforms are actively and successfully using Visual-AI alongside their analytical AI to block even the most determined and ingenious phishing attacks.
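The idea of running Visual-AI alongside analytical AI is, at its simplest, an ensemble: each engine produces its own risk score and the scores are blended. The sketch below is my own simplification with made-up weights, not how any particular platform actually combines its models.

```python
def combined_phishing_score(visual_score: float, text_score: float,
                            visual_weight: float = 0.6) -> float:
    """Blend a visual-similarity score (does the page *look* like a known
    brand while living on the wrong domain?) with a text/URL analysis score.
    The 0.6 weighting is illustrative, not taken from any real product."""
    return visual_weight * visual_score + (1 - visual_weight) * text_score

# A page that strongly mimics a brand visually (0.9) but evades text
# analysis (0.4) still ends up with a high combined score.
score = combined_phishing_score(0.9, 0.4)  # 0.6*0.9 + 0.4*0.4 = 0.70
```

The value of the blend is exactly the scenario described above: an attacker who evades the text-analysis engine still has to look convincing to a human, and that visual convincingness is what the visual engine scores.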
OK, so that’s my two cents.
Stop training employees who don’t have the expertise to succeed; instead, train them to follow safe practices. By the way, the processes I’ve suggested also protect against fraudulent requests from rogue staff. It’s simply writing down, and training for, best practice. That can never be a bad thing.
If you agree with me, drop me a line. If you disagree, definitely drop me a line and let’s chat about it.