[00:00:04.730] – Franco De Bonis
Hello and welcome to our second installment of A Poke in the AI. I’m Franco De Bonis, marketing director at VISUA, and along with my guests today, we’ll look at the growing issue of graphical attack vectors in phishing attacks and why traditional AI is struggling to stop them. There’s never been more focus on, or money spent on, preventing cyber attacks. Despite this, the number of serious phishing attempts is growing every day, and the methods employed by bad actors are getting more sophisticated all the time. We know that AI has taken a leading role in all of this, but what are the challenges, and is there a role for Visual-AI / computer vision? Bad actors are adopting ingenious techniques, using graphics in their attack vectors to fool not only humans but the phishing detection systems, too. It’s this aspect that we are going to delve into today, looking at the types of graphical attacks being employed and how Visual-AI can be used to ensure they are detected and blocked. Now, we are Visual-AI experts, but we’re not cybersecurity experts. So as we discuss this today, we not only have our CTO, Alessandro Prest, and our VP of Sales and Marketing, Declan McGonigle, to cover our side of the subject.
[00:01:21.600] – Franco De Bonis
We’re also delighted to be joined by Dr. Budgie Dhanda, cybersecurity expert from PA Consulting. Now, Budgie, I met you on a panel that you headed up last year; I should say it was very enjoyable, and I was really impressed with your knowledge and heritage in this area. So for the benefit of our viewers and listeners, could you just give us a short summary of what you’ve been up to and what you do?
[00:01:44.680] – Dr. Budgie Dhanda
Thank you very much, Franco. So I’ve been involved in and around cybersecurity for probably 25 to 30 years now, working across pretty much all the sectors: finance, telcos, transport. But my real focus tends to be around the government side, particularly defense and national security, and I’ve worked with NHS organizations as well. It’s covered all aspects of the field, starting off in the early days reasonably technical. These days, I tend to spend more of my time around strategy, policy, and governance, supporting boards in particular to understand the cybersecurity threats and the mitigation actions that need to be taken. And phishing and ransomware are still very much high on the list of priorities for pretty much everyone I speak to, so it’s a very timely conversation.
[00:02:26.640] – Franco De Bonis
Brilliant. Thank you for that, and I look forward to getting all your insights on this; it’s going to be very interesting. So let’s get started. There’s a lot of talk about brand spoofing, but not so much about this growing use of graphical attack vectors and how graphics have been weaponized. So I want to delve into that first. Declan, you’ve been talking to a lot of companies about how this could be implemented and used and what they might do with it. So how widespread an issue is this, this concept of graphics being used as a weapon?
[00:03:06.170] – Declan McGonigle
Yeah, thanks, Franco. We’ve seen this in a lot of the conversations with our clients and potential partners at the coalface; they’ve been telling us this. But if you even look at the industry stats: the number of unique phishing sites detected has grown by over 300%, unique phishing email subjects have grown by over 200%, and a worrying one is that the number of brands being spoofed has grown by over 50%. We see even in the Mimecast State of Email survey that 42% reported that brand spoofing is a growing danger and 47% reported an increase in spoofing attacks. Allied to this is the fact that employees are clicking on three times as many malicious emails as they were in 2020. So what these stats show is that not only are the numbers increasing in terms of bad actor activity, but the techniques are working: the emails and spoofed web pages are actually making it through companies’ defences. And what we’ve seen is a growth, obviously, in the area of graphical attack vectors, adding to what was there before and growing exponentially.
[00:04:13.890] – Franco De Bonis
So those are really scary numbers. Budgie, pardon the pun, you’re at the coalface; you’re doing this day in, day out, dealing with these problems. What does it mean? Can you put into context what it means to the companies you work with?
[00:04:32.350] – Dr. Budgie Dhanda
There are some really good statistics there from Declan, but I think you need to look behind the statistics, because behind every single one of those numbers there is a real victim, be it an individual, be it a corporation, be it a government, and we’re seeing more and more of these. I mean, phishing was on the rise anyway, and what we’ve seen now, particularly during the whole lockdown phase with more people working from home, is that there have been more opportunities for the bad actors to find new ways in. Be it people being rushed or panicked into decisions, buying things online, PPE or whatever else; be it organizations being under stress, moving away from working in the office to working from home and all of a sudden opening up a whole new threat surface. So behind every single one of these is a crime; it’s not victimless. And look at the scale of the impact on many of these organizations and individuals. For an individual, it could be thousands or tens of thousands of pounds that gets lost, or systems destroyed. But if you’re a big organization, not only could you end up having your systems compromised and your data leaked out onto the web, being held to ransom by ransomware and having to pay large fees to these bad actors if you want to try and get your data back; if your data is leaked outside, you could also be hit by big fines from the ICO. So it’s a huge issue, and it usually is, and if it’s not it should be, high on the list of any board’s decisions and risks every time they meet.
[00:06:09.680] – Franco De Bonis
So if I’m reading between the lines, this move to homeworking, this move to online, you’ve got a bunch of people who aren’t necessarily experts. Bad actors don’t need to be exponentially more ingenious. They need to just be a little bit more ingenious, a little bit more believable to get even better results. Is that what I’m hearing from you?
[00:06:30.190] – Dr. Budgie Dhanda
Well, what we tend to find is that actually they are becoming more ingenious; they are ingenious already. And what we find is that the people trying to defend the systems are often playing catch-up. These bad actors have access to all the latest technology, and they’re very quick to exploit anything new that comes along. So it’s a constant game of playing catch-up.
[00:06:49.520] – Declan McGonigle
I think it’s also relevant, when you talk about the victim and whether it’s a victimless crime: over here, our health service authority was held to ransom, and when you look at that in the big picture, how many people were affected by that and are still being affected by it? So it very much is not a victimless crime; it’s a very serious crime.
[00:07:07.860] – Franco De Bonis
Yeah, absolutely. So in terms of the impact of these changes to how bad actors operate, and what it means for the success of those campaigns, can you outline that from your perspective, Budgie?
[00:07:29.790] – Dr. Budgie Dhanda
Every time they come up with a new way of attacking, or refine their way of attacking, they increase the number of successful attacks. Even a 1% increase in the number of successful phishing attacks is a huge volume of crime; there are millions and millions. Many of the systems you find in big corporates are being hit by millions of different attacks in a day, in an hour, or even in minutes. So a very small percentage increase in their ability to make their phishing emails more effective is great for them; and it’s usually phishing emails, because that’s still one of the most common ways of getting malware into a system. So we need to find new ways of adapting to this, and AI and machine learning have been coming along for some time. They’re not a panacea at the moment, but every new technique that we can find to try and reduce that threat is something that we’ve got to really look at.
[00:08:33.510] – Franco De Bonis
And what’s really interesting is that even now there’s a lot of focus on B2B and corporate, but actually this goes beyond just that. Is that right?
[00:08:42.870] – Alessandro Prest
What we have observed is that the breadth is pretty wide; the spectrum of brands being spoofed, and therefore of target victims, is pretty wide. While our intuition might tell us that it’s mostly banking and insurance companies being spoofed, actually there’s a whole lot of, if you want to call it, B2C, as in targeting final consumers of specific products which are not necessarily financial. For example, we’ve seen a lot of online gaming phishing attacks: you’ll receive an email saying that your Minecraft account is about to be blocked, click on this link to restore it. Or Roblox; all these kinds of platforms where especially younger audiences play and gather. In the eyes of the attacker, those types of attacks have the benefit of selecting for a younger, more gullible audience, if you will. So we really see this expanding across the whole spectrum of online services that people use on a daily basis.
[00:10:20.970] – Franco De Bonis
So let’s get into the core of the thing, which is this concept of graphical attack vectors. And Ale, let me put this to you. What does that actually mean? How are graphics being weaponized in this way?
[00:10:38.670] – Alessandro Prest
Yeah. So it’s an interesting way to circumvent a number of checks and technologies that people have deployed over time to prevent phishing. It effectively means hiding information inside the pixels, as opposed to the HTML code, of an email or a web page. Because the security industry has historically developed extremely sophisticated methods to detect this type of attack at a code level, the attackers are now moving to a visual level: creating spoofing and phishing material that is presented to the user as an image. An unsuspecting victim wouldn’t really distinguish whether the web page they’re looking at is actually generated by code or is just an image, and packaging it as a visual element, as an image, allows the attacker to bypass most existing anti-phishing software, because that software will only be looking at the code of the email or the web page, which in this case looks non-threatening: it’s simply displaying an image, maybe with some hyperlinked “hot” areas where people will click, for example, to submit some information. And that is exactly the trigger for the attack to begin. So effectively, that’s the definition of this new vector: the graphical attack vector.
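To make the evasion concrete, here is a minimal illustrative sketch (hypothetical code, not any vendor’s actual scanner): a code-level keyword filter catches a text-based lure, but sees nothing suspicious in an email whose entire message is baked into the pixels of a single image wrapped in a hyperlink.

```python
# Naive code-level scanner: flags emails whose HTML contains phishing keywords.
SUSPICIOUS = ["password", "verify your account", "urgent", "login"]

def code_level_scan(html: str) -> bool:
    lower = html.lower()
    return any(word in lower for word in SUSPICIOUS)

# A traditional text-based phishing email: easily caught at the code level.
text_phish = "<p>Urgent: verify your account password here</p>"

# The same lure delivered as a graphical attack: the branding, the warning
# text, everything the victim reads lives inside the image pixels. The HTML
# itself is just an image wrapped in a hyperlink "hot area".
image_phish = '<a href="https://evil.example/x"><img src="cid:lure.png"></a>'

print(code_level_scan(text_phish))   # True: keywords found in the code
print(code_level_scan(image_phish))  # False: nothing suspicious in the code
```

The second email looks benign to the scanner precisely because the lure never appears in the code it inspects.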
[00:12:41.530] – Franco De Bonis
And am I right that there are two parts to this? One is, let’s call it, visual confusion: confusing recipients. I don’t like to call them victims, but recipients of miscommunication. And another part is visual evasion, which you also touched on: evading detection. So a good example of that, if I’m right, is putting a genuine-looking link into an image, but behind it you put a short code to your bad site. Have I got that right?
[00:13:18.710] – Alessandro Prest
Yeah. We’ve seen it happening in different ways. So there’s definitely that element of hiding a single link within an image that is going to take the target, the potential victim, to a separate website, and from there the attack begins. But we’ve also seen much more sophisticated approaches where there will be an image serving as the background of the website, and on top you have areas where you’re requested to put in your password or your email address, and those will be actual, proper input areas where the information is captured right there. So I guess there is a continuum of approaches here, from more sophisticated to, let’s say, plainer ones. But the concept is the same: you hide the information that makes the page credible and believable inside the pixels, so that traditional techniques are blind to it.
[00:14:41.190] – Franco De Bonis
Declan, you’re having some very specific conversations with some of these partners, and even prospective companies we’re talking to. Are there any other techniques that have been highlighted recently?
[00:14:55.660] – Declan McGonigle
I think the one that really comes to mind is QR codes. People are aware of the links, they’re aware of the use of logos, et cetera. But QR codes are now becoming another attack vector that we’re seeing used widely, and we’ve started to try and address this.
[00:15:15.330] – Franco De Bonis
And presumably that’s particularly effective at targeting mobile users.
[00:15:20.950] – Alessandro Prest
It’s aimed mainly at mobile, yes.
[00:15:24.090] – Franco De Bonis
[00:15:25.290] – Alessandro Prest
I think we are at a stage now where people are becoming much more familiar with the QR code as a bridge, a way to easily reach a link. Users will be predominantly scanning the QR code from a mobile, and that creates a very dangerous attack surface, because now you can rely on the assumption that the user is accessing the QR code from a mobile system. And there isn’t much built-in protection currently in those QR code scanners. So it represents a pretty effective attack surface right now.
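As a sketch of the kind of post-decode check a scanner could apply, here is a hypothetical heuristic on a QR code’s target URL. The shortener list and risk bands are invented for illustration, and the actual pixel decoding is assumed to happen upstream in a scanner library.

```python
from urllib.parse import urlparse

# Illustrative, not exhaustive: shorteners hide the true destination.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}

def qr_url_risk(url: str) -> str:
    """Crude risk band for a URL decoded from a QR code."""
    host = urlparse(url).hostname or ""
    if host.replace(".", "").isdigit():   # raw IP address instead of a domain
        return "high"
    if host in SHORTENERS:                # destination hidden behind a shortener
        return "medium"
    if not url.startswith("https://"):    # no TLS on a page asking for input
        return "medium"
    return "low"

print(qr_url_risk("http://bit.ly/3xYz"))         # medium
print(qr_url_risk("https://192.168.0.1/login"))  # high
print(qr_url_risk("https://example.com/"))       # low
```

Real mobile scanners mostly skip even checks this simple, which is part of why the vector works.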
[00:16:18.010] – Franco De Bonis
Okay, so let’s establish a baseline to some degree, because we now have a better understanding of what graphical attack vectors are and the various graphical mechanisms. Let’s understand this from an AI point of view, Budgie. Obviously a lot of our listeners and viewers will understand a lot of this and are implementing AI themselves, so we don’t need to go into a big, long discussion about it. But what’s the point of AI in cybersecurity? What is it used for, basically?
[00:16:57.040] – Dr. Budgie Dhanda
So there isn’t one big rationale for using AI; there are a number of them. One is that you can’t do this manually: we need to find a way of automating how we scan, particularly for phishing emails, to try and eliminate them as early as possible. The old-school, very simple techniques of scanning for keywords and blacklisting and whitelisting sites only went so far, because the attacker only had to make a small tweak and it would get through, because the signature changed. What AI helps us do is look for patterns, look for anomalies, look for context: phrases, what the email may be asking for, and then linking that to any other contextual information that may well be in there. And I know we’ll come to this: one of the uses of putting graphics in there is building confidence in what’s actually in there. So if you’ve got an email with a reputable brand in there, linked to a requirement to enter a password or reauthenticate yourself or whatever, there’s a confidence piece there. But what you also find is that the AI looks for that, or can look for that, but it also looks at where the email may have come from.
[00:18:12.430] – Dr. Budgie Dhanda
So that’s part of it. I think the difficult part of this is that AI can also be used by the other side, who are constantly finding ways of exploiting AI to try and find ways around the filters in the first place.
[00:18:26.550] – Franco De Bonis
Yeah. And I was just going to get into that, because what’s really interesting is that the likes of Nicole Eagan, currently CSO but CEO at the time at Darktrace, talked about a future where it was going to be AI versus AI. Now, to some degree, is the use of AI reacting to what bad actors are doing, or is it ahead of the game? How did that come about?
[00:18:54.630] – Dr. Budgie Dhanda
I think we’re almost in a game of who’s got the better AI. AI has been around for a long time doing all sorts of weird and wonderful stuff, everything from writing books to creating works of art. But that sort of imaginative use of AI can be used just as much to help develop new malware, or modify malware in such a way that it will get through your current filters. So it’s a game of constantly playing catch-up: they’re changing the patterns; we’re looking for new patterns and trying to spot anomalies in patterns. It’s been around for a while, and I think what we’re finding is that with AI now available at a lower price point, and people becoming familiar with it, its use is becoming more endemic in both defensive and offensive cyber.
[00:19:40.890] – Franco De Bonis
And so we kind of call this, I don’t know if this is an industry term, as I say, we’re not cyber experts at all, but when we look at the use of AI to find patterns in the code, in the background of an email or website, we talk about programmatic AI, right? So let’s understand some of the issues around that, because clearly, if it worked flawlessly, no phishing emails would get through and all bad websites would be detected. So clearly there is some challenge here. What are the holes that are being exploited? Why does it sometimes not work, I guess, is the question.
[00:20:31.350] – Alessandro Prest
Traditional AI techniques for detecting these types of attacks operate at one level of indirection from what the user sees. That level of indirection is the code, which is then interpreted by the browser, for example, to generate the page. Working at this level gives the attacker a number of advantages: they can formulate specific code that generates a page in a different, unusual way that the system might not be able to detect. AI is very important in analyzing that level, the code level, because attackers are always changing their tactics, and having more computing power available to detect these permutations is very important. But I think we should draw a distinction between operating at the code level and operating at the visual level, because at the visual level that layer of indirection is removed. If we have a technology that is able to read into images, then that is exactly what the user is going to see: there is no interpretation happening between the artifact of the attack and what the user sees. So these techniques, Visual-AI versus the traditional, as you call it, programmatic AI, operate at different levels, and I think it’s an important distinction to draw when somebody is evaluating, or looking at protecting their system with, cybersecurity and anti-phishing software.
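A minimal illustration of that level of indirection (hypothetical code, not a real scanner): two HTML fragments render the same visible phrase to the user, but a naive code-level substring match catches only one of them.

```python
# Two fragments that render identically to the user, but differ at code level.
plain = "<p>Enter your password</p>"

# The same visible text split across spans, with one letter as an HTML
# entity (&#119; is "w"): a trivial permutation of the code that defeats
# a naive substring match on the source, while the rendered page is unchanged.
obfuscated = "<p><span>Enter your pa</span><span>ss&#119;ord</span></p>"

def code_scan(html: str) -> bool:
    return "password" in html.lower()

print(code_scan(plain))       # True
print(code_scan(obfuscated))  # False, yet the user sees the same page
```

A visual-level system that reads the rendered pixels would see “Enter your password” in both cases, because the indirection between code and rendition is removed.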
[00:22:31.650] – Franco De Bonis
So going back to one of the points Budgie made earlier about what AI does: it looks for patterns, it effectively uses data, connecting different data points, to identify things. What I’m hearing from you is that this is basically just another bunch of signals.
[00:22:53.010] – Alessandro Prest
Exactly. It’s another set of signals, complementary to the signals extracted at the code level. By no means do I think that Visual-AI is going to replace the traditional techniques for anti-phishing, but I do believe they’re absolutely complementary, because they look at a different signal, a different dimension of the attack, and it’s important that both are covered in order to prevent as many attacks as possible.
[00:23:32.300] – Dr. Budgie Dhanda
I do think that’s an important point. It’s not a replacement; it’s another indicator of confidence in what is coming through or being blocked. Ultimately, you have to tune the systems, and there will always be some in the middle, and you quite often see a message that this could be spam, or this looks like spam, or this could be a phishing attack. It’s just a warning level. If it can block stuff, great. But if not, even if it’s just giving a bit more warning to somebody that you may want to have a closer look at this before you click on a link, or you may want to put it in a sandbox to check it before it goes through to the user, I think that’s all valuable and helps reduce the threat to individuals and organizations.
[00:24:11.950] – Franco De Bonis
Does it help break through the noise? Again, I’m asking you as someone at that end: any technology that helps say, okay, instead of just a warning, we can to some degree definitively tell you that this needs to be blocked. I hear about warning overload. Is that something you see?
[00:24:35.910] – Dr. Budgie Dhanda
That’s the reality for anyone fighting these attacks. The systems end up being tuned and tuned and tuned to reduce the false positives and the false negatives as much as you can. But you will always have some in the middle, particularly when you’re using something like AI, because there’s no black and white. If you’re lucky, you’ll have some which are, you know, absolutely something we don’t want to let through, and that’s great. But it’s the ones in the middle, and that’s where the value of AI comes in: it’s helping to make that judgment call for you. As you call them, the programmatic types of interventions are pretty good for the most part. But there are always variations, and it doesn’t matter whether it’s in how the phishing email is structured or in the payload that comes with it. It may be a perfectly legitimate email, but the attachment that comes with it may be infected. So there are always things which are going to need different types of treatment. What we’re looking for here is yet another indicator that helps us make those decisions. And whether it’s an automated decision or a manual decision, one way or another, if it’s another indicator that gives us confidence, that helps.
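One way to picture that grey zone is a toy model (the signal names, weights, and thresholds here are invented for illustration): code-level and visual-level indicators each contribute to a score, and a middle band triggers a warning or sandboxing rather than an outright block.

```python
# Hypothetical scoring: each detected signal contributes a weight.
WEIGHTS = {
    "blocklisted_domain":  0.9,   # code-level signals
    "suspicious_headers":  0.4,
    "logo_detected":       0.5,   # visual-level signals
    "login_form_in_image": 0.6,
}

def verdict(signals: set) -> str:
    score = sum(WEIGHTS[s] for s in signals)
    if score >= 1.0:
        return "block"
    if score >= 0.5:
        return "warn"   # the grey zone: flag for a closer look or a sandbox
    return "allow"

print(verdict({"logo_detected", "login_form_in_image"}))  # block
print(verdict({"logo_detected"}))                         # warn
print(verdict({"suspicious_headers"}))                    # allow
```

Tuning in practice is exactly the business of moving those thresholds to trade false positives against false negatives; the extra visual signal shifts borderline cases out of the grey zone.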
[00:25:50.670] – Franco De Bonis
Brilliant. Okay, very interesting. So, Ale, you’re a very smart guy; I’ve worked with you a while. But I know that we weren’t sitting there as a company looking at cybersecurity as an area we could apply Visual-AI / computer vision to; I don’t think it was even on our radar as a problem we could get involved in. So how did we end up getting involved in this?
[00:26:20.650] – Alessandro Prest
Yeah, I wish I could say that this came out of a meeting where we had this brilliant idea to go into this market, but unfortunately, that’s not the case. Like in many other verticals that we serve, we effectively got dragged into it. It’s a bit like being called in the middle of the night: “There is this problem. I see you guys do Visual-AI. Can you help us solve it?” And that’s effectively the way it went down, especially since, as you correctly said, we’re not cybersecurity experts. It’s still interesting to see, though, how effective our technology was on this problem, even though it wasn’t built for that purpose. But again, the visual element of any industry problem in a way tends to normalize things. We could also see how the cybersecurity industry evolved over decades, deploying more and more sophisticated ways to detect this type of attack at the code level; we’re relatively new to this challenge, but we could provide right away an extremely effective solution for addressing the graphical attack surface. So it’s always fascinating to see how things work out in practice.
[00:28:13.790] – Franco De Bonis
And Declan, you’ve been working very closely with companies as this has evolved, because we started off literally just doing logo detection, right? Where did it progress from there? Because now we’re using the whole suite. So why is that?
[00:28:32.330] – Declan McGonigle
The main thing we’ve seen in working with some of the partners is that AI is a very broad term; even Visual-AI is a broad term, and it means different things to different people. So the engagements with our partners in this space have shown them, as we started, like you said, looking for logos first, then favicons, then text, then login pages and password requests, et cetera, that the breadth of what we can do and the number of visual signals that can be picked up is a lot broader than was initially assumed. And we’ve seen cases where partners of ours have started working on one piece, maybe the logo piece, but then not really been able to address all the others, and having a whole suite from people who are expert in it has proven to help people be successful. I think you made a very good point: we always approach this by saying that we’re not cybersecurity experts; what we are is Visual-AI experts, and what we’re looking to do is assist people to enhance their threat scores. We don’t work with end users; we don’t work with anybody except the companies providing these services, working in parallel with them to assist and enhance those threat scores and increase their attack detection rates.
[00:29:53.430] – Franco De Bonis
Great. I’m going to ask you this question, Ale, and I don’t mean it to be some kind of opening into a sales pitch, because that’s not what this is about. But the companies that approached us could have gone multiple routes. There are various models and libraries and APIs available, and potentially they could have built it in-house, and so on. So what were the challenges that meant they sought us out as a provider, as a fixer of this problem?
[00:30:31.720] – Alessandro Prest
We are definitely in an era of democratization of AI. Right now, even with relatively small teams of engineers, it’s possible to build AI models in-house; I think we’re at a stage where, let’s say, you could build a convincing demo for the stakeholders. Now, going from that convincing demo that you built with a team of four people over a couple of months to a production system that is analyzing millions of potential attacks on a daily basis, and doing that with the highest levels of precision and recall, meaning very low false positive and false negative rates, is a different story. Another interesting aspect I wanted to point out is that the cybersecurity industry is unique in the sense that it’s an industry where good enough is not good enough. Companies are striving to provide the highest level of protection and the highest rate of detection of these types of attacks, because attackers need to be lucky just once. The attackers have the advantage of being able to replicate these attacks over and over, always with different permutations of their code or of the images.
[00:32:12.650] – Alessandro Prest
Therefore, the highest level of accuracy is required when deploying cybersecurity software. So that’s another aspect, I think, that calls for a very high grade of software being deployed. And we have a reputation in the market for being a high-quality supplier of Visual-AI technology. So I think these two aspects together made us very successful in this industry.
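The scale argument can be put in back-of-envelope numbers (the volume and error rate below are illustrative assumptions, not VISUA’s figures): even a tiny false-positive rate turns into a large absolute number of legitimate messages wrongly flagged every day.

```python
# Back-of-envelope: why "good enough is not good enough" at production scale.
daily_volume = 5_000_000        # messages scanned per day (illustrative)
false_positive_rate = 0.001     # 0.1% of clean mail wrongly flagged

blocked_legit_per_day = daily_volume * false_positive_rate
print(int(blocked_legit_per_day))  # 5000 legitimate messages flagged daily
```

The symmetric calculation for false negatives is what the attacker exploits: replaying permutations millions of times means even a tiny miss rate guarantees some attacks get through.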
[00:32:54.210] – Franco De Bonis
And that, I guess, Budgie, again from your perspective, entrenched in the industry: you’ve got this whole build versus buy versus partnership question, and we’re seeing a lot of partnerships now in approaching challenges like this. Is there a standard approach that companies take? Do they prefer to build or to buy, or what’s the story?
[00:33:21.900] – Dr. Budgie Dhanda
No, it’s very much horses for courses, and it depends on what sort of sector you’re serving and what your business model is. And there are two sides to this. There’s the commercial advantage piece: if you’re a buyer of these systems, anti-malware, SIEM tools, whatever it is, you want something that’s going to keep up to date with the threat. So you’re looking to see not only what sort of coverage the products give, but how quickly they evolve to respond to the upcoming threat. If there’s a new zero-day identified now, are you going to take one day to respond to it, one week, one month? As somebody who’s buying services, that’s the sort of thing I look for in the companies. Now, the companies themselves producing the products and services to defend against these attack vectors have got to make a decision: am I big enough? Is the problem big enough that I’ve got to have a full-time team doing this? And if you work out that you need ten or 20 people to do this, then it probably makes sense to go and build that team.
[00:34:39.390] – Dr. Budgie Dhanda
But that’s not something you’re going to do quickly, and it’s not easy to do, because these are skills that are in high demand at the moment; everybody is after data scientists and AI specialists. So you’re going to have to make a really strong business case, and you’re going to have to figure out how to evolve that team: buy some top-end talent and then grow talent underneath it, or go and buy a company, or buy in a team. So that’s one way of doing it. For others, it’s an important issue, but they probably don’t need somebody on a full-time basis, 365 days of the year; they don’t need ten people doing that. So it may be a case of a hybrid: I’ll have some people in-house and I’ll contract out some of this capability. Or it may actually be: let’s buy this in as a service, because somebody who’s doing this as a service will be dealing with multiple clients anyway. They’ll see more threat vectors in the first place, so they’ll probably have a more rounded product, because they’re developing something that’s looking at a lot more data points. So it’s horses for courses; everyone makes build-versus-buy decisions as they go, and sometimes it’s a hybrid.
[00:35:47.310] – Franco De Bonis
Right. But from what you’ve said there, it sounds like speed to market in addressing these issues is a critical consideration.
[00:35:57.090] – Dr. Budgie Dhanda
Yeah. Nobody wants to buy a piece of technology that’s not able to keep up to date with what the threats are. If I know that there’s a new attack out there today, I don’t want to be waiting a week for my provider to come up with a solution to protect me from it.
[00:36:12.990] – Franco De Bonis
Right, absolutely. So it’s that tight? I mean, is it even a week?
[00:36:17.370] – Declan McGonigle
It’s less than a week. Normally you want something within hours, a day at the most.
[00:36:24.330] – Franco De Bonis
[00:36:26.550] – Dr. Budgie Dhanda
Think about it: once something like that comes out and it leaks onto the Dark Web and people know about it, the adversaries are going to exploit it as fast as possible. They know there’s a window of opportunity; they want to get it out there as fast as possible, to as many systems as possible, and see what happens.
[00:36:46.950] – Franco De Bonis
Okay, very good. So we know some companies are already implementing computer vision in their detection stacks, but we do it differently, Ale, so I’d like you to explain what this difference is, because we know there are challenges in doing it the other way. What I mean by applying computer vision programmatically is that, again, you look at the code, then you look at where the images are and you analyze those images; but there are challenges with that as well, right? So it’s not just a simple solution. What do we do differently? And again, I don’t want this to be a sales pitch; we’re happy to share this methodology, because at the end of the day, we know our API is great, but the methodology is the important thing. So what is it that is important about our methodology?
[00:37:41.470] – Alessandro Prest
Yeah, I think it boils down to the kind of normalization we talked about before. Given machine code that will translate into a web page or an email, the key differentiator of our approach is to take that code and render it as if it were being presented to a user. We are effectively putting ourselves in the shoes of the user: creating a visual rendition of what that code is, and then applying our Visual-AI to that rendition, as close as we can to exactly what a user would see when receiving this malicious code. We call this our three-step Render, Process, Report approach. The Render part is turning that machine code into what is meant to be seen by the user. Process means applying our Visual-AI technology to extract different signals from that image: is there a company logo in this image? Is there text that hints at entering a password, an email address, or any personally identifiable information? Is there a submit button? Is there a login form on this page? Everything, again, from a visual perspective, without using the code. And then we combine the signals to establish a threat score.
[00:39:37.740] – Alessandro Prest
That score reflects how dangerous the page is if the user were to follow through with it. And that’s the Report part: making an assessment of how dangerous that visual rendition is from a user perspective, linked to things like: does it look like it’s trying to spoof a specific brand? Is it asking for personal information? Is there functionality to submit input on this page? So that, in a nutshell, is our approach for dealing with graphical attacks.
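As a rough sketch of that Render, Process, Report flow: the rendering and individual visual detectors are stubbed out below (real implementations would involve a browser-style renderer and trained vision models), but the last step shows how independent visual cues might be combined into a single threat score. All names, weights, and the threshold are illustrative assumptions, not VISUA’s actual API or model.

```python
from dataclasses import dataclass

@dataclass
class VisualSignals:
    """Signals the Process step might extract from the rendered image.
    In a real system each field would come from a vision model; here
    they are just booleans supplied by hand."""
    brand_logo_detected: bool        # known brand logo present?
    credential_text_detected: bool   # prompts for password/email/PII?
    submit_button_detected: bool     # a submit button is visible?
    login_form_detected: bool        # a login form is visible?

def threat_score(signals: VisualSignals) -> float:
    """Report step: merge independent visual cues into a 0..1 score.
    Weights are purely illustrative."""
    weights = {
        "brand_logo_detected": 0.35,
        "credential_text_detected": 0.25,
        "submit_button_detected": 0.15,
        "login_form_detected": 0.25,
    }
    return sum(w for name, w in weights.items() if getattr(signals, name))

def classify(signals: VisualSignals, threshold: float = 0.6) -> str:
    """Block the page when the combined visual evidence is strong enough."""
    return "block" if threat_score(signals) >= threshold else "allow"
```

Under these assumed weights, a spoofed page showing a brand logo, credential-prompt text, and a login form scores 0.85 and is blocked, while a page with none of those cues is allowed.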
[00:41:00.520] – Alessandro Prest
Absolutely. We’re operating at that level at which you need to be believable and you need to be credible because that’s what the user is eventually going to see. So that’s exactly the level at which we’re operating.
[00:41:16.290] – Franco De Bonis
Really good. Fantastic. Okay, I think that’s clear. So I guess the million-dollar question, Ale, and I don’t know if you can answer this, if confidentiality allows, but I know you and Declan work very closely with these companies: what’s the impact? Are we seeing tangible evidence that this is making a difference to the kinds of detections these companies are looking for?
[00:41:46.030] – Alessandro Prest
Yeah. For graphical attack vectors, we’ve seen how our technology, deployed in conjunction with traditional techniques, can lead to a double-digit percentage increase in detection. But in my opinion, the exact number is not the point. The point is that this industry requires the highest standard of accuracy, so moving the needle from 80% to 90% means a lot, and it means even more when you move it from 90% to 99%, because, as we said before, as an attacker you just need to be lucky once. So it’s an industry which is very responsive to anything that can bridge a gap in current detection capability. That, I think, is why this industry is so different, and why it makes a substantial difference to deploy the highest-quality Visual-AI on the market to plug these gaps.
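A back-of-the-envelope illustration of why those last percentage points matter, using an assumed volume of 10 million phishing attempts per day (the figure is hypothetical, chosen only to show the scale):

```python
# Hypothetical daily phishing volume, purely for illustration.
VOLUME = 10_000_000

def missed(detection_rate: float) -> int:
    """Number of attacks that slip through at a given detection rate."""
    return round(VOLUME * (1 - detection_rate))

for rate in (0.80, 0.90, 0.99):
    print(f"{rate:.0%} detection -> {missed(rate):,} attacks get through")
```

Going from 90% to 99% detection cuts the number of attacks that get through from 1,000,000 to 100,000, a tenfold reduction, which is why a single percentage point near the top of the range carries so much weight.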
[00:43:16.570] – Franco De Bonis
It goes back to Budgie’s point about the 1%, right? If attackers only need 1% of attempts to get through, then decreasing that by 1%, or preferably more, has a huge impact downstream. Have I encapsulated that right, Budgie?
[00:43:33.830] – Dr. Budgie Dhanda
Yeah, I think you’re right. I think the difficulty here is what sort of metrics you use to judge how effective something is. As you’re saying, an attacker only needs one successful attempt to get through, and the reality is we’re never going to get to 100% protection. Even saying things like “I’ve stopped 10 million emails today” is meaningless, because the volumes are going up and up anyway. How many different types of email attack have you stopped? How quickly have you responded to changes in the types of attack I’m facing? That’s probably more important to me than the raw number you’ve stopped.
[00:44:16.190] – Franco De Bonis
That’s a great perspective. Yeah.
[00:44:20.670] – Dr. Budgie Dhanda
One getting through means that your whole network could be infected.
[00:44:28.390] – Franco De Bonis
Well, okay, we’ve covered a lot of ground. So I’m going to ask what I think is a really interesting question. You’re at the forefront of the whole issue of cybersecurity; I don’t know how much you can share, but how far ahead do you look, whether in government or at the really high levels of commercial research in this area? What’s the next thing, or how far are you looking ahead?
[00:45:05.450] – Dr. Budgie Dhanda
That’s a very good question. I think we need to be slightly careful about how we answer it, because there’s a lot of research going on in this area, and companies and governments spend a fortune future-gazing at what the next threat vector could be. It’s difficult to talk about some of that, because exposing what the next line of attack could be just flags to some of the people on the other side that maybe they should go away and exploit it. But what I can assure you is that an awful lot of time and money is being spent on this by many different organizations, and some really imaginative techniques are being developed, on both sides.
[00:45:56.910] – Franco De Bonis
And those techniques are then democratized through the dark web, which I think is probably the worst aspect of it. You’ve got one clever person coming up with something and then a million people taking advantage of it for relatively little money.
[00:46:10.810] – Dr. Budgie Dhanda
Yeah. And a lot of the people looking at vulnerabilities work on the right side: there are bug-bounty-type programs, and lots of people finding zero-days report them to the right authorities, so defenses can be built for them. On the other side of it, if you can think of another way of attacking a system, you want to build your defenses, but you don’t want to start telling people what they are, because people will then go away and start thinking, can I subvert that? And not everyone will have the latest defenses, so attackers will go after the lowest-hanging fruit. So, yeah, you need to be very careful about what you communicate in this space.
[00:46:49.530] – Franco De Bonis
Noted. Okay, very good. So listen, gentlemen, thank you so much for your time. I guess I’ll round it all up by saying there’s no question that attacks using graphics and imagery are here to stay, and both victim organizations and the spoofed brands are worried about it; as we heard initially from Declan, the stats are very concerning. How computer vision, or Visual-AI, is implemented to tackle this is key: not just programmatically, but in an innovative way. And we can see that when implemented properly, it can make a critical difference to detection rates and, to Budgie’s point, broaden the types of detections that happen. And this is only the beginning, I guess, of the bad actors’ development to-do list; they’re always finding new things and new ways. I think the approach Ale talked about, looking at things visually rather than just programmatically, is critical. Adding these visual layers on top adds massive value.
[00:48:07.010] – Franco De Bonis
So we hope you’ve enjoyed the episode. Thanks so much, Budgie, for joining us; some of these insights have been really eye-opening. And thanks, obviously, to our very own Alessandro Prest and Declan McGonigle for your thoughts and insights as well.
[00:48:23.610] – Franco De Bonis
So, see you next time, when we’ll be discussing brand protection and product authentication using AI. Thank you, and goodbye.