How worried should we be about deepfakes?
Ann-Kathrin Freiberg, Manager Business Development and Marketing at BioID, pulls back the curtain on some of the most common misconceptions around biometrics and explains why new attack vectors should be managed, not feared.
What are the biggest misconceptions about biometrics that you see vendors trying to overcome?
To start with, deepfakes are just another type of video from a ‘liveness detection’ point of view. It’s a video of someone who might look like you, but it’s still just another video. The other thing is that there’s a lot of fear that companies want to create in order to foster scepticism about online processes. But it’s the task of companies, biometric vendors and solution providers to make identity verification really secure, and there are many different steps involved in doing so.
A very common misconception is that automated biometric systems are less secure than human interactions. But humans also make mistakes. For humans, it’s difficult to detect fraud and deepfakes, for instance. There are studies that show a ‘tiring effect’: a border control worker is used to seeing that the person in front of them matches the one on the ID, and over time they simply accept that the person is the same because that’s what they’re used to seeing. That happens to humans, but it doesn’t happen to machines.
It’s important to see the overall picture: automation is the way to go because of digitisation. COVID-19 has shown us that everything is going online. We want to make our processes really secure, no doubt, but there’s no reason to freak out because of a new type of attack. It’s more about reacting to it and making it as secure as possible.
What are the different types of attacks we’re likely to see on biometric systems?
There are actually two different levels of attack, but I’ll start with the one BioID focuses on: liveness detection, which is also called presentation attack detection (PAD). The name says exactly what it’s supposed to do: prevent presentation attacks – that is, attacks presented to a sensor, like a camera. Those include videos presented to the camera, 3D masks, paper masks, you name it. Fraudsters come up with so many different types of attacks, and they all need to be prevented by the algorithm, because you want to make sure the person is actually in front of the camera in that moment.
There’s an international standard for PAD, ISO/IEC 30107. This standard is really important in our industry, and you can check that companies are compliant with it. It doesn’t specifically name deepfakes as a type of attack because, as I said, deepfakes are just another type of video. So it doesn’t matter whether you present a real video or a deepfake video to the camera. From our algorithm’s point of view, we would reject it because we see it’s a video – whether it’s a deepfake or a real one.
There is also another level of attack you need to consider, especially if you have an end-user application, such as an identity verification process for opening a bank account. Say you’re the identity provider and you have a full process that checks the ID’s authenticity, checks that the person on the ID is the same as the one in front of the camera, and checks that the person is actually there – the liveness detection part. If you have a full application like that, you also need to secure it against virtual camera attacks. You want to make sure no one can inject a video into the application. Again, it doesn’t matter whether it’s a real video or a deepfake, because either causes a problem if it’s injected into the application. If it looks like you and it moves like you, then most systems would say that is you.
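To make the two levels concrete, here is a minimal TypeScript sketch of how a verification flow might combine them. Everything in it is an assumption for illustration: the function names, the placeholder liveness check and the device-label heuristic are not BioID’s API.

```typescript
// Hypothetical sketch of the two attack levels described above.
// All names are illustrative placeholders, not a real vendor API.

type Frame = Uint8Array; // raw image bytes from the capture step

interface CaptureMeta {
  deviceLabel: string; // label of the camera device that was used
  nativeApp: boolean;  // true if captured inside a native mobile app
}

// Level 1 (PAD): stubbed placeholder – a real engine would analyse
// the frames for replayed videos, masks, deepfakes and so on.
async function detectLiveness(frames: Frame[]): Promise<boolean> {
  return frames.length >= 2;
}

// Level 2 (application): was the capture channel itself trustworthy,
// or could a video have been injected via a virtual camera?
function isTrustedCapture(meta: CaptureMeta): boolean {
  return meta.nativeApp || !/virtual|obs|manycam/i.test(meta.deviceLabel);
}

async function verifyIdentity(frames: Frame[], meta: CaptureMeta): Promise<boolean> {
  // Both levels must pass: an injected deepfake never touches the
  // physical camera, so only the second check can catch it.
  return (await detectLiveness(frames)) && isTrustedCapture(meta);
}
```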
So those are two levels of attacks that must be considered for creating secure applications and secure identity verification. We work closely with our partners to make sure both layers are secured.
Have any of your customers experienced a deepfake attack?
Yes, they have, especially because this type of attack is growing. But it’s still at a level where the quality isn’t very good. What we normally see is that it’s the application level being attacked. If a video is presented to the camera, the liveness detection would catch a deepfake. But if the application layer hasn’t received enough attention – and virtual camera attacks are then easily possible – that’s when our customers contact us and ask about it. Then we explain the different types of attacks and how they can secure their applications against them.
How do you help them mitigate those threats?
The best way is to use native apps. It’s not so easy to inject a video into a native app – it’s actually not really possible on iPhones. I’m not sure about Android, but it’s much more difficult than doing it on Windows. So, one thing is using native apps.
Many customers tell us, “Well, we still need our web applications.” In that case, you can blacklist the most common virtual cameras. This is an easy measure, and it definitely makes things more difficult for a standard fraudster.
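As a rough illustration, a web application can check the labels of the available video inputs against a list of well-known virtual cameras. This is a minimal browser-side sketch using the standard `navigator.mediaDevices` API; the names in the list are illustrative, not exhaustive, and device labels are only populated once the user has granted camera permission.

```typescript
// Minimal sketch: flag known virtual cameras by device label.
// The name list is illustrative, not exhaustive.
const VIRTUAL_CAMERA_NAMES = [
  'obs virtual camera',
  'manycam',
  'snap camera',
  'xsplit vcam',
];

async function hasKnownVirtualCamera(): Promise<boolean> {
  // Labels are empty strings until getUserMedia() has been granted.
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices
    .filter((d) => d.kind === 'videoinput')
    .some((d) =>
      VIRTUAL_CAMERA_NAMES.some((name) => d.label.toLowerCase().includes(name)),
    );
}
```

Because an attacker can simply rename a device driver, a check like this only raises the bar; it complements, rather than replaces, liveness detection.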
On an algorithm level, we can use something called ‘challenge response’. It means you ask the user to perform a certain challenge. This is part of active liveness detection, and the benefit is that pre-recorded videos have no chance of completing the challenge. If you ask the user to move their head up and then to the side, for instance, a fraudster is very unlikely to have a pre-recorded video or deepfake of the person that follows exactly those movements.
This is optional, as our standard liveness detection doesn’t require any challenges. But if you want to have a higher level of security, then challenge response is definitely something you can do.
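To illustrate the idea, here is a minimal sketch of one challenge-response round. It assumes a hypothetical pose classifier upstream that turns the camera feed into a sequence of detected head movements; the names and the challenge format are assumptions, not BioID’s implementation.

```typescript
type HeadMove = 'up' | 'down' | 'left' | 'right';
const MOVES: HeadMove[] = ['up', 'down', 'left', 'right'];

// Issue an unpredictable challenge. A production system should draw
// from a cryptographically secure source (e.g. crypto.getRandomValues)
// rather than Math.random().
function issueChallenge(length = 3): HeadMove[] {
  return Array.from(
    { length },
    () => MOVES[Math.floor(Math.random() * MOVES.length)],
  );
}

// The observed movements must contain the challenge moves in order.
// A pre-recorded video or deepfake cannot anticipate a sequence that
// was generated after the recording was made.
function challengePassed(challenge: HeadMove[], observed: HeadMove[]): boolean {
  let next = 0;
  for (const move of observed) {
    if (move === challenge[next]) next++;
    if (next === challenge.length) return true;
  }
  return false;
}
```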
What do you think is going to happen with adoption and liveness generally?
There’s no end game. There are always new ways to spoof systems, and it’s our task to make that as difficult as possible. It’s more like a game between the fraudsters and the non-fraudsters – the good guys and the bad guys. It’s really important to stay on top of everything, so ongoing development is one thing.
You cannot buy a solution and then assume it will keep working for the next year. BioID, for instance, releases updates every four to six weeks, with each release accounting for new types of attacks and feeding more data into our algorithms.
A few years ago, you could have bought a biometric technology and believed you could use it for five years – but that’s just not how it works any more. You really need to make sure you get regular updates, and you need a dedicated team that keeps optimising performance and fraud prevention.
Want more insight into the world of cybersecurity, digital identity, biometrics and more? Get your fix with the IDentity Today podcast, hosted by Daltrey CEO Blair Crawford. Listen via Apple Podcasts, Spotify or your favourite podcast app.