Is facial recognition vendor testing The Hunger Games of biometric assessment? Peter Martis, Director of Global Sales at Innovatrics, examines the state of facial recognition technology, how the media and end users are incorrectly assessing it, and why we need to fight to ensure this technology stays in the right hands.
Facial recognition vendor testing has been described as The Hunger Games of biometric assessments. What are your thoughts on that?
Facial biometrics in general – or AI and the facial biometrics derived from it – has become such a popular product. The market is overflowing with technologies that all claim to be the best, the most accurate, the fastest, the most secure. We know that research and development in this field is led by Chinese, Russian and American companies. Some of the Chinese algorithms have been banned globally, or at the very least in the Western world, and we are trying to position ourselves in this cut-throat market.
With Innovatrics being a long-established technology company, we always try to position our technologies with a certain grain of salt. We want to share the right messaging, stating the accuracy as it actually is. Whereas all these new, heavily funded startups don't really care about their reputations, because they don't have any. So they are promising results and performance that are a) unachievable and b) unrealistic – and that they couldn't possibly have developed in-house, as far as the R&D goes.
So we are trying to survive, sell and compete against these companies that have zero business ethics. They don't really understand the subject. They just throw in a bunch of money, and a bunch of R&D guys come up with an algorithm or an engine they can sell later on. That makes it very difficult for the customers, for the end users, for banks, for all the enterprises to pick the right vendor. If you look at their websites, they all claim the same ultra-secure and fast engine, and as an end user you don't really understand which one you should go for.
You make an interesting point about some of the algorithms being banned. Why is that happening?
In most cases, they've been training their algorithms on data without approval. They haven't received consent for the photographs, or from the people who were photographed. Think of it like somebody grabbing all of your Facebook photos and using them for neural-network training. This is essentially what you do with facial recognition: you have to show the training engine millions of photos from different angles, and the neural network eventually learns what makes those faces belong to the same person.
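To make the idea concrete: once trained, such a network maps every face photo to an embedding vector, and two photos are judged to be the same person when their embeddings are sufficiently similar. This is a toy sketch only – the embedding values, names and threshold are invented for illustration, not taken from any real system:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.6):
    # In practice the threshold is tuned to trade off false accepts
    # against false rejects; 0.6 here is arbitrary.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings standing in for a trained network's output.
alice_photo_1 = [0.9, 0.1, 0.2]
alice_photo_2 = [0.85, 0.15, 0.25]  # same person, different angle
bob_photo = [0.1, 0.9, 0.3]

print(same_person(alice_photo_1, alice_photo_2))  # True
print(same_person(alice_photo_1, bob_photo))      # False
```

The training data question in the interview is exactly about where the millions of photos used to learn this embedding function come from.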
Chinese companies, especially, don't really have any consent. They just take the government database of people who cross the border – an immigration database, if you will, or a civil database – and use this huge, high-quality database for training, which eventually makes the algorithm really accurate. But they're using data that people never gave consent for, which is basically an invasion of privacy.
That is one of the more contentious points around the use of biometrics, especially from an ethical and moral standpoint. As a biometric community, how can we collaborate as professionals to combat some of the misuses of the tech and certainly the non-consent you’ve just described?
Thanks for asking this question, because it's exactly what we've been trying to address for the past 12 months under the banner of 'ethical biometrics'. I believe we are divided into two groups: the old-fashioned professionals who genuinely care about the technology's reputation, and then a group made up of heavily funded startups that just roll over everybody to make claims and make sales.
This is confusing the market, but it's our obligation to stay in the first group – to constantly educate the market and the customers, explaining what's what. That's where benchmarking and comparing results come in, as well as making sure that all the data we use as a community of professionals is officially available and legally usable for the purposes its owners have approved. This is what we're doing: educating the market and making sure we're using valid, ethical and approved datasets for training.
What's interesting is this idea that if you have a picture of someone, you can create a biometric that can be used to represent them during a transaction. That's what the media might see as the major risk of biometrics. But as we know, that is a flawed concept, because it doesn't really matter if someone has a picture of you. If an attacker can take a picture, create some sort of biometric template, represent that person at the time of the transaction and also inject that liveness and biometric template event into the network as part of an authentication stream, then the company has significantly bigger problems than a photo stolen from Facebook. How do we address that misconception?
Let me take this from another angle. What you were essentially saying is that in order to use facial biometrics, all you need to see is someone's face. Now, we want to make sure that the face is an actual human being, not a photograph that's been printed out or displayed on another screen. So we need to be able to distinguish between Blair printed on paper and Blair looking at me from the webcam on his laptop.
Now, there is liveness detection technology that does that, and obviously it is as important as the facial biometrics itself. As accurate as the facial biometrics might be, if it's satisfied with a printed photo and passes it off as the real person, then it's useless. So liveness is very important, and it can be more or less convenient for the user. The ultimate form is called passive liveness: you don't have to do anything to verify that you are live. There are also challenge-based liveness systems, which require you to cooperate and perform some action like blinking or smiling, which is less convenient and easier to spoof. So there are technologies that can do liveness detection.
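The challenge-based flow described above can be sketched as a simple protocol: the system issues a random challenge and only accepts if the detected action matches it. This is an illustrative toy, not any vendor's implementation – the challenge list and function names are invented, and a real system would derive `detected_action` from video analysis:

```python
import random

# Hypothetical set of actions a challenge-based liveness system might request.
CHALLENGES = ["blink", "smile", "turn_head_left"]

def issue_challenge(rng=random):
    # Randomness matters: a pre-recorded video of one fixed action
    # can't answer an unpredictable challenge.
    return rng.choice(CHALLENGES)

def verify_challenge(issued, detected_action):
    # Passes only if the user performed the action that was just
    # requested – a static printed photo cannot blink on demand.
    return issued == detected_action

challenge = issue_challenge()
print(verify_challenge(challenge, challenge))  # cooperative live user: True
print(verify_challenge("blink", "none"))       # printed photo: False
```

Passive liveness, by contrast, analyses a single frame or short capture for artifacts of a spoof (screen moiré, paper texture, depth cues) without asking the user to do anything, which is why the interview calls it the most convenient option.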
But as you mentioned, you need someone's photo to even start attacking the biometric engine. As much as people may hate biometric technology because it's perceived as 'evil', they post their photos left and right on all the social media platforms, without really caring what might happen to those photographs – which strikes me as really strange.
Education is the way forward, I believe. People must be cautious about what kind of photos they post and where.
Is it our job, as companies that are in this field, to make sure there are more good use cases than there are bad use cases, so that people get the best understanding of how to use this technology?
Yes, that's very true. How do we make sure people see the technology used with the best possible intent and purpose? We are actually addressing this internally. Our R&D people, our engineers, our employees are always concerned about the possibility that they are developing a weapon of mass destruction. So we're really careful about making sure our technology ends up in the right hands, for the right things, for the right reasons.
We are working heavily on our Code of Conduct, researching the policies that our companies have in place, and trying to control, as much as possible, how the technology is used. So it's about doing more good than bad. We don't want it to end up in the wrong hands.
Want more insight into the world of security, identity access management, biometrics and more? Get your fix with the IDentity Today podcast, hosted by Daltrey MD Blair Crawford. You can start on Episode 1 here or listen via Apple Podcasts, Spotify or your favourite podcast app.