Following reports of a CHOICE investigation into the use of facial recognition technology by Australian retailers, it’s time for the market to push for the responsible application of biometric programs.
A group of major Australian retailers was thrust into the spotlight this week when news outlets reported they were using facial recognition technology on customers. A recent investigation by CHOICE revealed the companies were “capturing the biometric data of their customers”. And while this information is publicly available in their online privacy policies (as well as in discreet store signage), 76 percent of shoppers surveyed said they didn’t know the technology was used in Australian stores.
Many previous implementations of biometrics have resembled the ‘wild west’: irresponsible applications, and little public understanding of how and why the technology is used. But it’s time for the industry to drive the responsible use of biometric technology, educate consumers and establish higher levels of trust and accountability.
“It’s the responsibility of the industry and market to get this right,” says Blair Crawford, Daltrey CEO and Co-founder. “The impetus is on biometric vendors to push and promote the responsible use of biometrics and to say no when a customer wants to use the technology in a way that isn’t ethical or morally responsible, for example if an organisation wants to do mass identification for demographic profiling.”
Biometric programs should be built on consent: people opt in with a clear understanding of the program’s scope and the value it delivers to them.
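To make the consent model concrete, the check below is a minimal, hypothetical sketch of a consent-gated enrolment step. All names (`ConsentRecord`, `enrol`, the field names) are illustrative assumptions, not any real product’s API; the point is simply that enrolment cannot proceed without an explicit opt-in tied to a communicated scope and value.

```python
from dataclasses import dataclass

# Hypothetical example only: illustrates the principle that biometric
# enrolment requires an informed, explicit opt-in. Names are invented.

@dataclass
class ConsentRecord:
    user_id: str
    scope: str            # what the biometric data will be used for
    value_statement: str  # the benefit communicated to the person
    opted_in: bool        # explicit opt-in, never assumed by default

def enrol(consent: ConsentRecord) -> bool:
    """Allow enrolment only when consent is explicit and informed."""
    if not consent.opted_in:
        return False
    if not consent.scope or not consent.value_statement:
        # An opt-in without a communicated scope and value
        # is not informed consent, so refuse enrolment.
        return False
    return True
```

Under this sketch, an opt-in collected without telling the person what the data is for would be rejected, which mirrors the article’s point that scope and value must be communicated before consent counts.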
Fortunately, there are now numerous standards guiding the application of biometric technology. Any organisation purchasing a biometric capability should require adherence to standards such as ISO/IEC 24745:2022 – which defines principles for the confidentiality, integrity and privacy protection of biometric information – in its consultant specifications and vendor responses.
It’s also important to delineate between applications of the technology: for example, law enforcement using biometrics to identify people who’ve been confirmed as a threat to public safety is very different from mass identification of the general public without consent.
“A responsible biometrics program is initiated on a consent basis when the scope and context is clearly communicated to the user for them to opt in,” Crawford stresses.
Technology in itself isn’t good or bad; what matters is how it’s applied. And there are plenty of examples of biometric technology used responsibly and ethically – such as verifying people accessing disaster relief payments during the bushfires and floods, when they’d lost all their identity documents.
For more information about the responsible use of biometric digital identity technology, contact us today.