Facial recognition technology promises to provide businesses and governments with an additional source of information about consumers. However, its use has raised both legal and ethical concerns, and privacy regulators, including the Office of the Australian Information Commissioner (OAIC), have recently commenced investigations into its use.
Organisations and agencies contemplating facial recognition technology should proceed cautiously, weighing both legal compliance and community expectations before adopting it.
What is facial recognition technology?
Facial recognition is a multi-stage process that typically works by algorithmically isolating the subject’s face within a digital image, extracting the geometry of its prominent features (such as the eyes, nose and jawline) into a numerical representation, and ‘mapping’ that representation against a database of known faces.
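By way of illustration, the sketch below shows that detect, encode and match pipeline using the open-source face_recognition library. The image filenames and the enrolled ‘database’ of two faces are hypothetical, and 0.6 is simply the library’s default match threshold.

```python
# Hypothetical sketch of the detect -> encode -> match pipeline using the
# open-source face_recognition library (pip install face_recognition).
import face_recognition

# Enrolment: build a small 'database' of known faces from reference images
# ("alice.jpg" and "bob.jpg" are hypothetical filenames).
known_names = ["alice", "bob"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in known_names
]

# Stage 1: isolate the subject's face within the probe image.
probe = face_recognition.load_image_file("subject.jpg")
locations = face_recognition.face_locations(probe)

# Stage 2: extract the face's geometry as a 128-dimensional encoding
# (assumes at least one face was found in the image).
probe_encoding = face_recognition.face_encodings(probe, known_face_locations=locations)[0]

# Stage 3: 'map' the encoding against the database; a lower distance means
# a closer match (0.6 is the library's default tolerance).
distances = face_recognition.face_distance(known_encodings, probe_encoding)
for name, distance in zip(known_names, distances):
    print(f"{name}: distance={distance:.3f}, match={bool(distance <= 0.6)}")
```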
How is facial recognition currently used?
Biometric authentication (for example, Apple’s Face ID identity validation system) is currently facial recognition technology’s best-known application. However, the technology is being used in many other ways across both the public and private domains.
It has been employed in police checks in Australia and overseas, to validate the identities of travellers entering the country (eg at the 'smart gates' used at airport arrival and departure terminals) and to manage access to secure locations (eg Perth-based Tecsec’s products can alert security to trespassers, banned persons and underage patrons).
It is being used in healthcare to validate medication adherence and reduce human error, to detect the expression of certain genetic diseases, and to measure the level of pain experienced by patients.
It is also being employed to track consumers ‘offline’ (including their interest in different products and their route through physical stores), identify behavioural patterns in relation to social media accounts, and determine customer satisfaction (for example, 7-Eleven recently announced that it is deploying facial recognition technology in 700 stores Australia-wide to authenticate customer feedback).
Key ethical and business considerations
The controversy surrounding PULSE, a model developed at Duke University that generates a plausible high-resolution face from a pixelated input image, illustrates some key ethical considerations for business.
Duke's own analyses of the model suggested accurate performance, but a number of prominent AI ethicists, including Google Ethical AI co-lead Timnit Gebru, noted that the model performed poorly on faces of colour, tending to impute a ‘whiter’ face. For example, when run over a pixelated image of Barack Obama's face, the model returned an image of a white male. As Facebook's Chief AI Scientist Yann LeCun noted, this was partly the result of training the model on an unrepresentative, mostly white, dataset. Similar disparities have been found in commercial facial analysis systems from Amazon, Microsoft and IBM, some of which had error rates of 0% for white males but around 30% for black females.
Organisations considering facial recognition technology should, in addition to testing a model’s performance over the whole population, assess its accuracy across relevant subpopulations and their intersections (eg people of Asian origin, women, and women of Asian origin), as sketched below. This can mitigate problems when such technologies are commercially applied, such as inaccurately determining the alertness of non-white truck drivers, or misdescribing the purchasing demographics of a given retail item.
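A minimal sketch of such disaggregated testing, assuming a hypothetical table of evaluation results in which each row records whether the model classified one image correctly, together with the subject’s demographic attributes:

```python
# Hypothetical disaggregated accuracy report: the data below stands in for
# real per-image evaluation results joined with demographic labels.
import pandas as pd

results = pd.DataFrame({
    "correct":   [True, False, True, True, False, True, True, False],
    "sex":       ["F", "F", "M", "M", "F", "M", "F", "M"],
    "ethnicity": ["Asian", "Asian", "Asian", "White", "White", "White", "White", "Asian"],
})

# Whole-population accuracy can mask large subgroup disparities...
print("overall accuracy:", results["correct"].mean())

# ...so also report accuracy per subpopulation and per intersection of
# subpopulations (eg women of Asian origin).
print(results.groupby("sex")["correct"].mean())
print(results.groupby(["sex", "ethnicity"])["correct"].mean())
```

A large gap between the overall figure and any subgroup figure is a signal to retrain or retest on more representative data before deployment.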
Finally, organisations should consider whether the use of such systems exposes them to the risk of being seen to create or reinforce bias. Google recently announced, to some praise, that it would remove inaccurate gender labelling from its tools. Such measures are a relatively simple way of ensuring that facial recognition software and AI processes align with the organisation’s values, and they reduce the risk that the organisation is perceived as engaging in undue surveillance or other poor privacy or data governance practices.
Case Study: Clearview AI
On 9 July 2020, the OAIC and the UK Information Commissioner's Office (ICO) announced a joint investigation into New York-based Clearview AI Inc. The company had controversially ‘scraped’ more than 3 billion publicly available images of identifiable individuals from the internet (including from Facebook) to populate a database, without the subjects’ consent. Users of the software can upload a photograph containing a face they wish to identify and receive identifying details, including social media accounts. On 14 July 2020, documents revealed that the Australian Federal Police had used Clearview’s facial recognition technology.
In Australia, the Australian Privacy Principles (APPs) govern rights and obligations surrounding the collection, use and sharing of personal information. Broadly, these principles govern when (ie if reasonably necessary and connected to the business) and how (ie given clear disclosure and consent) personal information, such as images containing customers’ faces, should be collected, stored and destroyed. Because biometrics are used to identify individuals, this type of personal information constitutes 'sensitive' information under the Privacy Act, which is afforded a higher level of protection and can generally only be collected with the individual's consent. Until it is clear how the APPs and privacy laws in general intersect with emerging technologies such as facial recognition software, many organisations within and outside Australia have called for a moratorium on its use; San Francisco has gone further, recently banning government use of the technology altogether.
Despite these legal uncertainties, the joint investigation into Clearview AI signals a willingness on the part of Australian authorities to investigate those who misuse sensitive personal information, including organisations that employ or sell technologies that may facilitate such misuse.
While a private entity may well be lawfully permitted to collect certain data from its customers and stakeholders, the more pertinent question is whether it should do so, including whether doing so is a sound business decision. The Clearview and PULSE cases illustrate why organisations must understand not just the privacy and other legal implications of employing new technologies, but also their ethical and supply chain implications.
Government and business can expect increased regulatory and community scrutiny of facial recognition technology. Before adopting it, we recommend undertaking a privacy impact assessment that considers both compliance with privacy law and alignment with evolving community expectations.