E-tika podcast – (Un)ethical biometrics
The E-tika podcast focuses on the social and ethical dimensions of digital technologies. In the second season, we discussed many interesting topics with guests from a range of relevant areas of expertise. The interviews are in Slovak only, so we decided to bring you an English summary of each episode.
In this episode of the E-tika podcast, the hosts Juraj Podroužek and Matúš Mesarčík invited Ján Záborský, who works at Innovatrics – a company developing various biometric systems. They discussed questions related to the ethical aspects of biometric systems.
Biometric systems have the potential to offer a plethora of everyday benefits – from securing safe and seamless access to areas such as airports to protecting our mobile banking apps. At the same time, however, certain types of biometric systems, such as remote facial recognition used in public spaces, are perceived as risky. Some of them raise worries about data misuse by public authorities or private companies. Additionally, as with many other AI applications, the deployment and use of biometric systems raise questions about their fairness and transparency.
Many controversies surrounding biometric systems, and facial recognition systems in particular, have resonated in society in recent years. For example, a report by the EU Agency for Fundamental Rights found that only 27% of Slovak citizens would be willing to share an image of their face for the purpose of identification by public authorities. For private entities, this number is even lower – only 10% of citizens. On top of this, there are various activist movements, such as Reclaim Your Face, whose petition to ban biometric systems has been signed by hundreds of thousands of people. However, the situation is not always so black-and-white. Some research and surveys suggest that the context in which such systems are deployed matters greatly, and that citizens are also sensitive to the perceived usefulness and benefits of these technologies.
When it comes to AI-powered biometric systems, some nuance in the terminology is necessary. For example, the European Commission’s proposal for the Artificial Intelligence Act defines biometric data as personal data resulting from the technical processing of the physical, physiological, or behavioral characteristics of a natural person. Facial recognition systems, especially remote facial recognition systems – in which the person might not even be aware of such processing and identification – are considered high-risk AI systems or are even banned under some circumstances. Certainly, real-time biometric identification can have chilling effects, especially when deployed in contexts such as public gatherings or political protests. On the other hand, biometric verification, as used in mobile applications, should not fall under the high-risk category, because these apps verify your identity against data you have provided beforehand and with your explicit consent.
One of the most controversial aspects of facial recognition systems has been their low performance or accuracy for some people. For example, Joy Buolamwini, a researcher at the MIT Media Lab, has shown that skin color and gender can affect the accuracy of facial recognition models, which tend to be less accurate when identifying darker-skinned female subjects. It is therefore necessary to pay attention to the data on which such models are trained and the context in which they are created. When it comes to accuracy, the biometric systems developed at Innovatrics benefit from the fact that the company builds such systems for non-European clients as well.
However, it remains a priority to secure the accuracy of facial recognition systems by obtaining high-quality data on which such models are trained. For this reason, significant effort goes into making sure that users provide accurate and clear photos. The size of the training dataset is also important. Obtaining large datasets, however, can be quite a challenging task, so it is necessary to consider how to obtain such data ethically and with consent. There are new developments in this area as well: synthetic (artificially generated) data are used more often, while practices such as web scraping are being abandoned for ethical and even legal reasons.
The negative consequences of biometrics’ low accuracy are also manifold and highly dependent on the use cases for which such systems are deployed. For example, we might be patient when our banking app does not recognize us on the first attempt. But the consequences are far more significant when we are, for example, denied our right to cast a vote in an electoral system that employs facial recognition or verification. This issue is also tightly related to the notion of transparency and explainability of AI systems, which we discussed previously in the second episode of the second season.
For this reason, there are now efforts to consider the ethical aspects of biometric technologies, especially given that such systems are often developed for multiple purposes and then adjusted to specific use cases. It is therefore important to embed values such as transparency and privacy into the design of biometric systems, so that we mitigate the possibility of these technologies being used for nefarious purposes, for example by undemocratic regimes.