The proliferation of facial authentication systems in recent years has sparked debate among data researchers, law enforcement officials and privacy advocates.
According to industry estimates, the US facial recognition market will grow to $7 billion by 2024. Applications span industries including retail, healthcare, law enforcement, marketing and financial services.
Privacy activists have criticised this widespread use of the technology: with access to large databases of faces and other biometric credentials, it exposes ordinary citizens to serious privacy and data breaches.
Facial Recognition and Breach of Privacy
For law enforcement agencies in particular, the implications of an incorrect face recognition match are serious and sometimes irreversible. Ordinary citizens are scanned every day through surveillance cameras and matched against countless stored photographs in the search for criminals.
For marketing and retail purposes, the technology helps follow buyer trails and customise campaigns down to the last click. This means search histories, phone galleries and social media profiles are helping track scores of potential customers every day.
Social media giants such as Facebook, in particular, have recently been fined on similar grounds.
At the same time, the technology also makes life easier for customers, as verification and onboarding procedures become quicker and largely frictionless. An iPhone X can be unlocked with Face ID, and soon you will also be able to make payments using only your face as proof of identity.
Understanding Face Detection Algorithms
Research shows that the accuracy of facial recognition software has improved twentyfold in recent years. However, because the technology is so easily accessible, impostors are also catching up, finding ways to trick the software into making inaccurate decisions.
Cybercrime, in the form of unauthorised access to accounts, makes up a significant share of current threats to digital privacy. Malicious activity on the internet costs an estimated $2.9 million every minute, adding up to roughly $1.5 trillion a year.
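As a quick sanity check, the per-minute figure does annualise to roughly the quoted total, assuming the $1.5 trillion is meant as an annual estimate:

```python
# Sanity check: does $2.9 million per minute add up to ~$1.5 trillion?
# Assumption: the $1.5 trillion figure is an annual total.
minutes_per_year = 365 * 24 * 60               # 525,600 minutes
cost_per_minute = 2.9e6                        # $2.9 million
annual_cost = cost_per_minute * minutes_per_year
print(f"${annual_cost / 1e12:.2f} trillion per year")  # → $1.52 trillion per year
```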
Here’s why it happens:
Machine learning systems use deep learning techniques, typically convolutional neural networks, to analyse large amounts of data and detect recurring patterns. The algorithms perform face recognition by measuring a complex set of facial features, such as the size of the eyes, the shape of the nose and the contours of the face. Other features include the width of the face, the height of the nose, and skin colour, texture and tone. These processes require substantial resources and computational power, and the results are not accurate 100% of the time.
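The matching step can be sketched as comparing numeric "embedding" vectors distilled from those facial features. This is a minimal illustration, not any particular vendor's algorithm; the toy vectors, dimensions and threshold below are assumptions for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(emb_a, emb_b, threshold=0.6):
    """Declare a match when similarity exceeds the threshold.
    The threshold is a tunable assumption, not a standard value."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy 4-dimensional embeddings (real systems use hundreds of dimensions).
enrolled    = [0.10, 0.80, 0.30, 0.40]
probe_same  = [0.12, 0.79, 0.28, 0.41]    # near-duplicate of the enrolled face
probe_other = [0.90, -0.20, 0.10, -0.50]  # a different face entirely

print(is_same_person(enrolled, probe_same))   # → True
print(is_same_person(enrolled, probe_other))  # → False
```

Because the decision reduces to a similarity score against a threshold, errors in either direction are inherent: set the threshold too low and impostors slip through; too high and legitimate users are rejected.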
Regulating the Use of Face Recognition
The upward trend in cybersecurity attacks has led to the demand for advanced solutions that address both accuracy and privacy concerns of AI and face recognition technology.
Lawmakers will need to step in and enforce regulation to mitigate the risks of misuse of sensitive customer data. Regulatory authorities that monitor online activity must keep a keen eye on developments in AI and keep pace with advances in biometric services.
The General Data Protection Regulation (GDPR) of the EU and the California Consumer Privacy Act (CCPA) in the US are two of the most successful regulations in this context. They have enhanced privacy rights and consumer protection by regulating the application of facial technology across industries.
Companies are required to disclose how consumer data is being used (or abused) and to keep logs of how digital databases are used to further company objectives.
As we move towards an increasingly digitised future, the scope for limiting the use of AI-powered technology will shrink. The best approach, therefore, is to develop strategies to regulate and monitor its use, and to take greater responsibility for its consequences for user privacy.