
Facial Recognition Technologies (FRT), from unlocking smartphones to enabling employers to track productivity and facilitating police surveillance at protests, are increasingly woven into the fabric of our daily lives. The development and application of FRT have surged, with projections indicating the global facial recognition market will reach a staggering $12.67 billion by 2028.1 However, this rapid growth is accompanied by escalating risks tied to the casual or excessive reliance on facial recognition software, which can significantly infringe upon individual rights and freedoms.
One prominent issue is algorithmic bias. Although some facial recognition algorithms boast classification accuracy rates exceeding 90%, these results are not evenly distributed across demographic groups. The lowest accuracy rates are often found among individuals who are female,2 Black, and aged between 18 and 30 years.3

Another pressing concern is privacy infringement. With the pervasive deployment of FRT, many individuals are subject to surveillance without their knowledge or consent, whether in public spaces or in the workplace. Such widespread surveillance not only invades personal privacy but also fosters an environment of continuous monitoring, which can lead to psychological distress.

Finally, the unregulated use of facial recognition can result in wrongful identification and the ensuing legal and social ramifications. There have already been instances of false arrests and accusations due to errors in FRT, undermining individuals’ rights to fair treatment under the law.
The racial and gender implications of facial recognition have been thoroughly examined, highlighting how these technologies can exacerbate existing biases. A recent case in point involves Harvey Murphy Jr., who is suing Macy’s and the parent company of Sunglass Hut after being erroneously accused of armed robbery, a mistake blamed on faulty facial recognition technology.4 Murphy’s subsequent wrongful detention for nearly two weeks—during which he was assaulted by other inmates and sustained permanent injuries—illustrates the grave outcomes that can stem from the misuse of or overreliance on FRT: physical, economic, psychological, and reputational damage.
Harmful information processing in FRT encompasses a range of practices that can infringe on privacy, lead to discrimination, and cause other negative outcomes for individuals. Here are some examples illustrating these concerns:
- Racial and Gender Bias: Facial recognition systems have been shown to have higher error rates for women, people of color, the elderly, and children due to the biased datasets on which they were trained. This can lead to misidentification and wrongful accusations, disproportionately affecting marginalized groups.
- Unconsented Data Collection: Collecting and processing facial recognition data without individuals’ informed consent, often done in public spaces or via social media, violates privacy rights and personal autonomy.
- Invasive Surveillance: Governments and corporations use FRT for mass surveillance, tracking individuals’ movements and activities without their consent. This pervasive monitoring is a significant intrusion into personal privacy and can chill freedom of expression and assembly.
- Data Security Vulnerabilities: Storing sensitive facial recognition data without adequate security measures exposes individuals to risks of data breaches. Leaked biometric data is irreplaceable, and its compromise can have lifelong implications for the affected individuals.
- Misuse in Law Enforcement: Law enforcement agencies’ use of facial recognition can lead to wrongful arrests and detentions. There have been documented cases where individuals were mistakenly identified as suspects in crimes they did not commit, based on flawed FRT matches.
- Automated Decision-Making Errors: Decisions made based on FRT, such as eligibility for services, employment, or access to events, can be flawed due to inaccuracies in the technology. These automated decisions can be difficult to appeal, leaving individuals without recourse.
- Profiling and Targeting: The use of FRT for profiling individuals or groups for targeted advertising, political manipulation, or social scoring systems, without transparency or accountability, raises ethical concerns about manipulation and control.
- Social Exclusion and Discrimination: The implementation of FRT can exacerbate social divisions by reinforcing stereotypes and excluding or penalizing individuals based on their appearance. For example, access to places or services might be denied based on biased or incorrect data processing.
- Psychological Impact: The knowledge that one is constantly being watched and analyzed by FRT can have a chilling effect on behavior and freedom of movement, leading to psychological stress and a sense of loss of autonomy.
- Lack of Effective Redress Mechanisms: Individuals affected by harmful information processing through FRT often have limited options for redress due to the opaque nature of the technology and the entities using it. This lack of accountability and transparency makes it challenging for individuals to challenge inaccuracies or misuse.
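Several of the harms above, biased error rates in particular, become concrete in even a simple audit. The sketch below is purely illustrative (the records, group labels, and accuracy figures are hypothetical, not drawn from any cited study); it shows how a headline "overall accuracy" number can mask large per-group gaps:

```python
from collections import defaultdict

# Hypothetical (match_correct, demographic_group) records from an FRT
# verification audit -- illustrative data only, not real benchmark results.
results = [
    (True, "group_a"), (True, "group_a"), (True, "group_a"), (False, "group_a"),
    (True, "group_b"), (False, "group_b"), (False, "group_b"), (True, "group_b"),
]

def accuracy_by_group(records):
    """Return per-group accuracy: correct matches / total attempts."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ok, group in records:
        total[group] += 1
        if ok:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

rates = accuracy_by_group(results)        # per-group accuracy
overall = sum(ok for ok, _ in results) / len(results)  # aggregate accuracy
```

Here the aggregate figure (62.5%) hides the fact that one group is misidentified twice as often as the other, which is exactly why disaggregated reporting of FRT accuracy matters.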

Regulation of FRT
On December 19, 2023, the Federal Trade Commission (FTC) announced an action against Rite Aid for unfair practices associated with its use of an FRT surveillance system to deter theft in its stores, and for violating a prior 2010 order requiring a comprehensive information security program and document retention for vendor management.5 This is the first time the FTC has brought an enforcement action against a company for using AI in a biased and unfair manner.
In a new report, the U.S. National Academies of Sciences, Engineering, and Medicine urge the president to issue an executive order developing guidelines for federal agencies on “the appropriate use of facial recognition technology” that take into account “both equity concerns and the protection of privacy and civil liberties,” and recommend that the National Institute of Standards and Technology (NIST) outline standards and minimum requirements for FRT depending on the application area.6 The report highlights areas of concern when FRT is applied broadly and without safeguards, allowing the technology “to create detailed records of people’s movement and activities and block targeted individuals from participation in public life.” Not surprisingly, the report concluded that FRT intersects with equity and race in several ways.7

The report also recommends that Congress consider several legislative actions regarding facial recognition technology, including implementing storage limits for facial images and templates, requiring training and certification for operators and decision-makers in fields like law enforcement, and enacting either a federal privacy law specific to facial recognition or broader privacy legislation addressing commercial practices that affect privacy. Additionally, it suggests addressing concerns related to surveillance, harassment, and blackmail.
Conclusion
In conclusion, while facial recognition technology offers numerous benefits, it’s crucial to balance these advancements with the ethical considerations and potential risks to privacy and human rights. Robust regulation, transparency, and responsible use are key to ensuring these technologies are used in a way that respects individual freedoms and prevents harm.
- Author Unknown, Facial Recognition Market to Reach $12.67 Billion by 2028, Help Net Security (Mar. 10, 2022), https://www.helpnetsecurity.com/2022/03/10/facial-recognition-market-2028/. ↩︎
- S. Lohr, Facial Recognition Is Accurate, if You’re a White Guy, N.Y. Times (Feb. 9, 2018), https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html. ↩︎
- J. Doe, Face Verification Subject to Varying Age, Ethnicity, and Gender Demographics Using Deep Learning, 10 J. Data Mining Genomics & Proteomics 323 (2019), available at https://www.hilarispublisher.com/open-access/face-verification-subject-to-varying-age-ethnicity-and-genderdemographics-using-deep-learning-2155-6180-1000323.pdf. ↩︎
- S. Boucher, Man Jailed, Raped, and Beaten After False Facial Recognition Match, $10M Lawsuit Alleges, Vice (Date Unknown), https://www.vice.com/en/article/3akekk/man-jailed-raped-and-beaten-after-false-facial-recognition-match-dollar10m-lawsuit-alleges. ↩︎
- Federal Trade Commission, Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology Without Adequate Notice or Consent (Dec. 19, 2023), https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without. ↩︎
- National Academies of Sciences, Engineering, and Medicine, Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance, National Academies Press (2024), available at https://nap.nationalacademies.org/catalog/27397/facial-recognition-technology-current-capabilities-future-prospects-and-governance. ↩︎
- National Academies of Sciences, Engineering, and Medicine, Advances in Facial Recognition Technology Have Outpaced Laws, Regulations; New Report Recommends Federal Government Take Action on Privacy, Equity, and Civil Liberties Concerns, National Academies (Jan. 2024), https://www.nationalacademies.org/news/2024/01/advances-in-facial-recognition-technology-have-outpaced-laws-regulations-new-report-recommends-federal-government-take-action-on-privacy-equity-and-civil-liberties-concerns. ↩︎

