How is facial recognition technology used in Canada’s public sector?

Facial recognition technology (FRT) is increasingly used across Canada’s public sector, including by law enforcement agencies, immigration and licensing authorities, school boards, and casinos, to assist in criminal investigations, validate individuals’ identities, and combat fraud. Identity verification via facial recognition is becoming the norm rather than the exception.

FRT poses serious risks of individual and social harm because of its documented inaccuracies, its bias problems, its intrusive nature, and its capacity to enable widespread surveillance. Canada currently lacks a clear and comprehensive legislative framework to govern the use of FRT; left unchecked, this gap will further erode individuals’ civil liberties and human rights. Unregulated FRT also threatens standards of transparency and accountability across the public sector. The case studies below illustrate these concerns.

Case Studies

Border Control and Immigration

Government agencies currently use FRT to manage the flow of individuals crossing Canada’s border in the interest of national security. Primary inspection kiosks equipped with FRT scanners have been widely deployed at Canadian airports to screen travellers as they enter and leave the country. Even before travellers arrive at the border, Immigration, Refugees and Citizenship Canada uses FRT-based identity validation to assess the admissibility of prospective permanent and temporary residents.

Identity Documents

Many provincial and federal authorities require individuals to submit personal information for FRT analysis in order to obtain or renew identification documents such as passports and driver’s licences. Individuals’ personal information, including facial images and details such as age, sex, and gender, is then stored in mass data repositories for future reference. In this context, FRT is used to combat identity theft and fraud by identifying individuals who may apply for multiple documents under different names.

Police Agencies

Police agencies across Canada increasingly use FRT analysis in their operations, including criminal investigations. Examples include police accessing driver’s licence databases and using photographs taken by members of the community to identify potential suspects. Police in Toronto, Calgary, and Ottawa have tested or used NeoFace Reveal, FRT software that compares images from mugshot databases against individuals linked to investigations. FRT has also been used in child sexual exploitation and human trafficking investigations.

In 2021, a joint investigation by the Privacy Commissioner of Canada and its provincial counterparts revealed that the Royal Canadian Mounted Police (RCMP) contravened Canada’s privacy legislation through its use of Clearview AI’s controversial FRT software. A related investigation found that Clearview AI had violated federal and provincial private-sector privacy laws by scraping billions of images of individuals from the internet without their consent. The RCMP’s use of Clearview’s software effectively amounted to mass surveillance and resulted in millions of Canadians being placed in a “24/7” police line-up.

Schools

The COVID-19 pandemic led many post-secondary institutions to conduct their exams online. To confirm students’ identities and monitor their actions during online assessments, many institutions subscribed to exam proctoring companies that used FRT to authenticate students via live video feeds during examinations. Worryingly, investigations revealed that students’ personal information was then used for an unauthorized secondary purpose: improving the companies’ artificial intelligence tools, without the necessary consent from students.

Casinos

Casinos in Ontario have used FRT since the early 2000s to detect known or suspected fraudulent behaviour. More recently, Ontario and British Columbia adopted voluntary self-exclusion programs, underpinned by FRT software, to facilitate the removal of individuals with gambling addictions if they are identified entering a casino. In this context, however, the FRT systems deployed have proved unreliable and have failed to identify all individuals enrolled in the programs.

Why should we be concerned about the use of facial recognition technology in the public sector?

The use of FRT in Canada’s public sector can contribute to individual, collective, and social harm through mass surveillance and monitoring. While some of the uses detailed above may benefit society, a comprehensive regulatory framework is required to ensure that the technology is used responsibly, that its use is properly disclosed to individuals, and that consent is obtained beforehand. Rules governing the use and storage of information gathered through FRT are also necessary to protect individuals’ privacy and personal information.

“Like all technologies, FRT can, if used responsibly, offer significant benefits to society. However, it can also be extremely intrusive, enable widespread surveillance, provide biased results and erode human rights, including the right to participate freely, without surveillance, in democratic life.”

Daniel Therrien, then Privacy Commissioner of Canada
Testimony before the ETHI Committee

The adoption of facial recognition technology in Canada’s public sector, as detailed above, raises the following concerns:

Risk to Rights

As a monitoring and identification tool, FRT risks infringing on fundamental rights closely linked to equality and privacy, including freedom of association and assembly, freedom of expression, freedom from discrimination, the right to be free from unreasonable search and seizure, the right to be presumed innocent, and the right to liberty and security. Requiring individuals to submit to FRT analysis in order to obtain identification documents and other essential services also raises the issue of meaningful consent when no alternative means of identity verification is offered.

Racial Bias & Equity

Many FRT models have been shown to be biased, in part because of a notable lack of demographic diversity in the images used to train the software’s learning algorithms. As a result, FRT is better trained to identify individuals with White or White-passing features. Studies have demonstrated that FRT systems vary in effectiveness and are less successful at identifying members of equity-deserving communities, including Indigenous, Black, and racialized individuals, as well as women, making such systems unreliable for real-world use. Moreover, even if these misidentification issues were solved, FRT would still pose serious harm because it consolidates and perfects surveillance, leading to even greater privacy risks and inequity.

Accuracy

Many other factors affect the accuracy of FRT systems, including a system’s algorithmic design, its training datasets, image quality, and environmental conditions such as lighting and weather. There are no industry-wide standards governing the confidence threshold for matches made by FRT systems, which increases the risk of false positive identifications when low thresholds are used (see the sketch below). The real-life implications of false identifications are heightened when FRT analysis is used in immigration contexts or by law enforcement during a criminal investigation. In such scenarios, misidentification can lead to false arrest and detention, or to refugee and immigration applications being denied and individuals being expelled from Canada.
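To make the threshold point concrete, here is a minimal illustrative sketch in Python. The names, scores, and threshold values are entirely hypothetical and are not drawn from any real FRT system; real systems compute similarity scores from face embeddings, but the threshold trade-off works the same way.

```python
# Toy face-matching decision rule, for illustration only.
# All scores and names below are hypothetical.

def is_match(similarity_score: float, threshold: float) -> bool:
    """Declare a 'match' whenever the comparison score clears the threshold."""
    return similarity_score >= threshold

# Hypothetical similarity scores between one probe image and three gallery
# entries (1.0 = identical, 0.0 = no resemblance). Only the first entry is
# actually the same person as the probe.
scores = {"same_person": 0.91, "lookalike": 0.78, "stranger": 0.55}

for threshold in (0.9, 0.7, 0.5):
    flagged = [name for name, s in scores.items() if is_match(s, threshold)]
    print(f"threshold={threshold}: flagged {flagged}")

# A threshold of 0.9 flags only the true match; a threshold of 0.5 flags
# even the stranger. Every flag on a non-matching person is a false positive.
```

Because no standard fixes where that threshold must sit, two agencies running the same software with different settings can produce very different false positive rates.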

Function Creep

FRT gives rise to function creep, whereby information collected for one purpose is used for another. This practice is particularly concerning in high-risk environments, including criminal investigations, surveillance activities, and border control. For example, in Canada, sensitive images of individuals’ faces initially taken for another purpose, such as a driver’s licence photo or a photograph taken by a member of the community, are being used in criminal investigations.

Chilling Effects

Digital surveillance facilitated by invasive emerging technologies, including artificial intelligence, machine learning, and facial recognition analysis, gives rise to “chilling effects.” Chilling effects occur when individuals are deterred or discouraged from speaking and acting freely, adapting their behaviours to fit social norms or engaging in self-censorship. In turn, this erodes individuals’ civil liberties, including their right to privacy, freedom of speech, and other fundamental rights and freedoms. For example, increased surveillance of protesters, particularly those from racialized communities, such as participants in Black Lives Matter demonstrations, can affect individuals’ willingness to take part in such causes.

CCLA acknowledges the support of Microsoft Canada, which enables us to provide administrative support for the Coalition.