Our Mandate


The mandate of the Right2YourFace Coalition is to create a forum for mutual issue sharing and education regarding the ways that facial recognition technology (FRT) may impact our organizations and broader communities. The Right2YourFace Coalition seeks to ensure that there is a wide range of voices—speaking from a variety of perspectives—available to provide expert advice and guidance to policy makers grappling with this complex and dangerous technology. 

Our Objective


The Right2YourFace Coalition’s objective is to establish a working body of diverse stakeholders whose interests, and those of their communities, stand to be negatively impacted by the use of facial recognition technology in Canada. The Coalition aims to engage in advocacy activities, including policy submissions and joint letters to policy makers and the public. Ultimately, we seek to make recommendations on draft legislation to regulate the use of FRT and other biometric surveillance technologies in order to protect individuals’ privacy and human rights.

Facial recognition technology is a type of biometric recognition technology that uses artificial intelligence (AI) algorithms and other computational tools to ostensibly identify individuals based on their facial features.

It functions by extracting individuals’ biometric information from key facial features and comparing live images against stored biometric templates. Facial recognition software produces recommended matches when the level of similarity between images exceeds a set confidence threshold.
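The matching step described above can be sketched in a few lines of code. This is purely an illustrative toy, not any vendor’s actual pipeline: the tiny feature vectors, the threshold value, and the function names are all assumptions made for the example, and real systems derive their templates from deep neural networks.

```python
# Illustrative sketch only: a toy version of threshold-based template
# matching. The vectors, names, and threshold here are hypothetical.
import math

def cosine_similarity(a, b):
    """Similarity between two biometric templates (feature vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(live_template, stored_templates, threshold=0.9):
    """Return stored identities whose similarity to the live capture
    exceeds the confidence threshold."""
    return [
        identity
        for identity, template in stored_templates.items()
        if cosine_similarity(live_template, template) > threshold
    ]
```

Note that the threshold is a policy choice, not a technical given: lowering it produces more "matches" (and more misidentifications), while raising it produces fewer. This is one reason the same software can behave very differently across deployments.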

FRT is being increasingly deployed across a range of industries and sectors in Canada. It is being used to manage Canada’s borders – both to screen and identify travellers upon their entry into Canada and to confirm or deny immigration applications. Retail companies are using FRT to gather shoppers’ personal information, including information concerning gender, age, and ethnic origin, to develop detailed consumer profiles. Police forces across Canada have incorporated a variety of facial recognition technologies into their policing and surveillance techniques. FRT has even made its way into Canadian classrooms via online proctoring tools that use FRT to identify students.

Despite some potentially socially beneficial uses, FRT is a deeply flawed and highly invasive surveillance technology whose known social harms outweigh its potential benefits. Facial recognition technologies often operate with deep algorithmic biases. Their training datasets frequently lack diversity and are composed largely of White and/or White-passing individuals. As a result, studies have shown that FRT struggles to produce accurate results for Black women and people of colour, leading to misidentifications. Such errors can be particularly harmful when FRT is deployed in high-risk contexts such as criminal investigations and immigration applications.

Even if FRT could accurately identify people (and, in reality, its technical problems will likely be fixed to some degree over time), it would still pose many risks to civil liberties by facilitating mass surveillance, the invasion of privacy, and the over-policing of equity-deserving groups. FRT has had, and will continue to have, disproportionate impacts on marginalized communities, including people of colour, women, 2SLGBTQ+ people, elderly people, young people, individuals with physical disabilities, and other equity-deserving groups affected by systemic and intersecting forms of oppression.

In Canada, there is no specific and comprehensive national legislation directing the use of FRT in either the private or public sector.

Current federal and provincial privacy laws are unable to govern the development, deployment, and use of FRT because they have not been updated to address emerging technological challenges. While the law has remained stagnant, FRT is increasingly being adopted across many sectors. Absent legislative direction, courts have reached varying conclusions on issues surrounding its deployment and use by public safety authorities. The Government of Canada must act to ensure that individuals’ rights are safeguarded vis-à-vis this emerging technology.

Core Values

Equity

Privacy

Transparency

Human agency & democratic oversight

Equity

This group believes in equity in society. When technology is used without adequate legal safeguards, particularly in the contexts of identity authentication, surveillance, and information gathering, it often works against this goal. Its misuse poses disproportionate harm to equity-deserving groups, deepening inequalities and eroding fairness within our communities.

Privacy

This group believes that privacy is a fundamental right. Biometric recognition technology may be used in ways that are invasive, arbitrary, and irresponsible. Such use erodes individuals’ privacy rights, which are integral to a full range of rights in the Canadian Charter of Rights and Freedoms, including freedom of association, freedom of expression, freedom from unreasonable search and seizure, and equality rights.

Transparency

This group believes in transparency. For those biometric recognition purposes deemed socially responsible, the use of such technologies must be disclosed, and the ways in which individuals’ information is collected and used require explanation and the opportunity for meaningful, informed consent.

Human agency & democratic oversight

This group believes in human agency and democratic oversight for artificial intelligence systems. AI systems may be plagued by pervasive biases, inconsistencies, inaccuracies, and a limited ability to handle exceptions, all of which threaten the rights of their subjects. To ensure that individuals’ rights are not subverted, AI governance frameworks must incorporate standards that respect human autonomy and decision-making.