Late News

University of Michigan study advocates ban on facial recognition in schools

A newly published study by University of Michigan researchers finds that facial recognition technology in schools presents multiple problems and offers limited efficacy. Led by Shobita Parthasarathy, director of the university’s Science, Technology, and Public Policy (STPP) program, the research argues that the technology isn’t suited for security purposes and can actively promote racial discrimination, normalize surveillance, and erode privacy, all while institutionalizing inaccuracy and marginalizing non-conforming students.

The study follows the New York legislature’s passage of a moratorium on the use of facial recognition and other forms of biometric identification in schools until 2022. The bill, which came in response to the launch of facial recognition by the Lockport City School District, was among the first in the nation to explicitly regulate or ban use of the technology in schools. That development came after companies including Amazon, IBM, and Microsoft halted or ended the sale of facial recognition products in response to the first wave of Black Lives Matter protests in the U.S.

The University of Michigan study — a part of STPP’s Technology Assessment Project — employs an analogical case comparison method, looking at previous uses of security technology (CCTV cameras, metal detectors, and biometric technologies) to anticipate the implications of facial recognition. While its conclusions aren’t novel, it takes a strong stance against commercial products it asserts could harm students and educators far more than they help them.

For instance, the coauthors claim that facial recognition would disproportionately target and discriminate against people of color, particularly Black and Latinx communities. At the same time, they say that facial recognition would create new rules for dress and appearance and punish students who don’t fit into narrow standards of acceptability, causing problems whenever a school relies on it to automate activities like taking attendance or purchasing lunch.

Indeed, countless studies have shown facial recognition to be susceptible to bias. A paper last fall by University of Colorado, Boulder researchers showed that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Separate benchmarks of major vendors’ systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) suggest that facial recognition technology exhibits racial and gender bias and can be wildly inaccurate, misclassifying people upwards of 96% of the time.

Facial recognition will take existing racial biases and make them worse, causing more surveillance and humiliation of Black and brown students, the University of Michigan researchers argue. And it will make surveillance a part of everyday life, laying the groundwork for expansion to other uses. Lockport portends this: while the district’s privacy policy stated that its facial recognition watchlist wouldn’t include students and would only cover non-students deemed a threat, the district superintendent ultimately oversaw which individuals were added to the system, and the school board president couldn’t guarantee student photos would never be included for disciplinary reasons.

The University of Michigan study’s coauthors also maintain that facial recognition in schools will create new kinds of student data that will be bought and sold by private corporations. Data collected for one purpose will be used in other ways, making it impossible for students to provide full and informed consent to data collection or to control their data. A legal remedy was proposed last week by Sen. Jeff Merkley (D-OR) and Sen. Bernie Sanders (I-VT) in the National Biometric Information Privacy Act, which would make it illegal for businesses to collect, purchase, or trade biometric information obtained from customers without permission. But few protections exist in most U.S. states as of now.

For these reasons, the researchers recommend a nationwide ban on facial recognition in schools. However, they provide policy recommendations for schools that deem the technology “absolutely necessary.” Among other steps, they propose a five-year moratorium on the use of facial recognition technology in schools; convening a national advisory committee to investigate facial recognition and its implications; establishing technology offices to help schools navigate the technical, social, ethical, and racial challenges of facial recognition; and deleting facial recognition data at the end of each academic year or when students graduate or leave the district.

A number of efforts to use facial recognition systems within schools have been met with resistance from parents, students, alumni, community members, and lawmakers alike. At the college level, a media firestorm erupted after a University of Colorado professor was revealed to have secretly photographed thousands of students, employees, and visitors on public sidewalks for a military anti-terrorism project. University of California San Diego researchers admitted to studying footage of students’ facial expressions to predict engagement levels. And last year, the University of California Los Angeles proposed using facial recognition software for security surveillance as part of a larger campus security policy.
