This is an online lesson suitable for college-level (or potentially high school) courses in Information Science, Computer Science, Science & Technology Studies (STS), Information Technology, Sociology, Criminology, Media Studies, Public Policy, Law, Urban Planning, Ethnic Studies, and Applied Ethics. It includes videos, readings, and suggested discussion questions and assessments. It works well in a flipped-classroom model, where an in-person or synchronous online session can discuss the videos and readings. This online lesson is part of the Emergency ESC initiative.
About This Lesson
The lesson consists of a case study of Detroit’s Project Green Light, a new city-wide police surveillance system that involves automated facial recognition, real-time police monitoring, very-high-resolution imagery, cameras indoors on private property, a paid “priority” response system, and other distinctive features.
Update: As of June 16, 2020, local news media are reporting allegations that Project Green Light has been used on crowds during #BLM protests to identify peaceful protesters who are not social distancing and to charge them with violating the governor's health order. In Michigan this carries up to a $1,000 fine for a first offense. Protesters also allege that Project Green Light is being used to identify peaceful protesters whose immigration status is in question and to pass this information to the US Department of Homeland Security.
Update: As of June 24, 2020, a feature article has been published telling the story of a Detroit man thought to be the first American wrongfully arrested based on a flawed match from a facial recognition algorithm. It has been added to the required readings.
Video materials include interviews with an expert on the system, an employee at a participating business, and a business owner. We estimate that the videos and the two required readings take 75 minutes; this does not include discussion time.
Learning Objectives
- Identify risks of state surveillance systems and automated facial recognition.
- Recognize some relationships between race, inequality, and technology.
Video Segments
Part 1: Introduction — Prof. Christian Sandvig (3 mins)
Part 2: Tawana Petty Interview (36 mins)
Part 3: Bella Interview (8 mins)
Part 4: PJ Interview (12 mins)
Required Readings
- Harmon, A. (2019, July 8). As Cameras Track Detroit’s Residents, A Debate Ensues Over Racial Bias. The New York Times. https://www.nytimes.com/2019/07/08/us/detroit-facial-recognition-cameras.html (7 minutes)
- Hill, K. (2020, June 24). Wrongfully Accused By an Algorithm. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html (9 minutes)
Suggested Additional Readings
Arranged in order of increasing difficulty.
- Campbell, Eric T.; Howell, Shea; House, Gloria; & Petty, Tawana (eds.). (2019, August). Special Issue: Detroiters want to be seen, not watched. Riverwise. The Riverwise Collective. https://detroitcommunitytech.org/system/tdf/librarypdfs/2019-2206_Riverwise-Surveillance.pdf?file=1&type=node&id=80&force=
- Gender Shades website: http://gendershades.org/
- Keyes, Os. (2019). Our Face Recognition Nightmare Began Decades Ago. Now It's Expanding. Vice Tech / Motherboard. https://www.vice.com/en_us/article/d3a7ym/trump-didnt-create-our-face-recognition-nightmare-hes-just-expanding-it
- Stark, Luke. (2019). Facial Recognition is the Plutonium of AI. Crossroads (XRDS): The ACM Magazine for Students 25(3): 50-55. https://dl.acm.org/doi/pdf/10.1145/3313129
- Buolamwini, Joy & Gebru, Timnit. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81: 1-15. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
- Krishnapriya, K.; Vangara, K.; King, M. C.; Albiero, V.; & Bowyer, K. (2019). Characterizing the Variability in Face Recognition Accuracy Relative to Race. Paper presented to the Bias Estimation in Face Analytics (BEFA) Workshop, Computer Vision and Pattern Recognition (CVPR). https://arxiv.org/pdf/1904.07325
The above readings emphasize artificial intelligence, computing, and facial recognition, but with a different set of readings this case study could be used in a variety of other ways. For example, for additional historical context, consider:
- Browne, Simone. (2015). Dark Matters: On the Surveillance of Blackness. Duke University Press. https://www.dukeupress.edu/dark-matters (book)
Warm-Up Activity: Assess the Risks
A small-group activity.
Brainstorm a list of all of the relevant groups affected by Project Green Light mentioned in the videos and readings. (For example: business owners, police, criminals, taxpayers, employees who work at Project Green Light locations, crime victims, … ) It may help to make the groups as specific as possible (e.g., business owners who participate vs. those who can't afford to participate; residents vs. non-residents; women with dark skin [see the Gender Shades article above]). Next, make a list of the potential risks/harms that might affect each group. Risks/harms may be tangible and measurable (false arrest, wasted money) but also less tangible (loss of privacy, a diminished feeling of safety/security).
Discussion Questions
These can also be used as short essay prompts.
- What ethical issues are raised by the police use of facial recognition to identify peaceful protesters, as referenced by Tawana?
- Detroit is about 80% black. What role should race play in the public discussion of Project Green Light?
- How does the paid “priority” system change your evaluation of Project Green Light? Why?
- Is it significant that Project Green Light places cameras indoors on private property? Why or why not?
- Business owner PJ participates in Project Green Light but declines to use a flashing green light. Why? How much does the project depend on actual flashing green lights?
- PJ describes a situation where Project Green Light is used to catch a criminal. Which of Project Green Light's features were needed? That is, to what degree are a centralized network connected to police, very-high-resolution imagery, high-speed fiber-optic network connections, automated facial recognition, real-time police monitoring, and other Project Green Light features necessary to achieve the public policy goals of the system?
- According to PJ and Bella, if Project Green Light includes automated facial recognition, it worries them in a way that other surveillance does not. Do you agree? Which concerns are specific to automated facial recognition and which are not?
- Project Green Light was initially implemented with little oversight and few rules or procedures. Is it possible to write laws and/or procedures that would resolve public concerns about this system?
- At least $6 million has been spent on Project Green Light to date. Tawana proposed that we evaluate this system not just in terms of its own goals ("Does it work?") but in comparison to what the same amount of money, spent some other way, might have achieved. In your judgment, would Project Green Light do well on this kind of assessment?
- If facial recognition technology improves dramatically, does this reduce the potential risks of Project Green Light or worsen them? Why?
Update: In light of the June 16, 2020 reports described in the update above (the alleged use of Project Green Light to identify peaceful #BLM protesters for health-order violations and to pass immigration information to the US Department of Homeland Security), an additional discussion question: Do these scenarios affect your assessment of this system? Why or why not?
Note: None of these discussion questions asks whether Project Green Light achieves its own stated goals, that is, how effective it is at reducing crime. As far as we can determine, there is not yet enough available information to answer this question, but people in Detroit and other cities still have to move ahead with decisions about these systems.
Credits
This material was originally produced for “Applied Data Science 503: Data Science Ethics” at the University of Michigan. It was released publicly as part of our Emergency ESC initiative. Many thanks to Tawana Petty, PJ, and Bella for agreeing to share this material publicly. Additional thanks to Megan Rim, Amanda Stanhaus, Yuchen Chen, Divya Ramesh, Matthew Firsten, Cy Abdelnour, and Nikki Muench.