Algorithmic Reparation Workshop
Ifeoma Ajunwa, Kendra Albert, Anhong Guo, Sarah T. Hamid, Alex Hanna, Anna Lauren Hoffmann, Mutale Nkonde & Afsaneh Rigot
Machine learning has an inequality problem that is now widespread and well known. The field of “fair machine learning” (FML) has emerged in response, positing mathematical correctives to account for and remove direct and proxy indicators of protected class attributes (race, class, gender, disability, etc.) within machine learning models. Although FML predominates and continues to thrive, its effects have been wanting, and thinkers are beginning to challenge the “fairness” value standard (Birhane and Guest 2020; Bui and Noble 2020; Davis et al. 2021; Hanna et al. 2020; Hoffmann 2019; Mohamed et al. 2020; So et al. 2022).
Fairness models seek to erase demographic differences and achieve unbiased outputs. Such aspirational neutrality is intrinsically flawed, ignoring the ways history, identity, and social systems entwine. In this way, “fairness” approximates colorblind racism and its gendered, heteronormative, and ableist cousins.
Algorithmic Reparation is a response and alternative to FML, one that centralizes rather than obviates levers of inequality in machine learning systems. Rooted in theories of Intersectionality (Cho et al. 2013; Crenshaw 1990; Collins 2002, 2019) and movements for reparation (Bittker 1972; Coates 2014; Henry 2009), this approach is committed to empowerment at the margins and systemic redress. The approach was first introduced in an article published in Big Data & Society (Davis, Williams, and Yang 2021). We invite participants to begin putting algorithmic reparation into action at a 2-day workshop at the University of Michigan, September 30–October 1, 2022.
UNC Chapel Hill, School of Law
Ifeoma Ajunwa (@iajunwa) joined Carolina Law in January 2021 as an Associate Professor of Law with tenure. She is also the Founding Director of the AI Decision-Making Research Program. Professor Ajunwa has been a Faculty Associate at the Berkman Klein Center at Harvard Law School since 2017. Professor Ajunwa’s work is published or forthcoming in high-impact-factor law reviews of general interest as well as the top law journals for specialty areas such as anti-discrimination law (Harvard Civil Rights-Civil Liberties Law Review), employment and labor law (Berkeley Journal of Employment and Labor Law), and law and technology (Harvard Journal of Law and Technology). She has published op-eds in the New York Times, Washington Post, and The Atlantic, among others, and her research has been featured in major media outlets such as the New York Times, the Wall Street Journal, CNN, the Guardian, the BBC, and NPR. In 2020, she testified before the U.S. Congressional Committee on Education and Labor, and she has spoken before governmental agencies such as the Consumer Financial Protection Bureau (CFPB) and the Equal Employment Opportunity Commission (EEOC).
Harvard University, Cyber Law Clinic
Kendra Albert (@KendraSerra) is a public interest technology lawyer with a special interest in computer security law and freedom of expression. They serve as a clinical instructor at the Cyberlaw Clinic at Harvard Law School, where they teach students to practice law by working with pro bono clients. Kendra is also the founder and director of the Initiative for a Representative First Amendment.
They serve on the board of the ACLU of Massachusetts and the Tor Project, and provide support as a legal advisor for Hacking // Hustling. In their free time, Kendra enjoys giving away other people’s money, playing video games, and making people in power uncomfortable.
University of Michigan, Computer Science & Engineering
Anhong Guo (@AnhongGuo) is an Assistant Professor in Computer Science & Engineering at the University of Michigan. Anhong completed his Ph.D. in the Human-Computer Interaction Institute, Carnegie Mellon University. He is also an inaugural Snap Inc. Research Fellow, a Swartz Innovation Fellow for Entrepreneurship, and a Forbes’ 30 Under 30 Scientist. Anhong has published in many top academic conferences and journals on interface technologies, wearable computing, accessibility and computer vision. Before CMU, he received his Master’s in HCI from Georgia Tech, and Bachelor’s in Electronic Information Engineering from BUPT. He has also worked in the Ability and Intelligent User Experiences groups in Microsoft Research, the HCI group of Snap Research, the Accessibility Engineering team at Google, and the Mobile Innovation Center of SAP America.
Carceral Tech Resistance Network
Sarah T. Hamid (@tsnvaa) is an abolitionist and organizer working in the Pacific Northwest. She leads the policing technology campaign at the Carceral Tech Resistance Network, an archiving and knowledge-sharing network for organizers building community defense against the design, roll-out, and experimentation of carceral technologies. Sarah co-founded the inside/outside research collaboration the Prison Tech Research Group, sits on the board of the Lucy Parsons Lab in Chicago, and helped create the #8toAbolition campaign: a police and prison abolition resource built during the 2020 uprisings against state violence.
Distributed AI Research Institute/DAIR
Dr. Alex Hanna (@alexhanna) is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, her work centers on the data used in new computational technologies, and the ways in which these data exacerbate racial, gender, and class inequality. She also works in the area of social movements, focusing on the dynamics of anti-racist campus protest in the US and Canada.
Dr. Hanna has published widely in top-tier venues across the social sciences, including the journals Mobilization, American Behavioral Scientist, and Big Data & Society, and top-tier computer science conferences such as CSCW, FAccT, and NeurIPS. Dr. Hanna serves as a co-chair of Sociologists for Trans Justice, as a Senior Fellow at the Center for Applied Transgender Studies, and sits on the advisory board for the Human Rights Data Analysis Group and the Scholars Council for the UCLA Center for Critical Internet Inquiry. FastCompany included Dr. Hanna as part of their 2021 Queer 50, and she has been featured in the Cal Academy of Sciences New Science exhibit, which highlights queer and trans scientists of color. She holds a BS in Computer Science and Mathematics and a BA in Sociology from Purdue University, and an MS and a PhD in Sociology from the University of Wisconsin-Madison.
University of Washington, Information School
Anna Lauren Hoffmann (@annaeveryday) is currently an Assistant Professor at The Information School at the University of Washington, where she is also co-founder and co-director of the UW iSchool’s AfterLab. She is also a senior fellow with the Center for Applied Transgender Studies and affiliate faculty with the UW iSchool’s DataLab. Prior to joining the UW iSchool, she was a postdoctoral scholar at the UC Berkeley School of Information and received her PhD from the School of Information Studies at the University of Wisconsin-Milwaukee. Dr. Hoffmann’s work has appeared in academic venues such as New Media & Society, Review of Communication, JASIST, and Information, Communication & Society, and her research has been supported by the National Science Foundation. In addition, her public writing has appeared in The Guardian, Slate, The Seattle Times, and The Los Angeles Review of Books. She lives in Seattle, WA with her wife and two kids.
AI for the People
Mutale Nkonde (@Mutalenkonde) started her career as a broadcast journalist before transitioning into the world of tech. She currently sits on the TikTok Content Moderation Advisory Board, advises the Centre for Media, Technology and Democracy at McGill University, and is a key constituent for the UN 3C Table on AI. She is the founding director of AI for the People, a nonprofit communications firm that uses journalism, arts, and culture to advance racial justice in tech. In 2021, AI for the People launched its biometric justice vertical by producing a film, in partnership with Amnesty International, supporting a ban on facial recognition in New York State. Nkonde writes widely on the racial impacts of advanced technical systems, is a widely sought-after media commentator, and seeks to create a safe space for Black technologists who feel marginalized within the wider tech sector. She also led a team that introduced the Algorithmic Accountability Act, the Deepfakes Accountability Act, and the No Biometric Barriers Act to the US House of Representatives in 2019.
Belfer Center for Science and International Affairs
Afsaneh Rigot is an analyst, researcher, and advocate covering issues of law, technology, LGBTQ rights, refugee rights, and human rights. She is a senior researcher at ARTICLE 19 focusing on Middle East and North African (MENA) human rights issues and international corporate responsibility, and a 2020–2021 fellow at the Technology and Public Purpose (TAPP) project at the Harvard Kennedy School’s Belfer Center for Science and International Affairs. She is also an advisor at the Cyberlaw Clinic at Harvard. Her broader work and research pose questions about the effects of technology in contexts it was not designed for and the effects of Western-centrism on vulnerable and/or hard-to-reach communities. It also examines how power-holding corporations can be constructively engaged.
At ARTICLE 19, Afsaneh continues to lead cross-country research on the impact of technology on LGBTQ people in the MENA region, uncovering how police and states use technology to target, harass, and arrest members of the community based on their identity. Independently, she has conducted the first research on the use of digital evidence and legal frameworks in the prosecution of LGBTQ people in courts: Digital Crime Scenes: The Role of Digital Evidence in the Persecution of LGBTQ People in Egypt, Lebanon and Tunisia. The report covers the role of technology companies and builders, and how tech can be built to mitigate these human rights abuses.
During her TAPP fellowship, Afsaneh developed the first iteration of her methodology and concept, drawing on her experience and knowledge of implementing company change centered on those most impacted. Her Design From the Margins (DFM) report outlines a design process that centers the most impacted and marginalized users from ideation to production, arguing not only that this can and must be done, but also that it is highly beneficial for all users and companies.
This event is co-hosted by the Digital Studies Institute and the Center for Ethics, Society, & Computing at the University of Michigan, the Humanising Machine Intelligence Project at the Australian National University, and the Tech Ethics Center at the University of Notre Dame. The workshop will combine efforts from social scientists, computer scientists, activist leaders, and industry representatives. It includes invited panel presentations and hands-on exercises, featuring Algowritten, TheirTube, and others, that attend to machine learning across domains and within social and institutional contexts.
Co-Directors: Apryl Williams (email@example.com), Jenny Davis (firstname.lastname@example.org)