
DIMACS REU 2018

General Information

Student: Michael Yang
Office: CoRE Building, Room 442
School: Minerva Schools at KGI
E-mail: mwyang {at} my school's website
Project: Fairness in Machine Learning
Mentor: Prof. Anand Sarwate
Group Members: Jordan Trout, University of Maryland, Baltimore County
Priyanka Mohandas, Rutgers University

Introduction and Project Description

At a high level, I am working with Prof. Sarwate and his lab to understand bias in algorithms and how machine learning can be made fair. This task draws on an understanding of social and philosophical issues as well as technical ones. For the summer, our group's main goals are to come to a deep understanding of the existing technical notions of fairness, to replicate existing studies of fairness on the COMPAS dataset (arguably the dataset that catalyzed the whole field in the first place), and to see how analyses of the COMPAS dataset may be extended to a proprietary dataset on home loan approvals. We also plan on producing an interactive Jupyter notebook and Python library so that other researchers and practitioners may easily assess their ML algorithms for fairness.

More on the COMPAS dataset: In 2016, ProPublica found that COMPAS, a proprietary tool for scoring an individual's likelihood of recidivism, was biased against black individuals. However, the authors of COMPAS disputed this result. Could both parties be correct? It turns out that yes, they could. Computer scientists found that ProPublica and the COMPAS authors were using different technical notions of fairness and, moreover, that it was impossible to satisfy both notions simultaneously (Chouldechova; Different, non-technical source).
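To give a flavor of the incompatibility (my own compact paraphrase of Chouldechova's argument, with \(p\) a group's base rate of recidivism, PPV the positive predictive value, FPR the false positive rate, and FNR the false negative rate), one can show that

\[ \mathrm{FPR} = \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \left(1-\mathrm{FNR}\right). \]

So if two groups have different base rates \(p\), a score with equal PPV across groups (roughly the COMPAS authors' notion of fairness) cannot also have equal FPR and FNR across groups (roughly ProPublica's notion).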

In addition to my work with Prof. Sarwate, I'm interested in causality in machine learning (which is highly related to fair ML), computational complexity (less related to fair ML), and programming language theory (even less related to fair ML).


Weekly Log

Week 1:
In this first week, I met with Prof. Sarwate and a few of the other lab members for the first time. I printed out many technical papers to read, but so far have only read a survey report on different technical notions of fairness. I also downloaded the HMDA housing loan dataset and made sure that I could load the data and perform basic statistical operations on it. This dataset poses a challenge for analysis since it contains several orders of magnitude more observations than the COMPAS dataset and, furthermore, lacks "ground truth" about loan applicants: it does not record whether approved applicants actually paid back their loans, and if a loan is denied to somebody, we will never know whether they would have paid it back.
I feel somewhat nervous about my ability to make a contribution in the field. A while back, I tweeted at somebody who is well-published in fair ML and asked what the next most important topic to work on is. This is how he responded: I must remind myself that learning is also good, and I have a lot of technical topics to tide myself over before inspiration strikes (maybe).
Week 2:
This week, I:
  • loaded a random sample of the HMDA dataset using pandas's ability to read CSVs in chunks (a sketch of the approach follows this list). Others in my group can now use this code to load a sample of the data and run their own experiments.
  • used statistical matching methods on the COMPAS dataset in order to replicate ProPublica's analysis from the angle of (individual) counterfactual fairness. This took a while because of dependency hell in R and unresolved issues in the packages that I was using.
  • read a good smattering of papers [1-7]. I would particularly recommend [7] for somebody who's new to the field.
  • selected papers to integrate into our fairness overview. When our group met with the professor again, I got a clearer sense of the work product he envisions. As I now understand it, our task is not only to write out the different technical notions of fairness in a common notation, but also to write out mathematical pseudo-code for implementing and assessing these notions on a dataset (with the eventual hope of a Python library to handle everything, though my other two group members will be responsible for that).
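For reference, the chunked sampling looks roughly like this (a minimal sketch; the file name, chunk size, and sampling fraction are placeholders rather than our actual settings):

    import pandas as pd

    def sample_csv(path, frac=0.01, chunksize=100_000, seed=0):
        """Read a large CSV in chunks, keeping a random fraction of each chunk."""
        samples = []
        for chunk in pd.read_csv(path, chunksize=chunksize):
            samples.append(chunk.sample(frac=frac, random_state=seed))
        return pd.concat(samples, ignore_index=True)

    # e.g., hmda_sample = sample_csv("hmda_lar.csv")

This keeps memory usage bounded by the chunk size while still producing a roughly uniform sample of the full file.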
Next week, I plan on:
  • writing out at least a third of the papers' fairness definitions in the common notation, along with pseudo-code to implement them (a rough sketch of what I mean follows this list).
  • reading the papers on multi-accuracy and fairness gerrymandering.
  • assessing counterfactual fairness (as I did this past week with COMPAS) on two metro areas in the HMDA dataset, one identified by Reveal [5] as highly biased and the other as unbiased.
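To give a flavor of the kind of "pseudo-code" I have in mind, here is a hedged sketch of two observational notions evaluated on a binary classifier's predictions (the column names are made up, and real definitions need more care about thresholds, confidence intervals, and sample sizes):

    import pandas as pd

    def demographic_parity_gap(df, pred_col, group_col):
        """Largest difference in positive-prediction rates between groups (0 = parity)."""
        rates = df.groupby(group_col)[pred_col].mean()
        return rates.max() - rates.min()

    def equalized_odds_gaps(df, pred_col, label_col, group_col):
        """Per-outcome gaps in positive-prediction rates across groups."""
        gaps = {}
        for label, subset in df.groupby(label_col):
            rates = subset.groupby(group_col)[pred_col].mean()
            gaps[label] = rates.max() - rates.min()
        return gaps  # gaps[1] is the TPR gap, gaps[0] the FPR gap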
[1]
M. Wattenberg, F. Viegas, and M. Hardt, “Attack discrimination with smarter machine learning.” [Online]. Available: http://research.google.com/bigpicture/attacking-discrimination-in-ml/. [Accessed: 30-May-2018].
[2]
M. Hardt, “Equality of Opportunity in Machine Learning,” Google AI Blog, Oct. 2016.
[3]
A. Chouldechova and M. G’Sell, “Fairer and more accurate, but for whom?,” arXiv:1707.00046 [cs, stat], Jun. 2017.
[4]
C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, “Fairness Through Awareness,” in Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, New York, NY, USA, 2012, pp. 214–226.
[5]
E. Martinez and A. Glantz, “How Reveal identified lending disparities in federal mortgage data,” Reveal from The Center for Investigative Reporting, 2018.
[6]
J. Larson, S. Mattu, L. Kirchner, and J. Angwin, “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, 23-May-2016. [Online]. Available: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm. [Accessed: 31-May-2018].
[7]
S. Mitchell and J. Shadlen, “Mirror Mirror: Reflections on Quantitative Fairness,” 2018. [Online]. Available: https://speak-statistics-to-power.github.io/fairness/. [Accessed: 01-Jun-2018].
Week 3:
This week, I started a LaTeX file to keep track of the fairness definitions that our group has been gathering. While I was typesetting, I discovered relationships between metric-parity notions of fairness and conditional-independence notions that I hadn't really internalized before (though they are probably obvious to anybody who is actually an expert in statistics). Still, I feel that this relationship means the categorical distinction between metric parity and conditional independence hinders more than it helps.
I also began to look into ways of assessing statistical independence. For binary variables (and categorical variables generally), we have our good ol' \(\chi^2\) test from high-school biology class. According to one paper [1], extensions to continuous variables are "extensively studied," and it is rather the discrete case that lacks active research. Perhaps this is so; maybe the tests for continuous variables are well known to all studied statisticians, but to a silly undergraduate like me they are definitely more obscure. I will dig through that paper's references to see if I can find something useful.
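For concreteness, the kind of check I have in mind looks something like this (a sketch using scipy; the dataframe and column names are hypothetical):

    import pandas as pd
    from scipy.stats import chi2_contingency

    def independence_test(df, col_a, col_b):
        """Pearson chi-squared test of independence between two categorical columns."""
        table = pd.crosstab(df[col_a], df[col_b])
        chi2, p_value, dof, expected = chi2_contingency(table)
        return chi2, p_value

    # e.g., chi2, p = independence_test(compas, "race", "high_risk_label")

For conditional independence (independence within strata of a third variable), one simple if crude option is to run the same test separately on each stratum and correct for multiple comparisons.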
I also read Kusner et al.'s work on causality and fairness [2-3]. The essential idea seems straightforward enough, provided that one understands probabilistic graphical models (PGMs) and can produce a valid one for the task at hand. Neither of these is an easy task, it seems (and the complicated inner mechanics of PGMs make a user-friendly programming library, part of our project, potentially difficult).
[2] provides a "deconvolution" approach to learning a counterfactually fair classifier. The graphical model \(\mathcal{M}\) must, in a nutshell, meet certain constraints on the influence of a protected attribute \(\mathcal{A}\) on the output variable \(\mathcal{Y}\). The key strategy is to take \(\mathcal{M}\), learn the unobserved variables \(\mathcal{U}\), and predict using \(\mathcal{U}\). In the case of recidivism, \(\mathcal{U}\) could be the inherent tendency to recidivate, which is never observed; instead, the data are arrests (which we know are biased).
This approach still relies on the validity of \(\mathcal{M}\) with respect to the influence of \(\mathcal{A}\) on \(\mathcal{Y}\). However, even if \(\mathcal{M}\) meets this constraint, \(\mathcal{M}\) may not actually obtain in the real world. This is where [3] comes in: given many potential models of the world \(\{\mathcal{M}\}\), we can train fairly with respect to all of them through penalized optimization.
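To make the first idea concrete for myself, here is a very crude sketch of the "learn \(\mathcal{U}\), predict from \(\mathcal{U}\)" strategy under a simple additive-noise assumption. This is a simplification of what [2] actually proposes (which infers the latent variables with a full probabilistic model rather than residuals), and the variable names are made up:

    from sklearn.linear_model import LinearRegression, LogisticRegression

    def fit_counterfactually_fairish(X, a, y):
        """Two-stage sketch: (1) regress the features on the protected attribute and keep
        the residuals as a stand-in for the latent U; (2) predict Y from U alone."""
        # Stage 1: strip out the part of X explained by A (assumes additive influence).
        stage1 = LinearRegression().fit(a.reshape(-1, 1), X)
        U = X - stage1.predict(a.reshape(-1, 1))
        # Stage 2: the classifier never sees A (or the parts of X that track it) directly.
        return stage1, LogisticRegression().fit(U, y)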
I did not get very far on the goals I set for myself last week. I will continue to work on them during this week. In particular, I will focus on producing a clean and tractable Zotero repository with accompanying literature review so that my partners can focus on getting our Python library into shape.
[1]
C. L. Canonne, I. Diakonikolas, D. M. Kane, and A. Stewart, “Testing Conditional Independence of Discrete Distributions,” arXiv:1711.11560 [cs, math, stat], Nov. 2017.
[2]
M. J. Kusner, J. R. Loftus, C. Russell, and R. Silva, “Counterfactual Fairness,” arXiv:1703.06856 [cs, stat], Mar. 2017.
[3]
C. Russell, M. J. Kusner, J. Loftus, and R. Silva, “When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 6414–6423.
Week 4:
This week I worked on getting the Zotero library and literature review into shape. I have a general outline for the literature review but, somewhat ironically, I have much more to say about the technical papers that go beyond observational notions of fairness than about the observational notions themselves.
I also talked to my teammates about how to coordinate group work more effectively. We've instituted daily check-ins just so that we can get a feel for what the others are working on and help each other out.
I attended two all-day events during the week. On Wednesday, we went to IBM in Yorktown Heights, where we were treated (or subjected) to a series of talks. I believe the general intent is to get us to consider working at IBM; I wonder if IBM helps with the DIMACS funding. Overall, I came away slightly underwhelmed. Despite the presenters' best efforts, I still feel that the corporate culture at IBM isn't as fast-moving as that of other companies. The morning talks (which are also the presentations given to general, non-technical audiences such as customers or tourists) were, at best, not well-calibrated for our group and, at worst, just inaccurate in their presentation of information (I'm looking at you, quantum computing presentation). The talks during the second half of the day were better. I was excited to hear that IBM researchers are also working on fair ML and algorithmic transparency problems. But some of those presentations also fell flat, again either because the talk wasn't calibrated for our audience or because the delivery was poor.
On Friday, I went with my group to the Mechanism Design for Social Good Workshop at Cornell University. While only a few of the talks were about technical results in fair ML, it was really exciting to hear so much related work from people in economics and sociology. In particular, Robert Manduca's work on the difference between allocation processes and the "marginal distributions" (the set of places people can get sorted into) was really great. It's not a new lesson for sure, but it's something important to remember as I get lost in the weeds of fair ML, which is really just about deciding (i.e., allocating) fairly. For instance, if an ML system decides loan approvals, we can make sure that the loans are distributed "fairly," but is the background availability of loans themselves fair? What about the terms of the loans? It's not enough just to make sure that people get fairly sorted into society's different buckets; we also have to make sure that the final distribution of buckets themselves, the possibilities of where people can end up, is fair.
Week 5:
I've not much to say this week. I've been (slowly, still) making progress on the literature review. Writing is so hard when I don't know what I want to say. I think I've found a focus, though: literature reviews that survey directions beyond observational notions of fairness don't seem to exist, so that is what I will write. In the meantime, I have to summarize (and summarize effectively) a bunch of other great reviews to serve as the introduction to my own. It's hard to know the difference between reinventing the wheel and reinventing the wheel to learn for one's own sake (maybe there isn't a difference).
Week 6:
This week I completed a first draft of my literature review on some of the more recent work in the fairness community, though there is still a lot of polishing left to do. My team and I also came to a consensus about how to move forward with analyzing the mortgage data, which we will do in the remaining three (!!) weeks. I also gave my final presentation, in which I learned that it is very easy to go over the time limit. It was really quite odd: the allotted time for my presentation seemed to fly by, leaving me without time to present half of my slides, while time seemed to flow very slowly during the other students' presentations, and they seemed to have ample time to present their work.
Week 7:
This week I continued to polish my literature review. I also helped the other students with our HMDA data analysis (pointing out bugs, explaining technical concepts, suggesting improvements to the code). I had somewhat of an inspiration for a philosophy paper on fairness in machine learning, but I won't have enough time to write the paper for the FAT* conference.



Presentations