My PhD project addresses the question of which formal constraints need to be imposed on algorithmic decision systems to ensure that they do not produce outcomes that conflict with relevant ethical norms. My aim is to draw on the existing literature on discrimination and equality of opportunity to analyze how algorithmic decision-making procedures and their outcomes can be morally objectionable and, consequently, which ethical norms they might violate. Using the mathematical formalism of causal modelling, I then intend to translate the resulting norms into formal language, so that they can be applied to algorithmic systems.
My other philosophical projects are in formal and social epistemology and in causal modelling. Beyond that, I occasionally venture into statistics and machine learning.
Projects
- Counterfactual Fairness, Equalized Odds, and Calibration: Yet Another Impossibility Theorem
When predictive models are used in decision-making processes, they should satisfy certain mathematical fairness constraints that ensure their predictions are not discriminatory. Counterfactual fairness, equalized odds, and groupwise calibration are three of the most widely discussed such constraints. In this paper we make two contributions. First, we show that for a certain class of prediction tasks, whenever a predictive model satisfies counterfactual fairness, it necessarily violates both equalized odds and groupwise calibration. We characterize this class as prediction tasks in contexts in which the sensitive attribute has a (possibly mediated) effect on the variable that is to be predicted. We discuss different ways to avoid this conclusion by relaxing one or more of the premises of the proof. Second, we propose a new fairness constraint called causal relevance fairness, a relaxation of counterfactual fairness that retains its intuitive and philosophical appeal while being compatible with equalized odds in all possible prediction task contexts.
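For readers unfamiliar with these constraints, the following sketch gives the standard formulations from the fairness literature (counterfactual fairness following Kusner et al. 2017; equalized odds following Hardt et al. 2016) for a binary prediction setting; the precise definitions used in the paper may differ in detail. Here \(A\) is the sensitive attribute, \(X\) the remaining features, \(Y\) the target, \(\hat{Y}\) the prediction, \(S\) the predicted score, and \(U\) the exogenous variables of a causal model.

\[
\text{Counterfactual fairness:}\quad P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
\]
\[
\text{Equalized odds:}\quad P\big(\hat{Y} = 1 \mid Y = y, A = a\big) = P\big(\hat{Y} = 1 \mid Y = y, A = a'\big) \quad \text{for } y \in \{0, 1\}
\]
\[
\text{Groupwise calibration:}\quad P\big(Y = 1 \mid S = s, A = a\big) = s \quad \text{for all scores } s \text{ and groups } a
\]

Informally: counterfactual fairness requires that the prediction would have been distributed the same way had the individual's sensitive attribute been different; equalized odds requires equal true and false positive rates across groups; and groupwise calibration requires that a score of \(s\) means the same probability of a positive outcome within every group.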