My PhD project addresses the question of which formal constraints need to be imposed on algorithmic decision systems to ensure that they do not produce outcomes that conflict with relevant ethical norms. My aim is to draw on the existing literature on discrimination and equality of opportunity to analyze how algorithmic decision-making procedures and their outcomes can be morally objectionable and, consequently, which ethical norms they might violate. Using the mathematical formalism of causal modelling, I then intend to translate the resulting norms into formal language, so that they can be applied to algorithmic systems.
Other philosophical projects I pursue are in formal and social epistemology and in causal modelling. Beyond these, I occasionally venture into the realms of statistics and machine learning.
Bias as causal inadequacy: An analysis of the COMPAS dataset with causal inference methods
In this paper I first propose to conceive of algorithmic bias as causal inadequacy: a predictor is biased when a sensitive attribute such as ethnicity or gender has a stronger causal effect on the predictor of an event than on the event itself. Using the publicly available COMPAS dataset, I then combine this definition with matching methods for causal inference to show that the COMPAS algorithm is indeed racially biased.
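The causal-inadequacy criterion can be illustrated with a minimal sketch. The data here are synthetic, not the actual COMPAS dataset, and exact matching on a single discrete covariate stands in for the fuller matching procedure used in the paper; all variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: A = sensitive attribute, X = covariate to match on,
# Y = the actual event (e.g. recidivism), Yhat = the algorithm's prediction.
A = rng.integers(0, 2, n)
X = rng.integers(0, 3, n)
# By construction, the event depends on X but not on A,
# while the prediction depends on both.
Y = (rng.random(n) < 0.2 + 0.1 * X).astype(int)
Yhat = (rng.random(n) < 0.2 + 0.1 * X + 0.15 * A).astype(int)

def matched_effect(A, X, T):
    """Effect of A on T after exact matching on X: within each stratum
    of X, compare mean(T | A=1) with mean(T | A=0), then average the
    stratum-level differences weighted by stratum size."""
    effects, weights = [], []
    for x in np.unique(X):
        m = X == x
        t1, t0 = T[m & (A == 1)], T[m & (A == 0)]
        if len(t1) and len(t0):
            effects.append(t1.mean() - t0.mean())
            weights.append(m.sum())
    return np.average(effects, weights=weights)

effect_on_prediction = matched_effect(A, X, Yhat)
effect_on_outcome = matched_effect(A, X, Y)

# Bias as causal inadequacy: the sensitive attribute moves the
# prediction more strongly than it moves the event itself.
biased = abs(effect_on_prediction) > abs(effect_on_outcome)
```

On this construction, matching recovers an effect of A on the prediction of roughly 0.15 and an effect on the outcome of roughly zero, so the criterion flags the (synthetic) predictor as biased.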
Epistemic utility arguments for proportionality
In this paper I use the framework of epistemic utility theory to argue for a proportionality constraint on representations of causal relations. I further use the framework to develop a weakened proportionality constraint that applies to imperfect regularities, that is, probabilistic causal relations.