Abstract
Predictions in learning analytics are intended to support tailored educational interventions. However, machine learning algorithms can discriminate against certain subgroups, depending on the measure of fairness applied. In this paper, we demonstrate that predictive models, even at a satisfactory level of overall accuracy, perform differently across student subgroups, in particular across genders and for students with disabilities.
Keywords: Learning Analytics; Fairness; OULAD; At-Risk Prediction