Home

Machine Learning is one of the main driving forces in Artificial Intelligence. It is now used for decision making in medical applications, speech recognition, and autonomous vehicles, to cite a few examples. While machine learning can be very beneficial, models that directly impact individuals could adversely affect some of the rights enshrined in the EU Charter of Fundamental Rights. This concern sparked the development of Fair Machine Learning, a research field whose goal is to identify and prevent discriminatory behaviors. Its importance was recently reinforced by the European Commission's AI Act proposal, which cites non-discrimination and equality between women and men as two of the fundamental rights that must be protected in systems relying on artificial intelligence.

A key limitation of the current literature is that the mechanisms used to enforce fairness are still not well understood theoretically, in particular when final deterministic decisions are derived from stochastic predictions. Furthermore, fairness is usually not the only requirement for building trustworthy systems. The main objective of FaCTor is to address these shortcomings by theoretically investigating the gap between stochastic and deterministic models. The project works toward trustworthy and more socially acceptable machine learning solutions by jointly studying fairness with other performance measures, namely privacy and utility. It also proposes to confront the developed approaches with practical problems where new challenges emerge. The end goal is to make models more accountable and in line with the requirements of the law, ensuring that the benefits of machine learning are not limited to a subset of the population.
