The FAIR-PReSONS project aims to develop a bias-free Artificial Intelligence (AI) system for the fair prediction of recidivism, with a focus on gender equality, in adherence to EU legislation on non-discriminatory AI. The project will conduct a gender analysis to understand the differential impacts of recidivism prediction. Data, including release papers, will be collected from prisons and criminal-justice organizations, then digitized, documented, and standardized according to EU standards.

ITML’s role

In the FAIR-PReSONS project, ITML's role encompasses two main areas: researching fair, bias-free AI prediction algorithms for recidivism, and designing and implementing the bias-free recidivism prediction/decision system. On the research side, ITML will explore advanced algorithms proposed by other fair-AI projects, integrating Knowledge Graphs (KGs) into artificial neural networks (ANNs) such as graph neural networks (GNNs) and generative adversarial networks (GANs) to improve prediction accuracy and explainability. It will also investigate Linked Open Data (LOD) and KG procedures to enhance fair-AI solutions. During system design and implementation, ITML will develop tools and techniques for bias assessment and mitigation, drawing on statistical analysis, ontology-based methods, and machine-learning fairness measures. Finally, ITML will test the developed modules and the integrated system extensively, using various datasets, LOD/KGs, and algorithms to optimize accuracy and explainability.
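To illustrate what a statistical bias-assessment measure can look like, the sketch below computes the statistical parity difference, a standard fairness metric, on toy recidivism predictions. This is a minimal illustration only: the data, group labels, and function name are hypothetical and are not taken from the FAIR-PReSONS system.

```python
def statistical_parity_difference(predictions, groups, privileged):
    """P(pred = 1 | unprivileged group) - P(pred = 1 | privileged group).

    A value of 0 indicates equal positive-prediction rates across groups;
    large magnitudes flag potential bias for further mitigation.
    """
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy example: 1 = predicted to reoffend, grouped by gender (hypothetical data)
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]

spd = statistical_parity_difference(preds, groups, privileged="M")
print(f"Statistical parity difference: {spd:+.2f}")  # -0.50: women are
# predicted to reoffend far less often than men in this toy sample
```

In practice, a bias-assessment module would compute several such metrics (e.g., equalized odds, disparate impact) across all protected attributes before mitigation is applied.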