FAIR-PReSONS: Building a fair and trustworthy AI system for recidivism prediction in Greece, Portugal and Bulgaria

Mr Andreas Siafakas

Project Manager

FAIR-PReSONS, a two-year DG JUST project funded by the European Union and coordinated by the University of the Aegean (GR), aims to assess and mitigate bias in recidivism data so that criminal justice systems at the national and European level can reach fair and equitable decisions, with particular attention to gender. The project focuses on collecting and digitizing data from prisons and offender management systems (OMS) across the participating countries (Greece, Portugal, and Bulgaria), structuring this data into knowledge graphs, and making it accessible via a dedicated data portal. A preliminary gender analysis will be conducted to identify and address gender biases in recidivism prediction systems. The project employs advanced algorithms and integrates knowledge graphs with artificial neural networks (ANNs) to enhance the accuracy and explainability of predictions. This holistic approach to fair AI aspires to support unbiased decision-making, assisting judges and legal practitioners in making fair decisions and mitigating gender stereotypes.

At its core, FAIR-PReSONS aims to provide a holistic approach to assessing and mitigating bias in recidivism data. This involves developing a system that delivers fair decisions by combining Knowledge Graphs (KGs) and Artificial Neural Networks (ANNs). The bias assessment will be carried out through a combination of statistical tools (such as correlation analysis and chi-square tests), ontology-based methods (using domain ontologies to detect biases), and machine-learning-based classification measures (e.g., positive predictive value and F-score).
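
As a flavor of this assessment step, the sketch below is a minimal Python example, not the project's actual tooling: it runs a chi-square test of independence between gender and recorded recidivism and computes the per-group positive predictive value and F-score of an existing predictor. The column names ("gender", "recidivated", "predicted") are hypothetical.

```python
# Minimal bias-assessment sketch; column names are hypothetical
# placeholders, not the project's schema.
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.metrics import f1_score, precision_score

def assess_bias(df: pd.DataFrame) -> dict:
    """Test whether recorded recidivism is statistically associated with
    gender, and compare per-group metrics of an existing predictor."""
    # Chi-square test of independence: gender vs. recorded recidivism.
    contingency = pd.crosstab(df["gender"], df["recidivated"])
    chi2, p_value, _, _ = chi2_contingency(contingency)

    # Per-group positive predictive value (precision) and F-score.
    per_group = {}
    for group, sub in df.groupby("gender"):
        per_group[group] = {
            "ppv": precision_score(sub["recidivated"], sub["predicted"],
                                   zero_division=0),
            "f_score": f1_score(sub["recidivated"], sub["predicted"],
                                zero_division=0),
        }
    return {"chi2": chi2, "p_value": p_value, "per_group": per_group}
```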

To mitigate bias, the project takes a two-step approach. First, it focuses on fair ontology/KG engineering, using innovative techniques such as semi-automatic KG generation and machine-learning-based construction of knowledge graphs from training data. This step acknowledges that biases may still persist due to human involvement in the knowledge engineering process. Therefore, FAIR-PReSONS proposes a tool-supported algorithm to mitigate bias at both the ontology/schema and the data level.
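
To illustrate what semi-automatic KG generation from tabular training data can look like, here is a minimal rdflib sketch; the namespace and the Offender/hasGender/hasOffense schema are invented for the example and are not the project's ontology.

```python
# Simplified table-to-KG lifting; the schema below is illustrative,
# not the FAIR-PReSONS ontology.
import pandas as pd
from rdflib import RDF, Graph, Literal, Namespace

EX = Namespace("http://example.org/recidivism#")  # hypothetical namespace

def table_to_kg(df: pd.DataFrame) -> Graph:
    """Lift each row of an offender table into triples of a small KG."""
    g = Graph()
    g.bind("ex", EX)
    for idx, row in df.iterrows():
        subject = EX[f"offender_{idx}"]
        g.add((subject, RDF.type, EX.Offender))
        g.add((subject, EX.hasGender, Literal(row["gender"])))
        g.add((subject, EX.hasOffense, Literal(row["offense"])))
        g.add((subject, EX.recidivated, Literal(bool(row["recidivated"]))))
    return g
```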

The second step addresses biases in real-world data, particularly hidden correlations between protected attributes (such as gender or race) and other variables. To address these, ANN structures with fairness constraints will be used, incorporating fairness metrics such as disparate impact, statistical parity, and equalized odds. By integrating these fairness measures into the learning process of the ANNs, the project aims to ensure that recidivism predictions are unbiased. Several ANN models, including Graph Neural Networks (GNNs), Generative Adversarial Networks (GANs), and knowledge-based neuro-fuzzy networks, will be evaluated. The ultimate goal of this process is to deliver fair prediction and decision-making results.
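
To make the idea of a fairness constraint concrete, the following PyTorch sketch adds a differentiable statistical-parity penalty to a standard classification loss, alongside disparate impact as one of the metrics named above; the binary group encoding and the penalty weight are illustrative assumptions, not the project's configuration.

```python
# Sketch of a fairness-constrained learning objective; the binary group
# encoding and penalty weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def disparate_impact(preds: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """Ratio of positive-prediction rates between the two groups;
    values far from 1.0 signal disparate impact."""
    return preds[group == 0].float().mean() / preds[group == 1].float().mean()

def fair_loss(logits: torch.Tensor, labels: torch.Tensor,
              group: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Binary cross-entropy plus a differentiable statistical-parity
    penalty: the absolute gap between the mean predicted recidivism
    probabilities of the two protected groups."""
    bce = F.binary_cross_entropy_with_logits(logits, labels.float())
    probs = torch.sigmoid(logits)
    parity_gap = torch.abs(probs[group == 0].mean() - probs[group == 1].mean())
    return bce + weight * parity_gap
```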

In addition, the project will focus on explainability analysis: developing tools that help explain and interpret the decisions made by the AI system. These tools will provide mechanisms that facilitate unbiased decision-making, offering clear explanations and recommendations related to the ontologies, datasets, and algorithms used throughout the design process.
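
The project's explainability tools are still to be delivered; as a stand-in that illustrates one common model-agnostic technique, the sketch below uses scikit-learn's permutation importance to rank the input features that drive a predictor's decisions.

```python
# Illustrative, model-agnostic explanation via permutation importance;
# a stand-in, not the project's explainability tooling.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def explain_predictor(X, y, feature_names):
    """Rank features by how much shuffling each one degrades accuracy,
    giving a first, global view of what drives a predictor."""
    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, importance in ranked:
        print(f"{name}: {importance:.3f}")
```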

AI in Criminal Justice: Balancing Benefits and Ethical Concerns

AI has the potential to significantly enhance the criminal justice system by improving the accuracy, speed, and efficiency of recidivism prediction models. These models help predict whether a convicted individual is likely to re-offend, aiding in decisions about parole, probation, or sentencing. However, while the power of AI in reducing crime re-occurrence is evident, concerns remain about the fairness, transparency, privacy, and accountability of these systems.

A prime example of AI’s controversial application in criminal justice is the COMPAS software used in the U.S. to predict recidivism. Investigative reports raised questions about racial bias within the system, particularly against African American defendants. Such issues highlight the need for recidivism prediction tools that are trustworthy—free of bias and transparent in their decision-making process.

One of the key challenges in developing trustworthy AI systems lies in ensuring fairness, particularly regarding sensitive factors like gender or race. Fairness in this context requires that individuals not be judged based on factors outside their control, such as their gender. Achieving this level of fairness is complex, as it involves determining whether protected attributes causally influence outcomes like recidivism scores.

The ethical concerns surrounding AI in criminal justice extend beyond fairness. Stakeholders are also wary of privacy violations, the misinterpretation of AI processes, and the lack of accountability in AI-driven decisions. As the European Commission (EC) has emphasized, AI systems must be ethical, lawful, and robust to be considered trustworthy. This entails adherence to four key ethical principles: respect for human autonomy, prevention of harm, fairness, and explicability.

To build trustworthy AI systems, the EC outlines seven key requirements:

  1. Human agency and oversight: AI systems should enhance human autonomy, allowing people to make decisions without undue influence from the technology. Human involvement in the decision-making process is essential to maintaining accountability and trust.
  2. Technical robustness and safety: AI systems should be secure and reliable, operating consistently across different conditions and ensuring that the same input yields the same output.
  3. Privacy and data governance: Data protection is critical, especially in sensitive areas like criminal justice. AI systems must safeguard personal information and ensure privacy rights are respected.
  4. Transparency: The AI design process, including data collection and the system’s decision-making logic, should be well-documented and open to scrutiny. This transparency enables smooth auditing and allows users to understand why certain decisions are made.
  5. Non-discrimination and fairness: AI systems must actively avoid bias, with fairness being considered from both ethical and statistical perspectives. Bias mitigation must be incorporated into the design process to ensure fairness across all demographic groups.
  6. Societal and environmental well-being: AI systems should contribute positively to society, balancing benefits against any potential harm.
  7. Accountability: AI developers and users must be answerable for the outcomes of the systems they create or operate. Responsibility is key to ensuring trust in AI.

Ensuring Fairness and Reliability

One of the most crucial aspects of recidivism prediction tools is their consistency. Offenders with similar backgrounds and offenses should receive similar recidivism scores, regardless of their demographic attributes. Additionally, reliability is necessary for these systems to maintain consistent predictions across various jurisdictions and conditions. Lastly, explainability remains a critical requirement for AI systems in recidivism prediction. While transparency refers to the design process, explainability focuses on understanding why a system arrived at a particular decision. This is essential for building trust among stakeholders, ensuring they comprehend the rationale behind AI-driven conclusions.
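
The consistency requirement, in particular, can be made measurable. One common formulation, sketched below under the assumption of a numeric feature matrix and an array of predicted scores (not the project's published metric), compares each individual's score with the scores of their k most similar peers:

```python
# Consistency check: do similar offenders get similar scores?
# Assumes a numeric feature matrix and a NumPy array of scores.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency_score(features: np.ndarray, scores: np.ndarray,
                      k: int = 5) -> float:
    """Mean absolute gap between each individual's recidivism score and
    the average score of their k nearest neighbours in feature space;
    0 means perfectly consistent treatment of similar cases."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)
    neighbour_means = scores[idx[:, 1:]].mean(axis=1)  # column 0 is self
    return float(np.abs(scores - neighbour_means).mean())
```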

ITML, as a lead technical partner in the development of the FAIR-PReSONS AI system, will be in charge of ensuring not only the effectiveness of the bias mitigation measures but also the system's consistency, reliability, transparency, and explainability. The project's ultimate goal is to assist judges and legal professionals in making fair decisions about offender management, both during the sentence cycle and after the sentence has been completed. A major challenge the project aims to overcome therefore lies in fostering trust, within the judicial systems at the national and European level, in the use of AI to support their valuable services. Concretely, the project's final output, the FAIR-PReSONS AI system, needs to convince at least 80 judges, who will be trained in its use and will validate both the accuracy of its predictions and the user experience as a whole. This goal is of immense importance: it will set a precedent for ethical AI in the criminal justice system, while contributing a range of benefits, from improved decision-making, reduced bias and discrimination, and enhanced fairness in sentencing, to a reduced risk of wrongful convictions and excessive sentences, and more effective allocation of resources.

Conclusion

The FAIR-PReSONS project seeks to address the most pressing issues in AI-driven recidivism prediction by developing a gender-aware, bias-free AI system. By integrating cutting-edge techniques like knowledge graphs, artificial neural networks, and bias mitigation algorithms, the project aims to set new standards for fairness, transparency, and accountability in AI. Moreover, by adhering to EC guidelines for trustworthy AI, FAIR-PReSONS ensures that the ethical concerns surrounding privacy, fairness, and explainability are front and center in the design process. The project is a significant step forward in creating AI systems that are not only technologically advanced but also socially responsible and fair.

Find out more by visiting FAIR-PReSONS!