AEQUITAS

ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS

Field
European
Date
01/11/2022 - 31/10/2025
Industry
  • Industry 4.0
Budget
€3,493,990
Funded by

European Commission

Website
Video

PROJECT INFORMATION

DESCRIPTION

AI-based decision support systems are increasingly used in industry, in the public and private sectors, and in policy making. As our society faces a dramatic increase in inequalities and intersectional discrimination, we must prevent AI systems from amplifying this phenomenon and instead help mitigate it. For these systems to be relied upon, domain experts and stakeholders must be able to trust their decisions. Fairness is one of the main principles of trustworthy AI promoted at the EU level. How these principles, and fairness in particular, translate into technical, functional, social and legal requirements in AI system design remains an open question, as does how to check whether a system complies with them and how to repair it if it does not.

AEQUITAS proposes a controlled experimentation environment in which developers and users can design controlled experiments to:
  • assess bias in AI systems, e.g. by identifying potential sources of bias in data, algorithms and the interpretation of results;
  • provide, where possible, effective methods and engineering guidelines to repair biased systems;
  • provide fairness-by-design guidelines, methodologies and software engineering techniques for building new bias-free systems.

The experimentation environment generates synthetic data sets with different fairness-related characteristics for in-lab testing. Real use cases in healthcare, human resources and challenges affecting disadvantaged social groups exercise the experimentation platform and demonstrate the effectiveness of the proposed solution. The platform will be integrated into the AI-on-demand platform to drive adoption, while a standalone version will allow organisations to test the fairness of their AI systems in-house while preserving privacy. AEQUITAS brings together a strong consortium of AI experts, experts from the application sectors, social scientists and associations defending the rights of minorities and discriminated groups.
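
As an illustration of the kind of test such an environment could run, the sketch below builds a small synthetic hiring dataset with an injected group disparity and computes a standard demographic-parity gap. The dataset, column names and metric choice are hypothetical examples for illustration only, not part of the AEQUITAS platform or its APIs.

```python
# Illustrative sketch only: a minimal fairness check of the kind an
# experimentation environment could support. All names below are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical synthetic dataset: a protected attribute ("group") and a
# biased binary decision ("hired") whose positive rate depends on the group.
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.6, 0.4])
hired = np.where(group == "A",
                 rng.random(n) < 0.35,   # group A: positive outcome ~35% of the time
                 rng.random(n) < 0.20)   # group B: only ~20% -> built-in bias
data = pd.DataFrame({"group": group, "hired": hired})

# Demographic parity difference: gap between the groups' positive-outcome rates.
rates = data.groupby("group")["hired"].mean()
dp_difference = abs(rates["A"] - rates["B"])

print(rates)
print(f"Demographic parity difference: {dp_difference:.3f}")
# A value near 0 would indicate parity; here the gap (~0.15) flags the injected bias.
```

A real evaluation would typically compare several such metrics (e.g. equalised odds, predictive parity) across data, model and interpretation stages, which is the scope the project description outlines.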

Technological capabilities

AI
Real-time data analysis