
XAI-DisInfodemics

eXplainable AI for disinformation and conspiracy detection during infodemics

Field: National
Date: 01/12/2021 - 30/11/2024

PROJECT INFORMATION

DESCRIPTION

In this project we aim to build a holistic socio-technical strategy to fight infodemics. We adopt a human-in-the-loop approach to increase the accuracy of false-information detection while also improving users' digital literacy. Addressing the challenges of disinformation requires interdisciplinary collaboration and the development of tools that private and public entities can use. Explainable Artificial Intelligence (XAI) can provide these tools, addressing disinformation detection from a multimodal perspective that goes beyond the analysis of textual information. We aim to counter disinformation and conspiracy theories on the basis of fact-checking of scientific information. Moreover, we aim to explain not only the AI models' decision-making but also the persuasion and psychographic techniques employed to trigger emotions in readers and to make disinformation and conspiracy theories believable and likely to propagate among social network users.

The final AI tool should also help users spot the parts of a document that aim to grab readers' attention through emotional appeals and that signal poor information quality. The tool will provide a complete picture of a piece of information, allowing users to know what kind of content they are consuming. It is intended for the general public, and its use will allow media and information platforms to be rated based on the quality of their health information, providing criteria for developing search engines that specifically prioritize information meeting these quality standards.

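To make the explanation side of this idea concrete, the sketch below shows one very simple way word-level highlighting of attention-grabbing cues could work: a transparent linear classifier whose learned weights indicate which words push a document toward a "misleading" class. The training sentences, labels, and scoring function are illustrative assumptions for this sketch only; they are not the project's models or data, which are multimodal and considerably more sophisticated.

```python
# Minimal sketch: a linear text classifier whose per-word weights can be read
# back as an explanation of why a document was flagged. The examples below are
# illustrative placeholders, not project data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: 1 = emotionally charged / potentially misleading, 0 = neutral.
texts = [
    "SHOCKING miracle cure THEY don't want you to know about",
    "You will be horrified by what this vaccine really does",
    "The study reports a modest reduction in symptoms after treatment",
    "Researchers published peer-reviewed results on vaccine efficacy",
]
labels = [1, 1, 0, 0]

# Bag-of-words features keep the model transparent: each weight maps to a word.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(document: str, top_k: int = 5):
    """Return the words that push the document most toward the 'misleading' class."""
    x = vectorizer.transform([document])
    contributions = x.toarray()[0] * clf.coef_[0]
    words = vectorizer.get_feature_names_out()
    ranked = sorted(zip(words, contributions), key=lambda p: p[1], reverse=True)
    score = clf.predict_proba(x)[0][1]
    return score, [(w, round(c, 3)) for w, c in ranked[:top_k] if c > 0]

score, cues = explain("SHOCKING evidence the cure really works")
print(f"misleading probability: {score:.2f}")
print("words driving the score:", cues)
```

The same principle (surface the evidence behind a prediction rather than only the label) is what allows a reader to see which passages of a document rely on emotional appeal rather than verifiable content.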

Technological capabilities

AI
Natural Language Processing (NLP)