ANNOTATE-SenSIT: integrating disagreement and multimodal Sensor data with Speech, Image and Text for next-generation AI Systems

Project Reference
PID2024-156022OB-C33
Start year
2025
End year
2028
Project status
Active

ANNOTATE-SenSIT: integrating disagreement and multimodal Sensor data with Speech, Image and Text for next-generation AI Systems (PID2024-156022OB-C33) is a subproject of the coordinated project ANNOTATE - integrAting disagreemeNt and seNsOr data for NexT-generation Ai sysTEms (PID2024-156022OB-C31), coordinated by Laura Plaza (Universidad Nacional de Educación a Distancia). Funded by: Ministerio de Ciencia, Innovación y Universidades, Knowledge Generation R&D programme (MICIU/AEI/10.13039/501100011033/FEDER, UE). Participants: Universidad Nacional de Educación a Distancia, Universitat Politècnica de València and Universitat de Barcelona.

Summary: The ANNOTATE-SenSIT (Sensor, Speech, Image and Text) subproject focuses on the development of Human-Centric Artificial Intelligence (HCAI) systems, in particular the creation of multimodal datasets that integrate sensor data such as eye-tracking, together with annotation metadata (annotator demographics and feedback), applying the Strong Learning With Disagreement (ST-LeWiDi) paradigm. To this end, we will develop algorithms that use sensor-based inputs alongside diverse annotations from human annotators. This approach yields richer, more robust datasets and models capable of handling subjective, complex tasks such as: (i) hate speech and sexism detection from videos and text transcriptions, and (ii) real-time transcription using automatic speech recognition (ASR) for individuals with hearing and speech disabilities (e.g. Down syndrome; jointly with the iSocial Foundation), making ASR systems more inclusive.
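As a minimal sketch of the learning-with-disagreement idea underlying the project (the function and data here are illustrative, not the project's actual pipeline): rather than collapsing annotator labels into a single majority vote, each item keeps a soft label distribution that preserves disagreement.

```python
from collections import Counter

def soft_label(annotations):
    """Convert per-annotator labels into a soft label distribution,
    preserving disagreement instead of taking a majority vote."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical item: three annotators disagree on whether a
# comment is sexist ("yes") or not ("no").
print(soft_label(["yes", "yes", "no"]))
```

A model trained on such distributions (e.g. with a cross-entropy loss against the soft labels) can represent genuinely ambiguous items, which a hard majority vote would erase.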

Web: https://nlp.uned.es/annotate/#about
