
The CLEF-2024 CheckThat! Lab

The first five editions of the CheckThat! lab focused on the main tasks of the information verification pipeline: check-worthiness, evidence retrieval and pairing, and verification. Since the 2023 edition, the lab has focused on new problems that can support research and decision making during the verification process. In this new edition, we focus on new problems and, for the first time, we propose six tasks in fifteen languages (Arabic, Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish, Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1: estimation of check-worthiness (the only task that has been present in all CheckThat! editions), Task 2: identification of subjectivity (a follow-up of the CheckThat! 2023 edition), Task 3: identification of persuasion (a follow-up of SemEval 2023), Task 4: detection of hero, villain, and victim from memes (a follow-up of CONSTRAINT 2022), Task 5: rumor verification using evidence from authorities (a first), and Task 6: robustness of credibility assessment with adversarial examples (a first). These tasks represent challenging classification and retrieval problems at the document and at the span level, including multilingual and multimodal settings.
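As a concrete illustration of how one of these tasks is typically framed (an editorial sketch, not part of the lab record): Task 1, check-worthiness estimation, can be cast as binary text classification at the sentence or tweet level. The minimal Python example below assumes scikit-learn is available; the sample texts and labels are invented for illustration and are not taken from the CheckThat! datasets.

# Minimal sketch: check-worthiness estimation (Task 1) as binary text
# classification. The texts and labels below are hypothetical examples,
# not data from the CheckThat! lab.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The new law cut unemployment by 50 percent in one year.",    # verifiable claim
    "Good morning everyone, have a great day!",                   # no claim
    "The city spent 2 million euros on the stadium renovation.",  # verifiable claim
    "I really enjoyed the concert last night.",                   # personal opinion
]
labels = [1, 0, 1, 0]  # 1 = check-worthy, 0 = not check-worthy

# TF-IDF features (unigrams and bigrams) feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new sentence; predict_proba returns the class probabilities,
# i.e. an estimate of how check-worthy the input is.
print(model.predict_proba(["Crime rates fell by 40 percent last year."]))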

Metadata
Authors: Alberto Barrón-Cedeño, Firoj Alam, Tonmoy Chakraborty, Tamer Elsayed, Preslav Nakov, Piotr Przybyła, Julia Maria Struß, Fatima Haouari, Maram Hasanain, Federico Ruggeri, Xingyi Song, Reem Suwaileh
DOI: https://doi.org/10.1007/978-3-031-56069-9_62
ISBN: 978-3-031-56069-9
ISSN: 0302-9743
Title of the parent work (English): Advances in Information Retrieval
Subtitle (English): Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness
Publisher: Springer International Publishing
Place of publication: Cham
Editors: Nazli Goharian, Nicola Tonellotto, Yulan He, Aldo Lipani, Graham McDonald, Craig Macdonald, Iadh Ounis
Document type: Conference publication
Language: English
Year of first publication: 2024
Publishing institution: Fachhochschule Potsdam
Release date: 27 March 2024
GND subject headings: Disinformation; Subjectivity; Bias; Facticity
First page: 449
Last page: 458
Departments and central units: FB5 Informationswissenschaften
FB5 Informationswissenschaften / Publications of FB Informationswissenschaften
DDC classification: 000 Computer science, information, and general works / 020 Library and information sciences
License: Creative Commons CC BY Attribution 4.0 International