TY - CHAP
A1 - Nakov, Preslav
A1 - Da San Martino, Giovanni
A1 - Elsayed, Tamer
A1 - Barrón-Cedeño, Alberto
A1 - Míguez, Rubén
A1 - Shaar, Shaden
A1 - Alam, Firoj
A1 - Haouari, Fatima
A1 - Hasanain, Maram
A1 - Mansour, Watheq
A1 - Hamdan, Bayan
A1 - Sheikh Ali, Zien
A1 - Babulkov, Nikolay
A1 - Nikolov, Alex
A1 - Kishore Shahi, Gautam
A1 - Struß, Julia Maria
A1 - Mandl, Thomas
A1 - Kutlu, Mucahid
A1 - Selim Kartal, Yavuz
T1 - Overview of the CLEF–2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
T2 - Experimental IR Meets Multilinguality, Multimodality, and Interaction
N2 - We describe the fourth edition of the CheckThat! Lab, part of the 2021 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality, and covers Arabic, Bulgarian, English, Spanish, and Turkish. Task 1 asks to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics (in all five languages). Task 2 asks to determine whether a claim in a tweet can be verified using a set of previously fact-checked claims (in Arabic and English). Task 3 asks to predict the veracity of a news article and its topical domain (in English). The evaluation is based on mean average precision or precision at rank k for the ranking tasks, and macro-F1 for the classification tasks. This was the most popular CLEF-2021 lab in terms of team registrations: 132 teams. Nearly one-third of them participated: 15, 5, and 25 teams submitted official runs for Tasks 1, 2, and 3, respectively.
KW - Fact-checking
KW - Check-worthiness estimation
KW - Disinformation
KW - Misinformation
Y1 - 2021
SN - 978-3-030-85251-1
U6 - https://doi.org/10.1007/978-3-030-85251-1_19
SP - 264
EP - 291
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Nakov, Preslav
A1 - Da San Martino, Giovanni
A1 - Elsayed, Tamer
A1 - Barrón-Cedeño, Alberto
A1 - Míguez, Rubén
A1 - Shaar, Shaden
A1 - Alam, Firoj
A1 - Haouari, Fatima
A1 - Hasanain, Maram
A1 - Babulkov, Nikolay
A1 - Nikolov, Alex
A1 - Shahi, Gautam Kishore
A1 - Struß, Julia Maria
A1 - Mandl, Thomas
T1 - The CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News
T2 - Advances in Information Retrieval
N2 - We describe the fourth edition of the CheckThat! Lab, part of the 2021 Cross-Language Evaluation Forum (CLEF). The lab evaluates technology supporting various tasks related to factuality, and it is offered in Arabic, Bulgarian, English, and Spanish. Task 1 asks to predict which tweets in a Twitter stream are worth fact-checking (focusing on COVID-19). Task 2 asks to determine whether a claim in a tweet can be verified using a set of previously fact-checked claims. Task 3 asks to predict the veracity of a target news article and its topical domain. The evaluation is carried out using mean average precision or precision at rank k for the ranking tasks, and F1 for the classification tasks.
KW - Fake news
KW - Fact-checking
KW - Disinformation
KW - Misinformation
Y1 - 2021
SN - 978-3-030-72240-1
U6 - https://doi.org/10.1007/978-3-030-72240-1_75
SP - 639
EP - 649
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Galassi, Andrea
A1 - Da San Martino, Giovanni
A1 - Nakov, Preslav
A1 - Elsayed, Tamer
A1 - Azizov, Dilshod
A1 - Caselli, Tommaso
A1 - Cheema, Gullal S.
A1 - Haouari, Fatima
A1 - Hasanain, Maram
A1 - Kutlu, Mucahid
A1 - Li, Chengkai
A1 - Ruggeri, Federico
A1 - Struß, Julia Maria
A1 - Zaghouani, Wajdi
ED - Arampatzis, Avi
ED - Kanoulas, Evangelos
ED - Tsikrika, Theodora
ED - Vrochidis, Stefanos
ED - Giachanou, Anastasia
ED - Li, Dan
ED - Aliannejadi, Mohammad
ED - Vlachos, Michalis
ED - Faggioli, Guglielmo
ED - Ferro, Nicola
T1 - Overview of the CLEF–2023 CheckThat! Lab on Checkworthiness, Subjectivity, Political Bias, Factuality, and Authority of News Articles and Their Source
T2 - Experimental IR Meets Multilinguality, Multimodality, and Interaction
N2 - We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom into some new problems and, for the first time, we offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item - text, or text plus an image - is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of the news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting of news media. Finally, Task 5 looks at identifying authorities on Twitter that could help verify a given target claim. For a second year, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations: 127 teams. About one-third of them (a total of 37) actually participated.
KW - Disinformation
KW - Misinformation
KW - COVID-19
Y1 - 2023
SN - 978-3-031-42448-9
U6 - https://doi.org/10.1007/978-3-031-42448-9_20
SN - 1611-3349
SP - 251
EP - 275
PB - Springer International Publishing
CY - Cham
ER -
TY - CHAP
A1 - Galassi, Andrea
A1 - Ruggeri, Federico
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Caselli, Tommaso
A1 - Kutlu, Mucahid
A1 - Struß, Julia Maria
A1 - Antici, Francesco
A1 - Hasanain, Maram
A1 - Köhler, Juliane
A1 - Korre, Katerina
A1 - Leistra, Folkert
A1 - Muti, Arianna
A1 - Siegel, Melanie
A1 - Türkmen, Mehmet Deniz
A1 - Wiegand, Michael
A1 - Zaghouani, Wajdi
ED - Aliannejadi, Mohammad
ED - Faggioli, Guglielmo
ED - Ferro, Nicola
ED - Vlachos, Michalis
T1 - Overview of the CLEF-2023 CheckThat! Lab: Task 2 on Subjectivity in News Articles
BT - Notebook for the CheckThat! Lab at CLEF 2023
T2 - CLEF 2023 Working Notes
N2 - We describe the outcome of the 2023 edition of the CheckThat! Lab at CLEF. We focus on subjectivity (Task 2), which has been proposed for the first time. It aims at fostering technology for the identification of subjective text fragments in news articles. For that, we produced corpora consisting of 9,530 manually annotated sentences, covering six languages: Arabic, Dutch, English, German, Italian, and Turkish. Task 2 attracted 12 teams, which submitted a total of 40 final runs covering all languages. The most successful approaches addressed the task using state-of-the-art multilingual transformer models, which were fine-tuned on language-specific data. Teams also experimented with a rich set of other neural architectures, including foundation models, zero-shot classifiers, and standard transformers, mainly coupled with data augmentation and multilingual training strategies to address class imbalance.
We publicly release all the datasets and evaluation scripts to promote further research on this topic.
KW - Disinformation
KW - Misinformation
KW - COVID-19
Y1 - 2023
UR - https://ceur-ws.org/Vol-3497/paper-020.pdf
SP - 236
EP - 249
CY - Thessaloniki
ER -
TY - CHAP
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Caselli, Tommaso
A1 - Da San Martino, Giovanni
A1 - Elsayed, Tamer
A1 - Galassi, Andrea
A1 - Haouari, Fatima
A1 - Ruggeri, Federico
A1 - Struß, Julia Maria
A1 - Nath Nandi, Rabindra
A1 - Cheema, Gullal S.
A1 - Azizov, Dilshod
A1 - Nakov, Preslav
T1 - The CLEF-2023 CheckThat! Lab: Checkworthiness, Subjectivity, Political Bias, Factuality, and Authority
T2 - Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part III
N2 - The five editions of the CheckThat! lab so far have focused on the main tasks of the information verification pipeline: check-worthiness, evidence retrieval and pairing, and verification. The 2023 edition of the lab zooms into some of these problems and, for the first time, it offers five tasks in seven languages (Arabic, Dutch, English, German, Italian, Spanish, and Turkish): Task 1 asks to determine whether an item - text, or text plus an image - is check-worthy; Task 2 asks to assess whether a text snippet is subjective or not; Task 3 asks to estimate the political bias of a document or a news outlet; Task 4 asks to determine the level of factuality of a document or a news outlet; and Task 5 is about identifying authorities that should be trusted to verify a contended claim.
KW - Disinformation
KW - Misinformation
KW - Bias
Y1 - 2023
SN - 978-3-031-28241-6
U6 - https://doi.org/10.1007/978-3-031-28241-6_59
SN - 0302-9743
SP - 509
EP - 517
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Chakraborty, Tonmoy
A1 - Elsayed, Tamer
A1 - Nakov, Preslav
A1 - Przybyła, Piotr
A1 - Struß, Julia Maria
A1 - Haouari, Fatima
A1 - Hasanain, Maram
A1 - Ruggeri, Federico
A1 - Song, Xingyi
A1 - Suwaileh, Reem
ED - Goharian, Nazli
ED - Tonellotto, Nicola
ED - He, Yulan
ED - Lipani, Aldo
ED - McDonald, Graham
ED - Macdonald, Craig
ED - Ounis, Iadh
T1 - The CLEF-2024 CheckThat! Lab
BT - Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness
T2 - Advances in Information Retrieval
N2 - The first five editions of the CheckThat! lab focused on the main tasks of the information verification pipeline: check-worthiness, evidence retrieval and pairing, and verification. Since the 2023 edition, it has been focusing on new problems that can support research and decision making during the verification process. In this new edition, we focus on new problems and, for the first time, we propose six tasks in fifteen languages (Arabic, Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish, Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1 on estimating check-worthiness (the only task present in all CheckThat! editions), Task 2 on identifying subjectivity (a follow-up of the CheckThat! 2023 edition), Task 3 on identifying persuasion (a follow-up of SemEval 2023), Task 4 on detecting the hero, the villain, and the victim in memes (a follow-up of CONSTRAINT 2022), Task 5 on Rumor Verification using Evidence from Authorities (a first), and Task 6 on the robustness of credibility assessment with adversarial examples (a first).
These tasks represent challenging classification and retrieval problems at the document and at the span level, including multilingual and multimodal settings.
KW - Disinformation
KW - Subjectivity
KW - Bias
KW - Factuality
Y1 - 2024
SN - 978-3-031-56069-9
U6 - https://doi.org/10.1007/978-3-031-56069-9_62
SN - 0302-9743
SP - 449
EP - 458
PB - Springer International Publishing
CY - Cham
ER -
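For orientation, the evaluation measures named throughout the abstracts above - mean average precision and precision at rank k for the ranking tasks, macro-F1 for the classification tasks - can be computed as follows. This is a minimal Python sketch under simplifying assumptions (binary 0/1 relevance lists, plain label lists); the function names and data shapes are illustrative, not the labs' official scorers.

def precision_at_k(ranked_rel, k):
    # ranked_rel: 0/1 relevance of each retrieved item, best-ranked first
    return sum(ranked_rel[:k]) / k

def average_precision(ranked_rel):
    # mean of precision@i over the ranks i at which a relevant item occurs
    hits, total = 0, 0.0
    for i, rel in enumerate(ranked_rel, start=1):
        if rel:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

def mean_average_precision(runs):
    # runs: one ranked 0/1 relevance list per query (e.g., per input claim)
    return sum(average_precision(r) for r in runs) / len(runs)

def macro_f1(gold, pred):
    # unweighted mean of per-class F1, so rare classes weigh as much as frequent ones
    f1s = []
    for c in set(gold):
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Example with made-up data: two ranked runs and a binary classification
print(mean_average_precision([[1, 0, 1], [0, 1]]))                    # 0.6667
print(macro_f1(["true", "false", "true"], ["true", "true", "true"]))  # 0.4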