TY - CHAP
A1 - Struß, Julia Maria
A1 - Ruggeri, Federico
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Dimitrov, Dimitar
A1 - Galassi, Andrea
A1 - Pachov, Georgi
A1 - Koychev, Ivan
A1 - Nakov, Preslav
A1 - Siegel, Melanie
A1 - Wiegand, Michael
A1 - Hasanain, Maram
A1 - Suwaileh, Reem
A1 - Zaghouani, Wajdi
ED - Faggioli, Guglielmo
ED - Ferro, Nicola
ED - Galuščáková, Petra
ED - Seco de Herrera, Alba García
T1 - Overview of the CLEF-2024 CheckThat! Lab Task 2 on Subjectivity in News Articles
BT - Notebook for the CheckThat! Lab at CLEF 2024
T2 - CLEF 2024 Working Notes : Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024)
N2 - We present an overview of Task 2 of the seventh edition of the CheckThat! lab at the 2024 iteration of the Conference and Labs of the Evaluation Forum (CLEF). The task focuses on subjectivity detection in news articles and was offered in five languages: Arabic, Bulgarian, English, German, and Italian, as well as in a multilingual setting. The datasets for each language were carefully curated and annotated, comprising over 10,000 sentences from news articles. The task challenged participants to develop systems capable of distinguishing between subjective statements (reflecting personal opinions or biases) and objective ones (presenting factual information) at the sentence level. A total of 15 teams participated in the task, submitting 36 valid runs across all language tracks. The participants used a variety of approaches, with transformer-based models being the most popular choice. Strategies included fine-tuning monolingual and multilingual models, and leveraging English models with automatic translation for the non-English datasets. Some teams also explored ensembles, feature engineering, and techniques such as few-shot learning and in-context learning with large language models. The evaluation was based on the macro-averaged F1 score. The results varied across languages, with the best performance achieved for Italian and German, followed by English. The Arabic track proved particularly challenging, with no team surpassing an F1 score of 0.50. This task contributes to the broader goal of enhancing the reliability of automated content analysis in the context of misinformation detection and fact-checking. The paper provides detailed insights into the datasets, participant approaches, and results, offering a benchmark for the current state of subjectivity detection across multiple languages.
KW - Misinformation
KW - Newspaper article
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0074-3740-3
UR - https://ceur-ws.org/Vol-3740/paper-25.pdf
SN - 1613-0073
SP - 287
EP - 298
CY - France
ER -

TY - CHAP
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Chakraborty, Tanmoy
A1 - Elsayed, Tamer
A1 - Nakov, Preslav
A1 - Przybyła, Piotr
A1 - Struß, Julia Maria
A1 - Haouari, Fatima
A1 - Hasanain, Maram
A1 - Ruggeri, Federico
A1 - Song, Xingyi
A1 - Suwaileh, Reem
ED - Goharian, Nazli
ED - Tonellotto, Nicola
ED - He, Yulan
ED - Lipani, Aldo
ED - McDonald, Graham
ED - Macdonald, Craig
ED - Ounis, Iadh
T1 - The CLEF-2024 CheckThat! Lab
BT - Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness
T2 - Advances in Information Retrieval
N2 - The first five editions of the CheckThat! lab focused on the main tasks of the information verification pipeline: check-worthiness, evidence retrieval and pairing, and verification. Since the 2023 edition, it has been focusing on new problems that can support research and decision making during the verification process. In this new edition, we focus on new problems and, for the first time, we propose six tasks in fifteen languages (Arabic, Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish, Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1 on estimation of check-worthiness (the only task that has been present in all CheckThat! editions), Task 2 on identification of subjectivity (a follow-up of the CheckThat! 2023 edition), Task 3 on identification of persuasion (a follow-up of SemEval 2023), Task 4 on detection of hero, villain, and victim from memes (a follow-up of CONSTRAINT 2022), Task 5 on rumor verification using evidence from authorities (a first), and Task 6 on robustness of credibility assessment with adversarial examples (a first). These tasks represent challenging classification and retrieval problems at the document and at the span level, including multilingual and multimodal settings.
KW - Disinformation
KW - Subjectivity
KW - Bias
KW - Facticity
Y1 - 2024
SN - 978-3-031-56069-9
U6 - https://doi.org/10.1007/978-3-031-56069-9_62
SN - 0302-9743
SP - 449
EP - 458
PB - Springer International Publishing
CY - Cham
ER -

TY - CHAP
A1 - Barrón-Cedeño, Alberto
A1 - Alam, Firoj
A1 - Struß, Julia Maria
A1 - Nakov, Preslav
A1 - Chakraborty, Tanmoy
A1 - Elsayed, Tamer
A1 - Przybyła, Piotr
A1 - Caselli, Tommaso
A1 - Da San Martino, Giovanni
A1 - Haouari, Fatima
A1 - Hasanain, Maram
A1 - Li, Chengkai
A1 - Piskorski, Jakub
A1 - Ruggeri, Federico
A1 - Song, Xingyi
A1 - Suwaileh, Reem
ED - Goeuriot, Lorraine
ED - Mulhem, Philippe
ED - Quénot, Georges
ED - Schwab, Didier
ED - Di Nunzio, Giorgio Maria
ED - Soulier, Laure
ED - Galuščáková, Petra
ED - Seco de Herrera, Alba García
ED - Faggioli, Guglielmo
ED - Ferro, Nicola
T1 - Overview of the CLEF-2024 CheckThat! Lab
BT - Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness
T2 - Experimental IR Meets Multilinguality, Multimodality, and Interaction : 15th International Conference of the CLEF Association, CLEF 2024, Grenoble, France, September 9–12, 2024, Proceedings, Part II
N2 - We describe the seventh edition of the CheckThat! lab, part of the 2024 Conference and Labs of the Evaluation Forum (CLEF). Previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, identifying previously fact-checked claims, supporting evidence retrieval, and claim verification. In this edition, we introduced some new challenges, offering six tasks in fifteen languages (Arabic, Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish, Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1 on estimation of check-worthiness (the only task that has been present in all CheckThat! editions), Task 2 on identification of subjectivity (a follow-up of the CheckThat! 2023 edition), Task 3 on identification of the use of persuasion techniques (a follow-up of SemEval 2023), Task 4 on detection of hero, villain, and victim from memes (a follow-up of CONSTRAINT 2022), Task 5 on rumor verification using evidence from authorities (a new task), and Task 6 on robustness of credibility assessment with adversarial examples (a new task). These are challenging classification and retrieval problems at the document and at the span level, including multilingual and multimodal settings. This year, CheckThat! was one of the most popular labs at CLEF-2024 in terms of team registrations, with 130 registered teams; more than one-third of them (a total of 46) actually participated.
KW - Disinformation
KW - Misinformation
Y1 - 2024
SN - 978-3-031-71907-3
U6 - https://doi.org/10.1007/978-3-031-71908-0
SP - 28
EP - 52
PB - Springer Nature
CY - Berlin
ER -