TY  - RPRT
A1  - Bean, Andrew M.
A1  - Kearns, Ryan Othniel
A1  - Romanou, Angelika
A1  - Hafner, Franziska Sofia
A1  - Mayne, Harry
A1  - Batzner, Jan
A1  - Foroutan, Negar
A1  - Schmitz, Chris
A1  - Korgul, Karolina
A1  - Batra, Hunar
A1  - Deb, Oishi
A1  - Beharry, Emma
A1  - Emde, Cornelius
A1  - Foster, Thomas
A1  - Gausen, Anna
A1  - Grandury, María
A1  - Han, Simeng
A1  - Hofmann, Valentin
A1  - Ibrahim, Lujain
A1  - Kim, Hazel
A1  - Kirk, Hannah Rose
A1  - Lin, Fangru
A1  - Liu, Gabrielle Kaili-May
A1  - Luettgau, Lennart
A1  - Magomere, Jabez
A1  - Rystrøm, Jonathan
A1  - Sotnikova, Anna
A1  - Yang, Yushi
A1  - Zhao, Yilun
A1  - Bibi, Adel
A1  - Bosselut, Antoine
A1  - Clark, Ronald
A1  - Cohan, Arman
A1  - Foerster, Jakob
A1  - Gal, Yarin
A1  - Hale, Scott A.
A1  - Raji, Inioluwa Deborah
A1  - Summerfield, Christopher
A1  - Torr, Philip H. S.
A1  - Ududec, Cozmin
A1  - Rocher, Luc
A1  - Mahdi, Adam
T1  - Measuring what Matters: Construct Validity in Large Language Model Benchmarks
N2  - Evaluating large language models (LLMs) is crucial for both assessing their capabilities and identifying safety or robustness issues prior to deployment. Reliably measuring abstract and complex phenomena such as ‘safety’ and ‘robustness’ requires strong construct validity, that is, having measures that represent what matters to the phenomenon. With a team of 29 expert reviewers, we conduct a systematic review of 445 LLM benchmarks from leading conferences in natural language processing and machine learning. Across the reviewed articles, we find patterns related to the measured phenomena, tasks, and scoring metrics that undermine the validity of the resulting claims. To address these shortcomings, we provide eight key recommendations and detailed actionable guidance to researchers and practitioners developing LLM benchmarks.
Y1  - 2025
U6  - https://doi.org/10.48550/arXiv.2511.04703
PB  - arXiv
ER  -