TY - GEN
A1 - Schmidt, Lennard
A1 - Dost, Florian
A1 - Maier, Erik
T1 - Filtering Survey Responses from Crowdsourcing Platforms: Current Heuristics and Alternative Approaches
T2 - International Conference on Information Systems
N2 - Information Systems research continues to rely on survey participants from crowdsourcing platforms (e.g., Amazon MTurk). Satisficing behavior of these survey participants may reduce attention and threaten validity. To address this, the current research paradigm mandates excluding participants through filtering heuristics (e.g., time, instructional manipulation checks). Yet, both the selection of the filter and the filtering threshold are not standardized. This flexibility may lead to suboptimal filtering and potentially “p-hacking”, as researchers can pick the most “successful” filter. This research is the first to test a comprehensive set of established and new filters against key metrics (validity, reliability, effect size, power). Additionally, we introduce a multivariate machine learning approach to identify inattentive participants. We find that while filtering heuristics require high filter levels (33% or 66% of participants), machine learning filters are often superior, especially at lower filter levels. Their “black box” character may also help prevent strategic filtering.
KW - Survey Research
KW - Filtering
KW - Amazon MTurk
KW - Reliability
KW - Validity
KW - Effect Size
Y1 - 2021
UR - https://opus4.kobv.de/opus4-UBICO/frontdoor/index/index/docId/27513
UR - https://aisel.aisnet.org/icis2019/research_methods/research_methods/8
ER -