Ad avoidance (e.g., “blinding out” digital ads) is a substantial problem for advertisers. Avoiding mobile banner ads differs from active ad avoidance in nonmobile (desktop) settings, because mobile phone users interact with ads to avoid them: (1) They classify new content at the bottom of their screens; if they see an ad, they (2) scroll so that it is out of the locus of attention and (3) position it at a peripheral location at the top of the screen while focusing their attention on the (non-ad) content in the screen center. Introducing viewport logging to marketing research, we capture granular ad-viewing patterns from users’ screens (i.e., viewports). While mobile users’ ad-viewing patterns are concave over the viewport (with more time at the periphery than in the screen center), viewing patterns on desktop computers are convex (most time in the screen center). Consequently, we show that the effect of viewing time on recall depends on the position of an ad in interaction with the device. An eye-tracking study and an experiment show that 43% to 46% of embedded mobile banner ads are likely to suffer from ad avoidance, and that ad recall is 6 to 7 percentage points lower on mobile phones (versus desktop).
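As an illustration of the viewport-logging idea described above (not the authors' actual instrumentation), the following Python sketch aggregates hypothetical dwell-time logs into per-device viewing-time profiles over vertical screen bands; all column names and values are assumptions.

```python
# Minimal sketch: turn viewport-log events into viewing-time profiles by device.
# Column names (device, viewport_band, dwell_ms) are illustrative assumptions,
# not the authors' actual logging schema.
import pandas as pd

# Each row: one logged interval during which an ad was visible in the viewport.
logs = pd.DataFrame({
    "device":        ["mobile", "mobile", "mobile", "desktop", "desktop", "desktop"],
    "viewport_band": ["top", "center", "bottom", "top", "center", "bottom"],
    "dwell_ms":      [820, 310, 540, 200, 950, 260],
})

# Share of total ad-viewing time spent in each vertical band, separately by
# device; a concave profile (more time at the periphery) would show up for
# mobile, a convex one (peak in the screen center) for desktop.
profile = (
    logs.groupby(["device", "viewport_band"])["dwell_ms"].sum()
        .groupby(level="device")
        .transform(lambda s: s / s.sum())
)
print(profile)
```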
Today’s retailers face a strategic imperative to integrate their channels. Some have implemented electronic shelf labels (ESL) to replace paper tags, technologically enabling the omnichannel transformation by aligning the presentation of price and product information between online and offline channels. However, consumer reactions to ESL remain unexplored. They could be positive or negative: on the one hand, the fear of frequent price changes, a known phenomenon in e-commerce, could spread to offline channels and reduce consumer purchase intent and overall revenue; on the other hand, ESL could prevent showrooming by signaling price consistency and offering consistent information (e.g., including reviews) across the online and offline channels. We explore a retailer data set that allows us to isolate the “mere ESL effect”: the retailer’s pricing strategy remained unchanged throughout the introduction of ESL (i.e., no dynamic pricing), but the presentation of price and product information was integrated through ESL. A difference-in-differences analysis establishes that revenue in product categories in which ESL was introduced grows at the expense of those product categories in which it was not. Visitor numbers are not affected by introducing ESL. This finding supports the adoption of e-commerce capabilities in brick-and-mortar stores, as it could help prevent shopper behavior aimed at exploiting channel differences (i.e., showrooming for price or more information).
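A minimal sketch of the difference-in-differences logic referenced in the abstract, assuming a simple category-by-period setup; the variable names and figures below are illustrative, not the retailer's data.

```python
# Hedged sketch of a difference-in-differences estimate: the interaction
# coefficient captures the revenue change in ESL categories beyond the
# common time trend ("mere ESL effect"). Data and names are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "revenue":           [100, 102, 98, 101, 99, 120, 97, 96],
    "esl_category":      [1, 1, 0, 0, 1, 1, 0, 0],   # category received ESL
    "post_introduction": [0, 0, 0, 0, 1, 1, 1, 1],   # after ESL rollout
})

model = smf.ols("revenue ~ esl_category * post_introduction", data=df).fit()
print(model.params["esl_category:post_introduction"])  # DiD estimate
```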
Promoting Price Discounts across Channels: The Role of Discount Level and Product Sales Frequency
(2023)
Price discounts are the most common promotional activity, and retailers have to choose whether to further promote their price discounts with ads in offline (e.g., print) or online (e.g., banner ads) channels. However, retail managers lack guidance on which promotional channel (offline or online) would best support price discounts and which types of products (high or low sales frequency) would benefit most. Using a field experiment, we disentangle the interacting effects of price discounts (at various discount levels) and their supporting promotional channel. We find that digital promotions of price discounts are more effective than non-digital print campaigns at increasing sales beyond the base price discount effect. In addition, the product’s sales frequency matters: relatively, digital promotions best support price-discounted low-sales-frequency products, whereas steeply discounted high-sales-frequency products receive additional support from offline ads.
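The abstract does not report the authors' model specification, so the following is only a hedged sketch of how the interacting effects of discount level, promotional channel, and sales frequency could be disentangled with interaction terms; the column names and numbers are invented for illustration.

```python
# Illustrative three-way interaction model (not the authors' specification):
# does the effect of the discount level differ by promotional channel and by
# the product's sales frequency?
import pandas as pd
import statsmodels.formula.api as smf

sales = pd.DataFrame({
    "discount_pct":   [0, 10, 20] * 4,
    "channel":        ["print"] * 6 + ["digital"] * 6,
    "high_frequency": [0, 0, 0, 1, 1, 1] * 2,
    "units_sold":     [30, 38, 47, 55, 66, 84, 31, 44, 58, 52, 70, 95],
})

fit = smf.ols("units_sold ~ discount_pct * channel * high_frequency",
              data=sales).fit()
print(fit.params)
```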
Challenging the Location Paradigm: Parsimoniously Predicting Store Performance with Urban Scaling
(2021)
Location is considered the most important driver of retail store performance; hence, retailers invest in extensive location research, utilizing expensive, rich data. This research proposes an alternative: a single, freely obtainable predictor of location potential. Drawing on the system scaling literature outside of marketing, we suggest that measures of the urban scale in a store’s trading area are a substitute for a multitude of traditional location measures, such as demographic or socio-economic variables. We demonstrate that our scale measure, the route factor calculated from road map data, performs on par with a common set of traditional predictors in a large dataset of supermarket sales. Moreover, our theory correctly predicts, and the analysis shows, a collinearity problem that has gone unrecognized in traditional store performance models. For generalizability, we validate our approach on a second, variety-store dataset that covers a wider range of location conditions.
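The route factor is not defined in the abstract; a common reading is the detour index, i.e., the ratio of road-network distance to straight-line distance. The sketch below computes it under that assumption, with made-up distances.

```python
# Hedged sketch of one common route-factor (detour index) definition: the ratio
# of road-network travel distance to straight-line distance between a store and
# a point in its trading area. Whether this matches the authors' exact
# operationalization is an assumption; the distances are illustrative.
from math import hypot

def route_factor(network_dist_km: float,
                 x1: float, y1: float, x2: float, y2: float) -> float:
    """Ratio of road-network distance to Euclidean (crow-flies) distance."""
    straight_line = hypot(x2 - x1, y2 - y1)
    return network_dist_km / straight_line

# Store at (0, 0), customer origin at (3, 4) km: straight-line distance 5 km.
# If the shortest road route is 6.5 km, the route factor is 1.3.
print(route_factor(6.5, 0.0, 0.0, 3.0, 4.0))
```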
Many empirical studies filter participants (e.g., for incorrect attention checks or quick responses), especially when using participant pools such as Amazon MTurk. Yet, there is no consensus on whether and how to filter. This might originate from different perspectives on filtering participants: it may be evaluated positively (e.g., as it might be necessary to prevent inattentive participants from biasing results) or negatively (e.g., as it may enable p-hacking). This research aims to bridge these opposing views: first, we empirically compare the effects of different filters and filter levels on the validity, reliability, power, and effect sizes of the results.
Second, we introduce the Filter Curve and our R-package “FiltR” as a means of recognizing filtering that might be used to p-hack results. We suggest that filtering is not per se bad (although some filters decrease reliability and validity), but that researchers should be transparent about how sensitive results are to different filter combinations.
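As a rough illustration of the filter-curve idea (in Python rather than the authors' R package “FiltR”, and with simulated data), the sketch below re-estimates a treatment effect at increasingly strict completion-time filters to show how sensitive the result is to the filter level; all names and numbers are assumptions for demonstration.

```python
# Sketch of a "filter curve": drop the fastest x% of respondents and recompute
# the treatment effect (here: a simple mean difference) at each filter level.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
data = pd.DataFrame({
    "completion_sec": rng.gamma(shape=4.0, scale=60.0, size=n),  # survey duration
    "condition": rng.integers(0, 2, size=n),                     # 0 control, 1 treatment
})
data["outcome"] = 0.3 * data["condition"] + rng.normal(size=n)

for pct in [0, 5, 10, 20, 33]:
    cutoff = np.percentile(data["completion_sec"], pct)
    kept = data[data["completion_sec"] >= cutoff]
    effect = (kept.loc[kept.condition == 1, "outcome"].mean()
              - kept.loc[kept.condition == 0, "outcome"].mean())
    print(f"filter level {pct:>2}%: n = {len(kept):>3}, effect = {effect:.3f}")
```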
Fluent Contextual Image Backgrounds Enhance Mental Imagery and Evaluations of Experience Products
(2018)
Information Systems research continues to rely on survey participants from crowdsourcing platforms (e.g., Amazon MTurk). Satisficing behavior of these survey participants may reduce attention and threaten validity. To address this, the current research paradigm mandates excluding participants through filtering heuristics (e.g., time, instructional manipulation checks). Yet, neither the selection of the filter nor the filtering threshold is standardized. This flexibility may lead to suboptimal filtering and potentially to “p-hacking”, as researchers can pick the most “successful” filter. This research is the first to test a comprehensive set of established and new filters against key metrics (validity, reliability, effect size, power). Additionally, we introduce a multivariate machine learning approach to identify inattentive participants. We find that while filtering heuristics require high filter levels (33% or 66% of participants), machine learning filters are often superior, especially at lower filter levels. Their “black box” character may also help prevent strategic filtering.
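The abstract does not name the specific machine learning model, so the following sketch merely illustrates one possible multivariate filter: an unsupervised outlier detector applied jointly to several response-behavior features, with the contamination parameter playing the role of the filter level; the features and data are invented.

```python
# Hedged illustration of a multivariate "machine learning filter": flag a
# chosen share of participants as likely inattentive based on several
# behavioral features at once. Feature names are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-participant features: completion time (sec), straight-lining score,
# and number of failed instructional manipulation checks.
features = np.column_stack([
    rng.gamma(4.0, 60.0, size=300),
    rng.uniform(0.0, 1.0, size=300),
    rng.integers(0, 3, size=300),
])

# "contamination" sets the filter level, i.e., the share of participants removed.
detector = IsolationForest(contamination=0.10, random_state=0)
flags = detector.fit_predict(features)   # -1 = flagged as likely inattentive
print(f"flagged {np.sum(flags == -1)} of {len(flags)} participants")
```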