Blocklists constitute a widely used Internet security mechanism to filter undesired network traffic based on IP/domain reputation and behavior. Many blocklists are distributed in open-source form by threat intelligence providers who aggregate and process input from their own sensors as well as from third-party feeds or providers. Despite their wide adoption, many open-source blocklist providers lack clear documentation about their structure, curation process, contents, dynamics, and interrelationships with other providers. In this paper, we perform a transparency and content analysis of 2,093 free and open-source blocklists with the aim of exploring these questions. To that end, we perform a longitudinal 6-month crawling campaign yielding more than 13.5M unique records. This allows us to shed light on their nature, dynamics, inter-provider relationships, and transparency. Specifically, we discuss how the lack of consensus on distribution formats, blocklist labeling taxonomy, content focus, and temporal dynamics creates a complex ecosystem that complicates their combined crawling, aggregation, and use. We also provide observations regarding their generally low overlap, acute differences in liveness (i.e., how frequently records are added to and removed from a list), and the lack of documentation about their data collection processes, nature, and intended purpose. We conclude the paper with recommendations in terms of transparency, accountability, and standardization.
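To make the overlap and liveness notions concrete, the following is a minimal Python sketch, not the paper's actual pipeline: it computes pairwise Jaccard overlap between blocklists and the fraction of records added and removed between consecutive snapshots of a list. All feed names and records below are hypothetical.

```python
# Illustrative sketch: overlap and liveness metrics over blocklist
# snapshots. A snapshot maps each list name to its set of records.
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two record sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0


def pairwise_overlap(lists: dict[str, set]) -> dict[tuple[str, str], float]:
    """Overlap between every pair of blocklists on a single day."""
    return {(x, y): jaccard(lists[x], lists[y])
            for x, y in combinations(sorted(lists), 2)}


def churn(day_t: set, day_t1: set) -> tuple[float, float]:
    """Liveness proxy: fraction of records added and removed
    between two consecutive snapshots of the same list."""
    added = len(day_t1 - day_t) / max(len(day_t1), 1)
    removed = len(day_t - day_t1) / max(len(day_t), 1)
    return added, removed


# Hypothetical example data (not drawn from the crawled feeds).
snapshot = {
    "feed_a": {"1.2.3.4", "5.6.7.8", "evil.example"},
    "feed_b": {"5.6.7.8", "bad.example"},
}
print(pairwise_overlap(snapshot))
print(churn({"1.2.3.4", "5.6.7.8"}, {"5.6.7.8", "evil.example"}))
```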
Domain classification services have applications in multiple areas, including cybersecurity, content blocking, and targeted advertising. Yet, these services are often a black box in terms of their methodology for classifying domains, which makes it difficult to assess their strengths, aptness for specific applications, and limitations. In this work, we perform a large-scale analysis of 13 popular domain classification services on more than 4.4M hostnames. Our study empirically explores their methodologies, scalability limitations, label constellations, and their suitability to academic research as well as other practical applications such as content filtering. We find that coverage varies enormously across providers, ranging from over 90% to below 1%. All services deviate from their documented taxonomy, hampering sound usage for research. Further, labels are highly inconsistent across providers, who show little agreement over domains, making it difficult to compare or combine these services. We also show how the dynamics of crowd-sourced efforts may be obstructed by scalability and coverage issues as well as subjective disagreements among human labelers. Finally, through case studies, we show that most services are not fit for detecting specialized content for research or content-blocking purposes. We conclude with actionable recommendations on their usage based on our empirical insights and experience. In particular, we focus on how users should handle the significant disparities observed across services, both in technical solutions and in research.
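As a companion illustration of the coverage and label-agreement measures discussed above, here is a minimal Python sketch, again not the services' actual APIs or the study's code: coverage is the fraction of hostnames a provider assigns any category to, and agreement is the share of jointly labeled hostnames on which two providers' (already normalized) categories match. All provider names, hostnames, and labels are hypothetical.

```python
# Illustrative sketch: coverage and pairwise label agreement across
# classification providers. Each provider maps hostname -> category,
# with None meaning the provider returned no label.
from itertools import combinations


def coverage(labels: dict[str, str | None]) -> float:
    """Fraction of hostnames a provider assigns any category to."""
    return sum(v is not None for v in labels.values()) / max(len(labels), 1)


def agreement(a: dict[str, str | None], b: dict[str, str | None]) -> float:
    """Share of hostnames labeled by both providers on which their
    normalized categories match."""
    common = [h for h in a if a[h] is not None and b.get(h) is not None]
    if not common:
        return 0.0
    return sum(a[h] == b[h] for h in common) / len(common)


# Hypothetical example data (not drawn from the studied services).
providers = {
    "svc_a": {"news.example": "news", "shop.example": "shopping"},
    "svc_b": {"news.example": "news", "shop.example": "business"},
}
for name, labels in providers.items():
    print(name, f"coverage={coverage(labels):.0%}")
for x, y in combinations(sorted(providers), 2):
    print(x, y, f"agreement={agreement(providers[x], providers[y]):.0%}")
```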