FG Rechnernetze und Kommunikationssysteme
TSN Scheduler Benchmarking
(2023)
Time-Sensitive Networking (TSN) disrupts real-time communication technology by making IEEE Ethernet real-time capable. For time-triggered, hard real-time traffic, TSN provides standardized mechanisms to reserve communication paths as well as individual transmission time slots for data frames. By leveraging these means in a precomputed network schedule, TSN allows for bounded end-to-end delays and minimal jitter. As they are not part of the IEEE standard, corresponding scheduling algorithms are an active field of research. Unfortunately, due to differing model assumptions, evaluation setups, and key metrics, a fair comparison of schedulers has so far been impossible. In this paper, we present a systematic and reproducible approach to benchmark TSN schedulers. First, we provide a scheduler taxonomy that enables clustering schedulers by their characteristics. Second, we analyze interactions of input parameters and scheduler results to derive a benchmarking parcours for quantitative comparisons. Finally, we use the approach to benchmark existing schedulers and show subtle interaction effects. This way, our approach enables, for the first time, comparability between schedulers, fueled by the public availability of our benchmarking scenarios.
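To make the underlying scheduling problem concrete, the following toy sketch places periodic streams into non-overlapping transmission slots on shared links. The greedy placement, the unit-size frames, and the common cycle are illustrative assumptions, not any of the schedulers benchmarked in the paper.

```python
# Toy illustration of time-triggered slot assignment: place each periodic
# stream greedily into the earliest offset where none of its links is busy.
# The model (unit-size frames, one common cycle) is a simplifying assumption.

CYCLE = 10  # slots per scheduling cycle (assumed)

def schedule(streams):
    """streams: list of (stream_id, [link, ...]) traversed in consecutive slots."""
    busy = {}      # (link, slot) -> stream_id
    offsets = {}   # stream_id -> chosen start slot
    for sid, links in streams:
        for start in range(CYCLE):
            slots = [(link, (start + hop) % CYCLE) for hop, link in enumerate(links)]
            if all(s not in busy for s in slots):
                for s in slots:
                    busy[s] = sid
                offsets[sid] = start
                break
        else:
            raise ValueError(f"no feasible offset for stream {sid}")
    return offsets

print(schedule([("s1", ["A-B", "B-C"]), ("s2", ["A-B"]), ("s3", ["B-C"])]))
```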
The majority of Web content is delivered by only a few companies that provide Content Delivery Infrastructures (CDIs) such as Content Delivery Networks (CDNs) and cloud hosts. Due to increasing concerns about trends of centralization, empirical studies on the extent and implications of the resulting Internet consolidation are necessary. Thus, we present an empirical view on consolidation of the Web by leveraging datasets from two different measurement platforms. We first analyze Web consolidation around CDIs at the level of landing webpages, before narrowing down the analysis to the level of embedded page resources. The datasets cover (1a) longitudinal measurements of DNS records for 166.5 M Web domains over five years, (1b) measurements of DNS records for the Alexa Top 1 M over a month, and (2) measurements of page loads and renders for 4.3 M webpages, which include data on 392.3 M requested resources. We then define CDI penetration as the ratio of CDI-hosted objects to all measured objects, which we use to quantify consolidation around CDIs. We observe that CDI penetration has nearly doubled since 2015, reaching a lower bound of 15% for all .com, .net, and .org Web domains as of January 2020. Overall, we find a set of six CDIs to deliver the majority of content across all datasets, with these six CDIs being responsible for more than 80% of all 221.9 M CDI-delivered resources (56.6% of all resources in total). We find high dependencies of Web content on a small group of CDIs, in particular for fonts, ads, and trackers, as well as JavaScript resources such as jQuery. We further observe CDIs to play important roles in rolling out IPv6 and TLS 1.3 support. Overall, these observations indicate a potential oligopoly, which brings both benefits and risks to the future of the Web.
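Written out, the penetration metric defined in the abstract is simply the share of measured objects served by a CDI:

```latex
\mathrm{CDI\ penetration} \;=\; \frac{\#\,\text{CDI-hosted objects}}{\#\,\text{all measured objects}}
```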
Characterizing the country-wide adoption and evolution of the Jodel messaging app in Saudi Arabia
(2022)
Social media is subject to constant growth and evolution, yet little is known about its early phases of adoption. To shed light on this aspect, this paper empirically characterizes the initial and country-wide adoption of a new type of social media in Saudi Arabia that happened in 2017. Unlike established social media, the studied network Jodel is anonymous and location-based, forming hundreds of independent communities country-wide whose adoption patterns we compare. We take a detailed and full view from the operator's perspective of the temporal and geographical dimensions of the evolution of these different communities, from their very first months of establishment to saturation. This way, we make the early adoption of a new type of social media visible, a process that is often invisible due to the lack of data covering the first days of a new network.
During the COVID-19 pandemic, many smaller conferences have moved entirely online and larger ones are being held as hybrid events. Even beyond the pandemic, hybrid events reduce the carbon footprint of conference travel and make events more accessible to parts of the research community that have difficulty traveling long distances, while preserving most advantages of in-person gatherings.
While we have developed a solid understanding of how to design virtual events over the last two years, we are still learning how to properly run hybrid events. We present guidelines and considerations spanning technology, organization, and social factors for organizing successful hybrid conferences.
This paper summarizes and extends the discussions held at the Dagstuhl seminar on "Climate Friendly Internet Research" held in July 2021.
Distributed Denial of Service (DDoS) attacks are among the most critical cybersecurity threats, jeopardizing the stability of even the largest networks and services. The existing range of mitigation services predominantly filters at the edge of the Internet, thus creating unnecessary burden for network infrastructures. Consequently, we present IXP Scrubber, a Machine Learning (ML) based system for detecting and filtering DDoS traffic at the core of the Internet at Internet Exchange Points (IXPs) which see large volumes and varieties of DDoS. IXP Scrubber continuously learns DDoS traffic properties from neighboring Autonomous Systems (ASes). It utilizes BGP signals to drop traffic for certain routes (blackholing) to sample DDoS and can thus learn new attack vectors without the operator’s intervention and on unprecedented amounts of training data. We present three major contributions: i) a method to semi-automatically generate arbitrarily large amounts of labeled DDoS training data from IXPs’ sampled packet traces, ii) the novel, controllable, locally explainable and highly precise two-step IXP Scrubber ML model, and iii) an evaluation of the IXP Scrubber ML model, including its temporal and geographical drift, based on data from 5 IXPs covering a time span of up to two years.
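The blackholing-based labeling idea can be pictured with a small sketch: sampled flows whose destination falls into a prefix that a member AS currently blackholes via BGP are taken as DDoS-positive training examples. The flow record fields and prefixes below are placeholders, not the IXP Scrubber implementation.

```python
# Hedged sketch of blackholing-based labeling: traffic toward blackholed
# prefixes is treated as DDoS-positive; everything else as negative.
import ipaddress

def label_samples(flow_samples, blackholed_prefixes):
    nets = [ipaddress.ip_network(p) for p in blackholed_prefixes]
    labeled = []
    for flow in flow_samples:                      # e.g. {"dst": "203.0.113.7", ...}
        dst = ipaddress.ip_address(flow["dst"])
        is_ddos = any(dst in net for net in nets)  # destination under blackhole?
        labeled.append((flow, int(is_ddos)))
    return labeled

samples = [{"dst": "203.0.113.7", "proto": 17, "sport": 123},
           {"dst": "198.51.100.9", "proto": 6,  "sport": 443}]
print(label_samples(samples, ["203.0.113.0/24"]))
```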
Industrial Control Systems (ICS) are critical to our society. Yet they are less studied given their closed nature and the frequent unavailability of data. While a few studies focus on wide-area SCADA systems, e.g., power or gas distribution networks, the mission-critical networks that control power generation have not yet been studied. To address this gap, we perform the first measurement study of Distributed Control Systems (DCS) by analyzing traces from all network levels of several operational power plants. We show that DCS networks feature a rather rich application mix compared to wide-area SCADA networks and that applications and sites can be fingerprinted with statistical means. While traces from operational power plants are hard to obtain, we analyze to what extent more easily accessible training facilities can be used as vantage points. Our study aims to shed light on traffic properties of critical industries that have not yet been analyzed given the lack of data.
Geographic Differences in Social Media Interactions Exist Between Western and Middle-East Countries
(2022)
In this paper, we empirically analyze two instances of an Online Social Messaging App, one Western (DE) and one Middle-Eastern (SA). By comparing the system interactions over time, we identify inherent differences in user engagement. We take a deep dive and shed light onto differences in user attention shifts and showcase their structural implications for the user experience. Our main findings show that, in comparison to their German counterparts, the Saudi communities prefer creating content in longer conversations, while voting more conservatively.
We study the extent to which emoji can be used to add interpretability to embeddings of text and emoji. To do so, we extend the POLAR framework, which transforms word embeddings into interpretable counterparts, and apply it to word-emoji embeddings trained on four years of messaging data from the Jodel social network. We devise a crowdsourced human judgement experiment to study, across six use cases and evaluating against a words-only baseline, what role emoji can play in adding interpretability to word embeddings. That is, we use a revised POLAR approach that interprets words and emoji with words, emoji, or both according to human judgement. We find statistically significant trends demonstrating that emoji can be used to interpret other emoji very well.
In this paper, we study what users talk about in a plethora of independent hyperlocal and anonymous online communities in a single country: Saudi Arabia (KSA). We base this perspective on performing a content classification of the Jodel network in the KSA. To do so, we first contribute a content classification schema that assesses both the intent (why) and the topic (what) of posts. We use the schema to label 15k randomly sampled posts and further classify the top 1k hashtags. We observe a rich set of benign (yet at times controversial in conservative regimes) intents and topics that dominantly address information requests, entertainment, or dating/flirting. By comparing two large cities (Riyadh and Jeddah), we further show that hyperlocality leads to shifts in topic popularity between local communities. By evaluating votes (content appreciation) and replies (reactions), we show that the communities react differently to different topics; e.g., entertaining posts are much appreciated through votes, receiving the least replies, while beliefs & politics receive similarly few replies but are controversially voted.
Blocklists constitute a widely used Internet security mechanism to filter undesired network traffic based on IP/domain reputation and behavior. Many blocklists are distributed in open-source form by threat intelligence providers who aggregate and process input from their own sensors, but also from third-party feeds or providers. Despite their wide adoption, many open-source blocklist providers lack clear documentation about their structure, curation process, contents, dynamics, and interrelationships with other providers. In this paper, we perform a transparency and content analysis of 2,093 free and open-source blocklists with the aim of exploring those questions. To that end, we perform a longitudinal 6-month crawling campaign yielding more than 13.5M unique records. This allows us to shed light on their nature, dynamics, inter-provider relationships, and transparency. Specifically, we discuss how the lack of consensus on distribution formats, blocklist labeling taxonomy, content focus, and temporal dynamics creates a complex ecosystem that complicates their combined crawling, aggregation, and use. We also provide observations regarding their generally low overlap as well as acute differences in terms of liveness (i.e., how frequently records get indexed and removed from the list) and the lack of documentation about their data collection processes, nature, and intended purpose. We conclude the paper with recommendations in terms of transparency, accountability, and standardization.
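For the overlap observation, a minimal sketch of a pairwise comparison is the Jaccard similarity between the record sets of two blocklists; the records below are placeholders, not data from the study.

```python
# Jaccard similarity between two blocklists' record sets (0 = disjoint, 1 = identical).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

list_a = {"198.51.100.1", "203.0.113.5", "bad.example"}
list_b = {"203.0.113.5", "evil.example"}
print(f"overlap: {jaccard(list_a, list_b):.2f}")   # 0.25
```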
In this work, we predict user lifetime within the anonymous and location-based social network Jodel in the Kingdom of Saudi Arabia. Jodel's location-based nature leads to the establishment of disjoint communities country-wide and enables, for the first time, the study of user lifetime across a large set of disjoint communities. A user's lifetime is an important measurement for evaluating and steering customer bases, as it can be leveraged to predict churn and possibly apply suitable methods to circumvent potential user losses. We train and test off-the-shelf machine learning techniques with 5-fold cross-validation to predict user lifetime as a regression and as a classification problem, identifying the Random Forest as providing very strong results. Discussing model complexity and quality trade-offs, we also dive deep into a time-dependent feature subset analysis, which does not work very well; easing the classification problem into a binary decision (lifetime longer than a given timespan) enables a practical lifetime predictor with very good performance. We identify implicit similarities across community models according to strong correlations in feature importance. A single country-wide model generalizes the problem and works equally well for any tested community; this overall model internally works similarly to the others, as also indicated by its feature importances.
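A hedged sketch of such an off-the-shelf setup follows, with synthetic data standing in for the per-user features and a hypothetical 30-day threshold for the binary variant; it is not the Jodel dataset or the paper's exact pipeline.

```python
# Random Forest with 5-fold cross-validation, once as a regressor on lifetime
# and once as a binary classifier ("lifetime > T"). Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 8))                    # per-user activity features (assumed)
lifetime_days = rng.gamma(2.0, 30.0, 500)   # synthetic lifetimes in days
T = 30.0                                    # hypothetical classification threshold

reg_scores = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                             X, lifetime_days, cv=5, scoring="r2")
clf_scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                             X, (lifetime_days > T).astype(int), cv=5, scoring="accuracy")
print(reg_scores.mean(), clf_scores.mean())
```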
DDoS attacks remain a major security threat to the continuous operation of Internet edge infrastructures, web services, and cloud platforms. While a large body of research focuses on DDoS detection and protection, to date we have ultimately failed to eradicate DDoS altogether. Worse, the landscape of DDoS attack mechanisms is still evolving, demanding an updated perspective on DDoS attacks in the wild. In this paper, we identify up to 2608 DDoS amplification attacks on a single day by analyzing multiple Tbps of traffic flows at a major IXP with a rich ecosystem of different networks. We observe the prevalence of well-known amplification attack protocols (e.g., NTP, CLDAP), which should no longer exist given the established mitigation strategies. Nevertheless, they account for the largest fraction of DDoS amplification attacks within our observation period, and we witness the emergence of DDoS attacks using recently discovered amplification protocols (e.g., OpenVPN, ARMS, Ubiquiti Discovery Protocol). By analyzing the impact of DDoS on core Internet infrastructure, we show that DDoS can overload backbone capacity and that filtering approaches in prior work omit 97% of the attack traffic.
In this paper, we report on a measurement study by researchers from several institutions that collected and analyzed network data to assess the impact of the first wave of COVID-19 (February-June 2020) on Internet traffic. The datasets from Internet Service Providers, Internet Exchange Points, and academic networks, primarily in Europe, provide a unique view on the changes in Internet traffic due to the pandemic and the lockdown that forced hundreds of millions of citizens to stay and work from home. The analysis shows that Internet traffic increased by about 15-20% within a couple of weeks, an increase that would normally be spread over multiple months. However, traffic during peak hours did not increase by more than 5%. The increase was noticeably higher for specific applications, e.g., remote work applications, teleconferencing, and video on demand, in some cases up to 200%. Overall, however, the Internet reacted well to these unprecedented times.
This tools track paper presents the design of a questionnaire methodology to assess participants' experience of virtual conferences. The survey approach consists of a pre-conference questionnaire assessing participation goals and expectations and a post-conference questionnaire assessing the actual participation and related experiences. It enables a data-driven investigation of participants' expectations, goals, attitudes, actual experiences, and general feedback about virtual conferences. As such, it can help to better understand how virtual conference experiences can be improved in the future and how the virtual format can become a more attractive alternative, also in non-pandemic times. The questionnaire was used at three conferences and two workshops. Although it has not yet been validated, we released it early to foster research on virtual conferences.
United We Stand: Collaborative Detection and Mitigation of Amplification DDoS Attacks at Scale
(2021)
Corona-Warn-App: Tracing the Start of the Official COVID-19 Exposure Notification App for Germany
(2020)
On June 16, 2020, Germany launched an open-source smartphone contact tracing app ("Corona-Warn-App") to help trace SARS-CoV-2 (coronavirus) infection chains. It uses a decentralized, privacy-preserving design based on the Exposure Notification APIs, in which a centralized server is only used to distribute a list of keys of SARS-CoV-2-infected users that is fetched by the app once per day. Its success, however, depends on its adoption. In this poster, we characterize the early adoption of the app using Netflow traces captured directly at its hosting infrastructure. We show that the app generated traffic from all over Germany already on the first day. We further observe that local COVID-19 outbreaks do not result in noticeable traffic increases.
We train word-emoji embeddings on large-scale messaging data obtained from the Jodel online social network. Our dataset contains more than 40 million sentences, of which 11 million sentences are annotated with a subset of the Unicode 13.0 standard Emoji list. We explore semantic emoji associations contained in this embedding by analyzing associations between emojis, between emojis and text, and between text and emojis. Our investigations demonstrate anecdotally that word-emoji embeddings trained on large-scale messaging data can reflect real-world semantic associations. To enable further research we release the Jodel Emoji Embedding Dataset (JEED1488) containing 1488 emojis and their embeddings along 300 dimensions.
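A minimal sketch of this kind of training, assuming gensim's Word2Vec and a toy corpus in which emoji are kept as ordinary tokens; only the 300 dimensions are taken from the abstract, everything else is an illustrative assumption.

```python
# Train word-emoji embeddings on tokenized sentences where emoji are tokens.
from gensim.models import Word2Vec

sentences = [["good", "morning", "☀️"],
             ["so", "funny", "😂"],
             ["love", "this", "❤️", "😂"]]

model = Word2Vec(sentences, vector_size=300, window=5, min_count=1, epochs=10)
print(model.wv.most_similar("😂", topn=3))   # nearest words/emoji to 😂
```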
General Knapsack Bounds of Web Caching Performance Regarding the Properties of each Cacheable Object
(2020)
Caching strategies have been evaluated and compared in many studies, most often via simulation, but also with analytic methods. Knapsack solutions provide a general analytical approach for upper bounds on web caching performance. They assume objects of maximum value/size ratio being selected as cache content, with flexibility to define the caching value. Therefore, the popularity, cost, size, time-to-live restrictions, etc. per object can be included in an overall caching goal, e.g., for reducing delay and/or transport path length in content delivery. The independent request model (IRM) leads to basic knapsack bounds for static optimum cache content. We show that a 2-dimensional (2D) knapsack solution covers arbitrary request patterns, selecting dynamically changing content that yields maximum caching value for any predefined request sequence. Moreover, Belady's optimum strategy for clairvoyant caching is identified as a special case of our 2D-knapsack solution when all objects are unique. We also summarize a comprehensive picture of the demands and efficiency criteria for web caching, including updating speed and overheads. Our evaluations confirm significant performance gaps from LRU to advanced GreedyDual and score-based web caching methods and to the knapsack bounds.
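As a sketch of the basic IRM bound, the static optimum cache content is a textbook 0/1 knapsack over (size, caching value) pairs; the objects below are made up for illustration.

```python
# 0/1 knapsack over (size, value) pairs: the static cache content that
# maximizes total caching value within the capacity constraint.
def optimal_static_cache(objects, capacity):
    """objects: list of (name, size, value); returns (best_value, chosen names)."""
    best = {0: (0.0, [])}                      # used capacity -> (value, names)
    for name, size, value in objects:
        for used, (val, names) in sorted(best.items(), reverse=True):
            if used + size <= capacity and val + value > best.get(used + size, (-1,))[0]:
                best[used + size] = (val + value, names + [name])
    return max(best.values())

objs = [("a", 2, 3.0), ("b", 3, 4.0), ("c", 4, 5.0), ("d", 1, 1.5)]
print(optimal_static_cache(objs, 6))           # -> (8.5, ['a', 'b', 'd'])
```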
Standards govern the SHOULD and MUST requirements for protocol implementers to achieve interoperability. In the case of TCP, which carries the bulk of the Internet's traffic, these requirements are defined in RFCs. While it is known that not all optional features are implemented and that nonconformance exists, one would assume that TCP implementations at least conform to the minimum set of MUST requirements. In this paper, we use Internet-wide scans to show how Internet hosts and paths conform to these basic requirements. We uncover a non-negligible set of hosts and paths that do not adhere to even basic requirements. For example, we observe hosts that do not correctly handle checksums and cases of middlebox interference for TCP options. We identify hosts that drop packets when the urgent pointer is set or that simply crash. Our publicly available results highlight that conformance to even fundamental protocol requirements should not be taken for granted but instead checked regularly.
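One such MUST-level probe can be sketched with scapy: a SYN carrying a deliberately invalid checksum must be silently discarded, so any SYN/ACK in response indicates non-conformance. The target below is a placeholder, and this is a sketch of the idea, not the authors' measurement tool.

```python
# Conformance probe sketch: send a SYN with a bogus TCP checksum and check
# whether the host (wrongly) answers with a SYN/ACK. Requires root privileges.
from scapy.all import IP, TCP, sr1

target, port = "192.0.2.10", 80     # placeholder target

probe = IP(dst=target) / TCP(dport=port, flags="S", chksum=0xdead)
reply = sr1(probe, timeout=2, verbose=False)

if reply is not None and reply.haslayer(TCP) and reply[TCP].flags == "SA":
    print("non-conformant: SYN/ACK despite invalid checksum")
else:
    print("conformant (or unreachable): bad-checksum SYN ignored")
```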
Public Key Infrastructures (PKIs) with their trusted Certificate Authorities (CAs) provide the trust backbone for the Internet: CAs sign certificates which prove the identity of servers, applications, or users. To be trusted by operating systems and browsers, a CA has to undergo lengthy and costly validation processes. Alternatively, trusted CAs can cross-sign other CAs to extend their trust to them. In this paper, we systematically analyze the present and past state of cross-signing in the Web PKI. Our dataset (derived from passive TLS monitors and public CT logs) encompasses more than 7 years and 225 million certificates with 9.3 billion trust paths. We show the benefits and risks of cross-signing. We discuss the difficulty of revoking trusted CA certificates where, worrisomely, cross-signing can result in valid trust paths remaining after revocation; a problem for non-browser software that often blindly trusts all CA certificates and ignores revocations. However, cross-signing also enables fast bootstrapping of new CAs, e.g., Let's Encrypt, and achieves a non-disruptive user experience by providing backward compatibility. Finally, we propose new rules and guidance for cross-signing to preserve its positive potential while mitigating its risks.
Congestion control (CC) is an indispensable component of transport protocols to prevent congestion collapse, as it distributes the available bandwidth among all competing flows, ideally in a fair manner. It thus has a large impact on performance, and there exists a constantly evolving set of CC algorithms, each addressing different performance needs. While the algorithms are commonly tested regarding the problems underlying their implementation, the interaction with existing algorithms is often not considered. Additionally considering the fact that content providers (CPs) such as content distribution networks (CDNs) are known to tune TCP stacks for performance gains, the large assortment of algorithms opens the door for custom parametrization and potentially unfair bandwidth sharing. In this paper, we thus empirically investigate if current Internet traffic generated by CPs still adheres to the conventional understanding of fairness. For this, we compare fairness properties of testbed hosts to actual traffic of six major CPs subject to different queue sizes and queueing disciplines in a home-user setting. Additionally, we investigate how mice and elephant flows from the different CPs interact. We find that some employed CC algorithms lead to significantly asymmetric bandwidth shares and very poor flow completion times for mice flows. Fortunately, AQMs such as FQ_CoDel are able to alleviate such unfairness.
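A common way to quantify the "conventional understanding of fairness" is Jain's fairness index over the throughput shares of competing flows; it is not necessarily the exact metric used in the paper, and the throughput numbers below are made up.

```python
# Jain's fairness index: 1.0 means perfectly equal shares among n flows.
def jain_index(throughputs):
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(jain_index([10.0, 10.0]))   # 1.0   -> fair sharing
print(jain_index([18.0, 2.0]))    # ~0.61 -> strongly asymmetric shares
```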
Domain classification services have applications in multiple areas, including cybersecurity, content blocking, and targeted advertising. Yet, these services are often a black box in terms of their methodology for classifying domains, which makes it difficult to assess their strengths, aptness for specific applications, and limitations. In this work, we perform a large-scale analysis of 13 popular domain classification services on more than 4.4M hostnames. Our study empirically explores their methodologies, scalability limitations, label constellations, and their suitability to academic research as well as other practical applications such as content filtering. We find that the coverage varies enormously across providers, ranging from over 90% to below 1%. All services deviate from their documented taxonomy, hampering sound usage for research. Further, labels are highly inconsistent across providers, who show little agreement over domains, making it difficult to compare or combine these services. We also show how the dynamics of crowd-sourced efforts may be obstructed by scalability and coverage aspects as well as subjective disagreements among human labelers. Finally, through case studies, we showcase that most services are not fit for detecting specialized content for research or content-blocking purposes. We conclude with actionable recommendations on their usage based on our empirical insights and experience. In particular, we focus on how users should handle the significant disparities observed across services both in technical solutions and in research.
Due to the COVID-19 pandemic, many governments imposed lockdowns that forced hundreds of millions of citizens to stay at home. The implementation of confinement measures increased the Internet traffic demands of residential users, in particular for remote working, entertainment, commerce, and education, which, as a result, caused traffic shifts in the Internet core. In this paper, using data from a diverse set of vantage points (one ISP, three IXPs, and one metropolitan educational network), we examine the effect of these lockdowns on traffic shifts. We find that traffic volume increased by 15-20% within about a week; while overall still modest, this constitutes a large increase within such a short time period. However, despite this surge, we observe that the Internet infrastructure is able to handle the new volume, as most traffic shifts occur outside of traditional peak hours. When looking directly at the traffic sources, it turns out that, while hypergiants still contribute a significant fraction of traffic, we see (1) a higher increase in traffic of non-hypergiants, and (2) traffic increases in applications that people use when at home, such as Web conferencing, VPN, and gaming. While many networks see increased traffic demands, in particular those providing services to residential users, academic networks experience major overall decreases. Yet, in these networks, we can observe substantial increases when considering applications associated with remote working and lecturing.
Quality of Experience is traditionally evaluated by using short stimuli, usually representing parts of or single usage episodes. This opens the question of how the overall service perception involving multiple usage episodes can be evaluated, a question of high practical relevance to service operators. Despite initial research on this challenging aspect of multi-episodic perceived quality, the underlying quality formation processes and their factors are still to be discovered. We present a multi-episodic experiment of an Audio-on-Demand service over a usage period of 6 days with 93 participants. Our work directly extends prior work investigating the impact of time between usage episodes. The results show similar effects; also, the recency effect is not statistically significant. In addition, we extend the prediction of multi-episodic judgments by accounting for the observed saturation.
High packet rates at ≥10 Gbit/s challenge the packet processing performance of network stacks. A common solution is to offload (parts of) the user-space packet processing to other execution environments, e.g., into the device driver (kernel-space), the NIC, or even from virtual machines into the host operating system (OS), or any combination of those. While common wisdom states that offloading optimizes performance, neither the benefits nor the negative effects are comprehensively studied. In this paper, we aim to shed light on the benefits and shortcomings of eBPF/XDP-based offloading from user-space to i) the kernel-space or ii) a smart NIC, including VM virtualization. We show that offloading can indeed optimize packet processing, but only if the task is small and optimized for the target environment. Otherwise, offloading can even lead to detrimental performance.
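What "offloading into the kernel via eBPF/XDP" looks like in practice can be sketched with the BCC Python bindings; the interface name is a placeholder and the pass-through program only marks where a real offload would filter or rewrite packets. This is an assumption-laden sketch, not the paper's setup.

```python
# Attach a trivial XDP program (which passes every packet) to a NIC via BCC.
from bcc import BPF            # BCC Python bindings (assumed available)
import time

prog = r"""
#include <uapi/linux/bpf.h>

int xdp_pass_all(struct xdp_md *ctx) {
    return XDP_PASS;           // a real offload would inspect/drop/rewrite here
}
"""

device = "eth0"                              # placeholder interface name
b = BPF(text=prog)                           # compile the eBPF program
fn = b.load_func("xdp_pass_all", BPF.XDP)
b.attach_xdp(device, fn, 0)                  # offload to the kernel's XDP hook
try:
    time.sleep(60)                           # keep it attached for a minute
finally:
    b.remove_xdp(device, 0)                  # detach again
```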
Hidden Treasures - Recycling Large-Scale Internet Measurements to Study the Internet's Control Plane
(2019)
The increasing digitization of power plants, too, requires IT security methods and technologies tailored to this domain. Currently, the mandatory implementation of an appropriate security process, such as an Information Security Management System (ISMS), poses a major challenge for operators of generation plants, energy grids, and other critical infrastructures, because numerous requirements have to be fulfilled during implementation without concrete methods for doing so being available. At the same time, the use of IT security technologies from the office world is hardly possible due to the entirely different properties and requirements of industrial networks. To address this problem, the Brandenburg University of Technology Cottbus-Senftenberg (BTU) and Lausitz Energie Kraftwerke AG (LEAG) have designed and prototypically implemented innovative methods for (1) systematically assessing and (2) improving IT security in highly sensitive networks, which are presented in this contribution together with the associated research projects. To rule out any interference with sensitive power plant equipment, a non-intrusive development approach was followed for all methods; it is characterized by a passive analysis that is fully decoupled from the respective industrial process and is therefore suitable for broad use in networks with high availability requirements.
In recent years, the amount of traffic protected with Transport Layer Security (TLS) has significantly increased, and new protocols such as HTTP/2 and QUIC further foster this emerging trend. However, protecting traffic with TLS has significant impacts on network entities. While the restrictions for middleboxes have been extensively studied, the impact of TLS on clients and servers has been mostly neglected so far. Especially mobile clients in emerging 5G and IoT deployments suffer from significantly increased latency, traffic, and energy overheads when protecting traffic with TLS. In this paper, we address this emerging topic by thoroughly analyzing the impact of TLS on clients and servers and derive opportunities for significantly decreasing the latency of TLS communication and downsizing TLS management traffic, thereby also reducing TLS-induced server load. We propose a protocol-compatible redesign of TLS session management to use these opportunities and showcase their potential based on mobile device traffic and mobile web-browsing traces. These show promising potential for latency improvements of up to 25.8% and energy savings of up to 26.3%.
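The kind of session management being redesigned can be illustrated with stock TLS session resumption in Python's ssl module; this shows the existing mechanism, not the proposed redesign, and the host is a placeholder.

```python
# Standard TLS session resumption: reuse the session from a first connection
# so the second connection can run an abbreviated handshake.
import socket
import ssl

host = "example.org"                         # placeholder server
ctx = ssl.create_default_context()

# First connection: full handshake; keep the negotiated session object.
with ctx.wrap_socket(socket.create_connection((host, 443)),
                     server_hostname=host) as s:
    session = s.session

# Second connection: hand the session back in for resumption.
# (With TLS 1.3, session tickets may only arrive after application data,
#  so resumption is not guaranteed by this minimal example.)
with ctx.wrap_socket(socket.create_connection((host, 443)),
                     server_hostname=host, session=session) as s:
    print("session reused:", s.session_reused)
```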
Fast and Efficient Web Caching Methods Regarding the Size and Performance Measures per Data Object
(2019)
Caching methods have been developed for 50 years for paging in CPU and database systems, and for 25 years for web caching, as main application areas among others. Pages of uniform size are usual in CPU caches, whereas web caches store data chunks of widely varying size. We study the impact of different object sizes on the performance and the overhead of web caching. This entails different caching goals, starting from the byte and object hit ratio to a generalized value hit ratio for optimized costs and benefits of caching regarding traffic engineering (TE), reduced delays, and other QoS measures. The selection of the cache contents turns out to be crucial for web cache efficiency, with awareness of the size and other properties in a score for each object. We introduce a new class of rank exchange caching methods and show how their performance compares to other strategies, with extensions needed to include the size and scores for QoS and TE caching goals. Finally, we derive bounds on the object, byte, and value hit ratio for the independent request model (IRM) based on optimum knapsack solutions of the cache content.
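A compact sketch of a size- and score-aware strategy in the GreedyDual family mentioned above (simplified, and not the paper's rank exchange methods): each object gets priority H = L + score/size, and on eviction the global L rises to the evicted priority, aging out rarely requested objects.

```python
# Simplified GreedyDual-Size: evict the lowest-priority object, then raise
# the inflation value L to that priority so stale entries age out over time.
class GreedyDualSize:
    def __init__(self, capacity):
        self.capacity, self.used, self.L = capacity, 0, 0.0
        self.items = {}            # obj -> (priority, size)

    def request(self, obj, size, score=1.0):
        if obj in self.items:      # hit: refresh priority
            self.items[obj] = (self.L + score / size, size)
            return True
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda o: self.items[o][0])
            self.L = self.items[victim][0]          # inflation step
            self.used -= self.items[victim][1]
            del self.items[victim]
        if size <= self.capacity:  # admit the new object
            self.items[obj] = (self.L + score / size, size)
            self.used += size
        return False

cache = GreedyDualSize(capacity=100)
for obj, size in [("a", 40), ("b", 30), ("a", 40), ("c", 50)]:
    print(obj, "hit" if cache.request(obj, size) else "miss")
# -> a miss, b miss, a hit, c miss
```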
Current developments in digitization and Industry 4.0 bring new challenges for automation systems. In order to enable interoperability and vertical integration with corporate management systems, these networks have evolved from formerly proprietary solutions to the application of Ethernet-based communication and Internet standards. This development is accompanied by an increase in the number of threats. Although the most critical IT protection objective for automation systems is availability, usually no security mechanisms have been integrated into automation protocols, and Ethernet itself offers no protection by design for these protocols. One of the most popular real-time protocols for industrial applications is Profinet IO. In this paper, we describe a Denial-of-Service attack on Profinet IO that exploits a vulnerability in the Discovery and Basic Configuration Protocol (DCP), interrupts the Application Relationship between an IO Controller and an IO Device, and thus prevents the system from being repaired by the operator. The attack combines port stealing with the sending of forged DCP packets and causes a system downtime which, in affected production networks, would likely lead to serious financial damage and, in the case of critical infrastructures, even poses a high risk to the supply of society. We demonstrate the practical feasibility of the attack using realistic hardware and scenarios and discuss its significance for other setups as well.
Low-rate and low-power wireless communications are still the main drivers for innovative industrial automation and the Internet of Things (IoT). Physical mobility is one of their most important challenges. Common wireless technologies and protocols, e.g., WirelessHART and ISA100.11a Wireless for industrial process plants, ZigBee for building automation, or 6LoWPAN and 6TiSCH in the context of the IoT, are based on the IEEE 802.15.4 standard. Event-based simulation is the method of choice for analyzing network protocols and algorithmic applications of such distributed sensor applications. Performance measurements and holistic evaluations, however, are greatly influenced by the underlying hardware resources, physical layer protocols, and radio channel conditions, which are usually not considered or are highly abstracted in network simulations. In this paper, we present SEmulate, a hybrid system for seamless (network) simulation and hardware-based emulation of wireless sensor networks based on the IEEE 802.15.4 protocol standard, which takes hardware aspects into account by applying a Hardware-in-the-Loop (HIL) approach.