High packet rates at ≥ 10 Gbit/s challenge the packet-processing performance of network stacks. A common solution is to offload (parts of) the user-space packet processing to other execution environments, e.g., into the device driver (kernel space), onto the NIC, or even from virtual machines into the host operating system (OS), or any combination thereof. While common wisdom states that offloading improves performance, neither its benefits nor its negative effects have been studied comprehensively. In this paper, we aim to shed light on the benefits and shortcomings of eBPF/XDP-based offloading from user space to i) the kernel or ii) a smart NIC, including the case of VM virtualization. We show that offloading can indeed speed up packet processing, but only if the task is small and optimized for the target environment. Otherwise, offloading can even degrade performance.
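The abstract above refers to offloading small packet-processing tasks from user space into the kernel's XDP hook via eBPF. As a purely illustrative sketch (not code from the paper), the following minimal XDP program drops UDP packets destined to one hypothetical port and passes everything else on to the regular stack; the port number and function name are assumptions made for this example.

    /* Minimal XDP sketch: offload a small, target-optimized task
     * (dropping UDP packets on an illustrative port) into the kernel. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/in.h>
    #include <linux/ip.h>
    #include <linux/udp.h>
    #include <bpf/bpf_endian.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_drop_port(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;
        if (ip->protocol != IPPROTO_UDP)
            return XDP_PASS;

        struct udphdr *udp = (void *)ip + ip->ihl * 4;
        if ((void *)(udp + 1) > data_end)
            return XDP_PASS;

        /* Drop traffic for one illustrative port; everything else
         * continues up the regular network stack. */
        if (udp->dest == bpf_htons(9999))
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

Such a program is attached to a network interface before the kernel allocates full socket buffers, which is where the potential speed-up of kernel-space offloading comes from.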
Hidden Treasures - Recycling Large-Scale Internet Measurements to Study the Internet's Control Plane
(2019)
Standards define the SHOULD and MUST requirements that protocol implementers have to follow to achieve interoperability. In the case of TCP, which carries the bulk of the Internet's traffic, these requirements are defined in RFCs. While it is known that not all optional features are implemented and nonconformance exists, one would assume that TCP implementations at least conform to the minimum set of MUST requirements. In this paper, we use Internet-wide scans to show how Internet hosts and paths conform to these basic requirements. We uncover a non-negligible set of hosts and paths that do not adhere to even basic requirements. For example, we observe hosts that do not correctly handle checksums and cases of middlebox interference with TCP options. We identify hosts that drop packets when the urgent pointer is set, or that simply crash. Our publicly available results highlight that conformance to even fundamental protocol requirements should not be taken for granted but should instead be checked regularly.
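The checksum handling mentioned above refers to the Internet checksum that TCP endpoints are expected to validate before accepting a segment (for TCP it covers a pseudo-header plus the segment). The following is a minimal, illustrative implementation of the RFC 1071 one's-complement sum, not code from the paper; the function name is an assumption for this sketch.

    /* Illustrative RFC 1071 Internet checksum over an arbitrary buffer.
     * For TCP, the buffer would be the pseudo-header followed by the segment. */
    #include <stddef.h>
    #include <stdint.h>

    static uint16_t inet_checksum(const void *buf, size_t len)
    {
        const uint16_t *p = buf;
        uint32_t sum = 0;

        while (len > 1) {              /* sum 16-bit words */
            sum += *p++;
            len -= 2;
        }
        if (len == 1)                  /* pad an odd trailing byte with zero */
            sum += *(const uint8_t *)p;

        while (sum >> 16)              /* fold carries back into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;         /* one's complement of the sum */
    }

A receiver that accepts segments whose checksum fails this verification is one example of the basic nonconformance the scans look for.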
Estragole is a natural constituent of herbs and spices and of products thereof such as essential oils or herbal teas. After cytochrome P450-catalyzed hydroxylation and subsequent sulfation, estragole acts as a genotoxic hepatocarcinogen, forming DNA adducts in rodent liver. Because of the genotoxic mode of action and the widespread occurrence in food and phytomedicines, a refined risk assessment for estragole is needed. We analyzed the time- and concentration-dependent levels of the DNA adducts N2-(isoestragole-3′-yl)-2′-desoxyguanosine (E3′N2dG) and N6-(isoestragole-3′-yl)-desoxyadenosine (E3′N6dA), reported to be the major adducts formed in rat liver, in rat hepatocytes (pRH) in primary culture after incubation with estragole. DNA adduct levels were measured via UHPLC-ESI-MS/MS using stable isotope dilution analysis. Both adducts were formed in pRH and could already be quantified after an incubation time of 1 h (E3′N6dA at 10 μM, E3′N2dG at 1 μM estragole). E3′N2dG, the main adduct at all incubation times and concentrations, could be detected at estragole concentrations < 0.1 μM after 24 h and < 0.5 μM after 48 h. Adduct levels were highest after 6 h and showed a downward trend at later time points, possibly due to DNA repair and/or apoptosis. While the concentration-response characteristics of adduct formation were apparently linear over the whole concentration range, a strong indication of marked hypo-linearity was obtained when the modeling was based on concentrations < 1 μM only. In the micronucleus assay, no mutagenic potential of estragole was found in HepG2 cells, whereas in HepG2-CYP1A2 cells 1 μM estragole led to a 3.2-fold and 300 μM to a 7.1-fold increase in micronuclei counts. Our findings suggest the existence of a ‘practical threshold’ dose for DNA adduct formation as an initiating key event of the carcinogenicity of estragole, indicating that the default assumption of concentration-response linearity is questionable, at least for the two major adducts studied here.
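To make the linearity question concrete, one purely illustrative way to contrast the default linear extrapolation with a hypo-linear fit (functional forms assumed for this sketch, not taken from the paper) is

    A_lin(c) = \beta\, c
    \qquad \text{vs.} \qquad
    A_hypo(c) = \beta'\, c^{\,n}, \quad n > 1,

where A is the adduct level and c the estragole concentration in μM. For concentrations well below 1 μM, a curve with n > 1 predicts adduct levels far below the linear extrapolation, which is the kind of behaviour a ‘practical threshold’ describes.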
Congestion control (CC) is an indispensable component of transport protocols to prevent congestion collapse as it distributes the available bandwidth among all competing flows, ideally in a fair manner. It thus has a large impact on performance and there exists a constantly evolving set of CC algorithms, each addressing different performance needs. While the algorithms are commonly tested regarding the problems underlying their implementation, the interaction with existing algorithms is often not considered. Additionally considering the fact that content providers (CPs) such as content distribution networks (CDNs) are known to tune TCP stacks for performance gains, the large assortment of algorithms opens the door for custom parametrization and potentially unfair bandwidth sharing. In this paper, we thus empirically investigate if current Internet traffic generated by CPs still adheres to the conventional understanding of fairness. For this, we compare fairness properties of testbed hosts to actual traffic of six major CPs subject to different queue sizes and queueing disciplines in a home-user setting. Additionally, we investigate how mice and elephant flows from the different CPs interact. We find that some employed CC algorithms lead to significantly asymmetric bandwidth shares and very poor flow completion times for mice flows. Fortunately, AQMs such as FQ_CoDel are able to alleviate such unfairness.
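As one concrete illustration of the per-stack tuning the abstract attributes to content providers, Linux exposes a per-socket TCP_CONGESTION option for selecting the CC algorithm. The sketch below (illustrative, not from the paper) sets and reads back the algorithm on a freshly created socket; the chosen name "bbr" is an assumption and has to be available and permitted on the host.

    /* Sketch: select a per-socket congestion control algorithm on Linux. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        const char algo[] = "bbr";   /* e.g. "cubic", "reno", "bbr" */
        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
            perror("setsockopt(TCP_CONGESTION)");

        /* Read back which algorithm the kernel actually applied. */
        char cur[16] = {0};
        socklen_t len = sizeof(cur);
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cur, &len) == 0)
            printf("congestion control: %s\n", cur);

        close(fd);
        return 0;
    }

Because this choice is made per socket by the sender, different content providers can ship different algorithms and parametrizations, which is precisely why their flows may not share a bottleneck fairly.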