One of the most important questions in neutrino astronomy is the composition of the neutrino flux with respect to the neutrino flavor, which gives hints about the neutrino production mechanisms in distant astrophysical objects.
The neutrino flavor composition can be determined by measuring spectra of neutrino properties.
This is one goal pursued by KM3NeT/ARCA, a cubic-kilometer water-Cherenkov detector in the Mediterranean Sea, which is currently being built by the KM3NeT collaboration.
The degree to which ARCA can contribute to this task depends significantly on the knowledge of the atmospheric background and on the performance in classifying events.
The subject of this thesis was to estimate the performance of ARCA with regard to the flavor composition.
Key steps of the analysis were developed and evaluated.
Two main developmental results were achieved.
Firstly, the identification of events into topology-based classes was developed to produce independent subsamples of events for the final spectral fit, i.e. no event is used in two different subsamples.
The spectral fit itself is the second development, which directly yields the value of the flavor composition.
Furthermore, the background estimate of atmospheric muons and neutrinos is one key criterion for the sensitivity to an astrophysical neutrino signal.
Therefore, first steps were initiated to estimate the underlying uncertainties in the atmospheric background with a preliminary full particle simulation.
As part of this work, the atmospheric muon background was examined with respect to the uncertainties in characteristic quantities, such as the muon multiplicity.
To this end, different simulation packages were used and the differences between them were treated as systematic uncertainties.
First, the commonly used MUPAGE package was compared to the newly developed interaction models used in CORSIKA.
The KM3NeT collaboration is currently setting up a large-scale production with CORSIKA.
In the context of this thesis, the values of its input parameters were fixed.
Worth mentioning here is the modeling of the atmospheric conditions above the KM3NeT sites, which turned out to be quite different for ARCA and ORCA because the climate at the ORCA site is more variable with the seasons.
The variation from summer to winter in the atmospheric depth between 20 km and 40 km is around 10% for the ARCA site but around 15% for the ORCA site.
Future analyses will correlate this effect to the muon rate reaching the detectors.
The comparison of different hadronic interaction models as well as of the cosmic-ray flux models showed that they introduce an uncertainty of the order of a few percent.
However, the models themselves are stated with uncertainties of up to 20%.
The event spectrum of low muon multiplicity simulated with MUPAGE is in good agreement with the one simulated with CORSIKA.
The high muon multiplicity bundles are overestimated by MUPAGE with respect to CORSIKA.
In contrast, measurements show that the CORSIKA models underestimate the flux.
Other physical properties do not vary significantly between the different models within their uncertainties.
On the whole, the uncertainty in the energy spectrum in the interval accessible to this study (up to 10^5 GeV) was shown to be of the order of 30%.
Another indispensable ingredient for all physics analyses in KM3NeT is the classification of events.
In this thesis, a neural network with five target classes was developed.
The classes were motivated by the topology of the light distribution initiated by neutrino events inside KM3NeT.
There are three track-like classes.
Each depends on the initial neutrino direction and the first interaction vertex.
Additionally, there is one cascade class and a so-called double-bang class consisting of high-energy tau neutrinos.
The double-bang topology is very rare, and its very low count rate results in a high contamination from atmospheric background and a roughly equal admixture of other misidentified events after the classification.
The other classes show a far better classification accuracy.
For cascades and up-going tracks, a fraction of correctly classified events above 90% was achieved over the complete considered energy interval.
The purity with respect to other neutrinos is likewise above 90%.
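To make the classification step more concrete, the following minimal sketch trains a small five-class softmax network of the general kind described above; the input features, layer sizes and toy labels are invented placeholders for illustration and are not the actual network or inputs used in the thesis.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
n_events, n_features, n_classes = 20000, 32, 5   # five classes: three track-like, cascade, double-bang

X = rng.normal(size=(n_events, n_features)).astype("float32")   # placeholder event features
labels = rng.integers(0, n_classes, n_events)                   # placeholder truth labels from simulation

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),     # one probability per topology class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, batch_size=256, validation_split=0.2, verbose=0)

# Assigning every event to its highest-probability class yields disjoint subsamples for the spectral fit.
predicted_class = model.predict(X[:5], verbose=0).argmax(axis=1)
print(predicted_class)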
Finally, an analysis of the sensitivity of ARCA to the neutrino flavor composition was performed.
To this end, the output of the event identification was used to build subsamples consisting of topologically similar events.
For each subsample, a tailored atmospheric muon rejection was developed and applied.
Two classes showed a signal suitable for a fit algorithm based on the spectral shape of the measurements.
The spectral fitting algorithm uses the expectations as found by simulation and reconstruction to estimate the different contribution of each component for a measurement.
This is done simultaneously for the classes chosen.
A method to calculate confidence intervals in this multidimensional space, based on the approach of Feldman and Cousins, was developed.
Finally, the sensitivity is estimated by a large number of pseudo experiments.
As the uncertainties of the atmospheric neutrino flux and the spectral index have a significant influence, they were fitted simultaneously with the flavor composition.
In this way, the influence is reduced.
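The following sketch illustrates, for a single counting channel only, how Feldman-Cousins-style confidence intervals can be built from pseudo-experiments; the background level and observed count are placeholders, and the actual analysis works in a multidimensional flavor composition space with a full spectral fit.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

def likelihood_ratios(ns, mu, b):
    # Feldman-Cousins ordering ratio R = L(n | mu) / L(n | mu_best), with mu_best = max(n - b, 0)
    mu_best = np.maximum(ns - b, 0.0)
    return poisson.pmf(ns, mu + b) / poisson.pmf(ns, mu_best + b)

def fc_accepted_counts(mu, b, cl=0.90, n_pseudo=5000):
    # Acceptance region for true signal mu, built from pseudo-experiments:
    # keep the cl fraction of outcomes with the highest ordering ratio.
    ns = rng.poisson(mu + b, size=n_pseudo)
    ratios = likelihood_ratios(ns, mu, b)
    r_cut = np.quantile(ratios, 1.0 - cl)
    return set(ns[ratios >= r_cut].tolist())

b, n_obs = 3.0, 5                      # placeholder background expectation and observed count
mu_grid = np.linspace(0.0, 15.0, 61)
interval = [mu for mu in mu_grid if n_obs in fc_accepted_counts(mu, b)]
print(f"approximate 90% CL interval for the signal expectation: [{min(interval):.2f}, {max(interval):.2f}]")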
Several hypotheses on the flavor composition were tested.
The overall shape of exclusion limits is very similar for all of them.
For an assumed equipartition composition, 60% of the flavor composition space can be excluded at the 2σ level after a lifetime of 10 years of ARCA.
Compared with IceCube's predictions, this value is a slight improvement.
However, self-veto effects of neutrinos are not included in this work; including them is expected to increase the sensitivity.
Both the confidence region and the range of favored models in flavor composition space have elongated shapes.
Furthermore, their major half-axes are perpendicular to each other, which makes it possible to reject models even if the confidence interval is quite large relative to the range of possible models.
The search for dark matter forms one of the key scientific goals in contemporary particle and astroparticle physics. The distribution of dark matter in the Milky Way is thought to be radially symmetric and to peak at the Galactic Centre, which makes the latter a very interesting target for the search for dark matter. Models predict the annihilation or decay of dark matter into standard model particles, among them gamma rays with potentially very high energies (VHE, E ≥ 100 GeV). In this energy regime the High Energy Stereoscopic System (H.E.S.S.) operates, which is an array of four imaging atmospheric Cherenkov telescopes (IACT) situated in the Khomas highland in Namibia. H.E.S.S. detects gamma rays with energies between about 100 GeV and several tens of TeV. Furthermore, it has observed the Galactic Centre for over 100 hours. In this thesis an on-off-analysis method based on the Model++ reconstruction, a new and very sensitive reconstruction method, is developed. Detailed systematic studies are presented and used to quantify the uncertainty of the developed analysis method. The on-off-analysis method is then applied to the data taken on the Galactic Centre region and upper limits on the velocity-averaged dark matter annihilation cross-section are derived.
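For orientation, the excess and its significance in a generic on-off analysis are commonly quantified with the Li & Ma (1983) formula; the short sketch below uses invented counts and is not taken from the thesis, which in addition derives detailed systematic uncertainties and dark matter limits.

import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    # Li & Ma (1983), Eq. 17: significance of the excess n_on - alpha * n_off,
    # where alpha is the ratio of on-source to off-source exposure.
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

n_on, n_off, alpha = 130, 1100, 0.1          # hypothetical counts and exposure ratio
excess = n_on - alpha * n_off
print(f"excess = {excess:.1f} events, significance = {li_ma_significance(n_on, n_off, alpha):.2f} sigma")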
Muons in air showers, produced in interactions of pions and kaons with nuclei of the atmosphere, can be used to measure the signal of the seasonal variation. In this work, data of the ANTARES detector are analyzed to search for this effect.
The setup of the ANTARES detector has changed over time. Periods of time are identified and the basic selection of the data used for this work is explained. The efficiency of the detector decreases over time, and it is shown how weights were calculated to compensate for this effect.
Other effects concerning the efficiency of the detector are identified and analysed. It is shown that the number of active optical modules and the baseline also have an influence on the reconstructed events and the stability of data taking, and that for the reconstruction methods a higher baseline results in lower muon rates. Weights are introduced to compensate for the influence of the different characteristics and circumstances during data taking, and their effect on the stability of the muon rate is shown.
Finally, the muon rate is compared to the relative variation of the effective atmospheric temperature and the results are discussed.
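A common way to quantify such a correlation, sketched below with purely synthetic numbers, is a linear fit of the relative muon-rate variation against the relative variation of the effective atmospheric temperature, yielding an effective temperature coefficient (here called alpha_T); the actual rates, temperatures and correction weights are derived in the thesis from ANTARES data.

import numpy as np

rng = np.random.default_rng(0)
t_eff = 220.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 52))       # effective temperature [K], toy annual cycle
rate = 5.0 * (1.0 + 0.5 * (t_eff - t_eff.mean()) / t_eff.mean())  # toy muon rate [Hz] with alpha_T = 0.5 built in
rate += rng.normal(0.0, 0.01, size=rate.size)                     # statistical scatter

x = (t_eff - t_eff.mean()) / t_eff.mean()   # relative temperature variation
y = (rate - rate.mean()) / rate.mean()      # relative rate variation

alpha_T = np.polyfit(x, y, 1)[0]            # slope of the linear correlation
print(f"fitted alpha_T = {alpha_T:.2f}")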
Active galactic nuclei (AGN) are promising candidates for hadronic acceleration. The combination of radio, gamma-ray and neutrino data should give information on their properties, especially concerning the sources of the high-energy cosmic rays. Assuming a temporal correlation of gamma-ray and neutrino emission in AGN, the background of neutrino telescopes can be reduced using gamma-ray lightcurves, thereby enhancing the sensitivity for discovering cosmic neutrino sources. In the present work, a stacked search for a group of AGN with the ANTARES neutrino telescope in the Mediterranean Sea is presented. The selection of AGN is based on the source sample of TANAMI, a multiwavelength observation program (radio to gamma rays) of extragalactic jets south of 30 deg declination. In the analysis, lightcurves of the gamma-ray satellite Fermi are used. In an unbinned maximum-likelihood approach, the test statistic is determined for the background-only case and for the signal-plus-background case. For the investigated 10% of the ANTARES data taken between 01.09.2008 and 30.07.2012, no significant excess is observed, so an upper limit can be set on the total flux of the AGN in the stacked search.
The origin of high-energy cosmic rays has been puzzling since their discovery. Many theories about the sources of these cosmic rays also predict a flux of high-energy cosmic neutrinos. Recently, the existence of such a high-energy neutrino flux has been confirmed, but the location and nature of its sources remain unknown. The ANTARES neutrino telescope was built in the Mediterranean Sea, 40 km off the French coast near Toulon at a depth of 2475 meters, to help answer this and other questions. It consists of a three-dimensional array of 885 photomultiplier tubes that detect the Cherenkov light emitted by secondary particles, which are produced in interactions between neutrinos and nuclei in the water.
The identification and reconstruction of the observed neutrino events constitute challenging tasks. Parts of this thesis deal with algorithmic approaches
to improve these tasks using pattern recognition. The first application is the
suppression of undesired background by a classification algorithm. The second
approach is the selection of the best available direction reconstruction for each
neutrino.
The main focus of this thesis lies on a new method to evaluate the spatial distribution of the observed neutrinos. While most approaches test one specific hypothesis for a specific source, derived from theory or other measurements, this search refrains from optimizing for individual source hypotheses and instead tries to detect the most pronounced density fluctuation in the spatial distribution, regardless of its specific position, size, shape or internal distribution, in as unbiased a manner as possible. To achieve this, the statistical likelihood of the observed neutrino density is evaluated on multiple scales, up to distances between events of 180°. To recognize a potential cosmic neutrino signal, regions with the most pronounced deviations are identified and compared to the expectations from a random background hypothesis. The strength of such a flexible, model-independent search is not the sensitivity to a specific source hypothesis, but rather the ability to also detect unexpected hypotheses that can then be analyzed in more detail.
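The thesis evaluates a likelihood on multiple angular scales; the sketch below only illustrates the underlying idea with a simpler pair-counting statistic compared against randomized pseudo-samples, and all events are isotropic toy placeholders rather than reconstructed neutrinos.

import numpy as np

rng = np.random.default_rng(1)

def random_directions(n):
    # Isotropic unit vectors on the sphere (toy stand-in for reconstructed events).
    ra = rng.uniform(0.0, 2.0 * np.pi, n)
    dec = np.arcsin(rng.uniform(-1.0, 1.0, n))
    return np.column_stack([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])

def pair_counts(vecs, scales_deg):
    # Number of event pairs separated by less than each angular scale.
    cosang = np.clip(vecs @ vecs.T, -1.0, 1.0)
    sep = np.degrees(np.arccos(cosang))
    iu = np.triu_indices(len(vecs), k=1)          # unique pairs only
    return np.array([(sep[iu] < s).sum() for s in scales_deg])

events = random_directions(200)
scales = np.array([5, 20, 60, 180])               # angular scales in degrees

observed = pair_counts(events, scales)
background = np.array([pair_counts(random_directions(200), scales) for _ in range(200)])

# p-value per scale: fraction of background trials with at least as many close pairs
p_values = (background >= observed).mean(axis=0)
for s, p in zip(scales, p_values):
    print(f"scale {s:3d} deg: p = {p:.2f}")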
In the data recorded from 2007 to 2012, this search found a very large structure close to the direction of the center of our Galaxy with a post-trial significance of 2.52σ; it is therefore best explained by a statistical fluctuation.
As a simple cross-check, this method has been applied to a publicly available data sample recorded independently by the neutrino telescope IceCube. This evaluation also resulted in an overfluctuation at the location where the most significant structure from the ANTARES data overlaps with the field of view of IceCube. With the devised analysis method, the structure found in the IC40 data has a significance of 2.14σ.
While this is intriguing, a dedicated follow-up analysis that is optimized for the derived hypothesis is ultimately necessary to find unambiguous evidence for its true nature. Since, despite further studies, no unambiguous explanation could be found for the obtained results, such a follow-up analysis, adapted specifically to these results and therefore with a higher chance of providing unambiguous insights, is recommended.
Nevertheless, this result constitutes the most significant spatially resolved hypothesis for the sources of high-energy astrophysical neutrinos so far.
Talbot-Lau X-ray imaging provides additional information about the inner structure of materials by using X-ray gratings. It extends conventional X-ray imaging by two additional images, the differential phase-contrast and the dark-field image.
This work focuses on the development and optimisation of this interferometric method with regard to a standardised use in various fields of application from medicine and non-destructive testing to the application in laboratory astrophysics. Examples of applications are presented for the field of defect detection in carbon fibre reinforced plastics and the investigation of archaeological findings. The technical requirements of the extended X-ray method in the field of lung imaging are presented by comparing the properties of different grating interferometers.
Another aspect considered is the easy handling of such an X-ray system. For this purpose an alignment method was developed, which is based on a look-up table called moiré map. For experiments in laboratory astrophysics at an X-ray backlighter, this alignment method is mandatory.
The implementation of a Talbot-Lau scanner enlarges the field of view which is typically limited by the gratings. Finally, this work presents a novel continuous phase-sampling scanning method together with an optimisation of the established reconstruction method. This method allows the extension of a Talbot-Lau scanner to a Talbot-Lau helical computed tomography scanner.
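As background on how the three image channels are obtained, the sketch below shows the textbook phase-stepping extraction of transmission, differential phase and dark-field from a sampled stepping curve at a single pixel; the numbers are toy values, and the thesis develops a continuous phase-sampling variant of this reconstruction rather than the stepped version shown here.

import numpy as np

def stepping_curve_parameters(intensities):
    # Mean, first-order amplitude and phase of a sampled sinusoidal stepping curve.
    spectrum = np.fft.rfft(intensities)
    n = len(intensities)
    mean = spectrum[0].real / n
    amplitude = 2.0 * np.abs(spectrum[1]) / n
    phase = np.angle(spectrum[1])
    return mean, amplitude, phase

steps = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)     # grating positions over one period
reference = 1000.0 + 300.0 * np.cos(steps)                   # stepping curve without sample
sample = 700.0 + 150.0 * np.cos(steps + 0.4)                 # attenuated, phase-shifted, less modulated

a0_r, a1_r, phi_r = stepping_curve_parameters(reference)
a0_s, a1_s, phi_s = stepping_curve_parameters(sample)

transmission = a0_s / a0_r                                   # conventional attenuation image
diff_phase = phi_s - phi_r                                   # differential phase-contrast image
dark_field = (a1_s / a0_s) / (a1_r / a0_r)                   # visibility reduction (dark-field image)
print(f"T = {transmission:.2f}, dphi = {diff_phase:.2f} rad, DF = {dark_field:.2f}")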
In summary, the individual optimisations form an important basis for the transfer of interferometric X-ray imaging from research to clinics and industry.
Sensitivity studies on tau neutrino appearance with KM3NeT/ORCA using Deep Learning Techniques
(2020)
In the last few decades, it has been experimentally verified that neutrinos can change their flavor
while propagating through space by so-called neutrino oscillations. The probabilities for a neutrino to oscillate from one flavor into another are described by the neutrino mixing matrix.
An important open question in particle physics is whether the neutrino mixing matrix is unitary. Currently, the uncertainties on several matrix elements are too large to draw significant conclusions on the unitarity of the matrix. This is mostly due to the low experimental statistics
in the tau neutrino sector. KM3NeT/ORCA is a water Cherenkov neutrino detector under
construction in the Mediterranean Sea with several megatons of instrumented volume. Its main
objective is the determination of the neutrino mass ordering. However, it will also observe about
4000 tau neutrino events per year, which will significantly improve the available tau neutrino
statistics. In KM3NeT/ORCA, tau neutrinos will be identified by observing a statistical excess
of charged-current-induced, cascade-like events with respect to the atmospheric electron neutrino
expectation.
The reconstruction of the low-level detector data consists of several stages. First, the detector
background consisting of atmospheric muons and randomly correlated noise by 40K decays in
seawater and by bioluminescence needs to be distinguished from the expected neutrino signals
by using a classification algorithm. After this, track-like (charged-current muon neutrino) events are distinguished from
cascade-like (charged-current electron neutrino, neutral-current electron neutrino) events based on another classifier. Finally, neutrino properties like
the energy and the direction of the neutrinos need to be reconstructed. Until now, maximum
likelihood-based reconstruction algorithms accompanied by shallow machine learning techniques
like Random Forests have been employed in the standard KM3NeT/ORCA reconstruction pipeline
to tackle all of these tasks.
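As an illustration of the shallow-learning stage mentioned above, the sketch below trains a Random Forest to separate track-like from cascade-like events on two invented summary features; the features, data and hyperparameters are placeholders and not the actual KM3NeT/ORCA classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000

# Toy features, e.g. a track-length proxy and a hit-time RMS; "tracks" are made longer on average.
is_track = rng.integers(0, 2, n)
length_proxy = rng.normal(30 + 40 * is_track, 15, n)
time_rms = rng.normal(50 - 10 * is_track, 12, n)
X = np.column_stack([length_proxy, time_rms])

X_train, X_test, y_train, y_test = train_test_split(X, is_track, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)
clf.fit(X_train, y_train)

# "Track score" used downstream to split the sample into track- and shower-like events.
track_score = clf.predict_proba(X_test)[:, 1]
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")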
A novel technique for the reconstruction of the detector data is to employ deep artificial neural
networks. In this approach, simulated, low-level experimental data is used for the training of a
deep neural network. During the training process, the network builds a representation of the
typical event properties that then allows for the reconstruction of the events. Within the scope
of this thesis, deep convolutional neural networks (CNNs) have been designed for each of the
aforementioned tasks, leading to a complete, CNN-based reconstruction chain. It is shown that
this first application of a CNN-based event reconstruction to large-volume neutrino telescope
data of KM3NeT/ORCA yields competitive results and performance improvements with respect
to classical approaches.
For the background classification, the CNN-based method leads to a significant improvement
in the ability to distinguish atmospheric muon events from neutrino events. Additionally, a
significant gain in performance for the track-shower classification is observed. For the reconstruction of the neutrino properties, an improvement in the energy reconstruction of track-like events is achieved.
Applying the CNN-based reconstruction to an analysis on the sensitivity of KM3NeT/ORCA to
the appearance of tau neutrinos shows that the sensitivity can be improved by more than 10%
with respect to the currently employed reconstruction techniques.
Proving the Majorana nature of neutrinos would establish physics beyond the Standard Model of particle physics, by demonstrating that neutrinos are their own antiparticles. To date, the best candidate for this proof is the observation of the neutrinoless double beta decay. The EXO-200 experiment searches for the neutrinoless double beta decay in 136Xe with an ultra-low background time projection chamber filled with liquid xenon. The current generation of experiments is sensitive to half-lives of this extremely rare decay of up to ∼10^26 yr. The main challenge for any experiment that searches for the neutrinoless double beta decay is therefore to reduce background. Primarily, background reduction is achieved during measurement by evaluating the kinetic energy of the decay products but also by applying particle identification techniques.
In this thesis, deep learning based methods are adapted for data analysis in EXO-200 from approaches used in image recognition. These algorithms are developed in order to improve the sensitivity to the half-life of the neutrinoless double beta decay. A deep neural network is trained to reconstruct the kinetic energy deposited in the detector. In particular, this algorithm outperforms the traditional EXO-200 reconstruction in terms of energy resolution by 10 % (12 %) in Phase-I (Phase-II) of EXO-200 operation at the decay energy of the neutrinoless double beta decay. In an additional study, deep neural networks are developed to discriminate double beta decays from the dominant background interactions. The discrimination power of these algorithms exceeds those of other discriminators which utilize classical machine learning methods. In order to confirm a robust performance, the deep neural networks of both studies are validated on Monte Carlo simulated data and on measured data.
The deep learning based discriminator developed in this thesis contributes significantly to the most recent search for neutrinoless double beta decay of the EXO-200 experiment published in Phys. Rev. Lett. 123 (161802). This analysis outperforms other potential analysis configurations and provides the most stringent median half-life sensitivity of 5.0 · 10^25 yr at the 90 % confidence level. The half-life sensitivity is further increased by utilizing the energy reconstructed by the deep neural network. This represents the best analysis configuration and results in an improvement in sensitivity by ∼35 % compared to the baseline analysis. These improvements highlight the value of deep learning based methods in complex data analyses for current and future experiments. Additional improvements represent a promising path toward a potential observation of the neutrinoless double beta decay.
The IceCube neutrino observatory is the largest operating neutrino telescope at the moment. It consists of 5160 Digital Optical Modules (DOMs) on 86 vertical strings buried at a depth of 1.5 km to 2.5 km within the Antarctic ice, instrumenting a volume of approximately 1 cubic km. An upgrade of the in-ice array to a volume of almost 10 cubic km, called the IceCube Gen2 high-energy array, is the subject of current research. The multi-photomultiplier (PMT) Digital Optical Module (mDOM), which consists of 24 symmetrically distributed 3-inch PMTs, is considered as a detection unit for the Gen2 high-energy array. Alternatively, an upgraded version of the IceCube DOM, called the PINGU Digital Optical Module (PDOM), containing only one 10-inch PMT facing downwards, is also considered as a detection unit. This work analyzes the effect of the
sensor segmentation of the mDOM on the angular resolution of through-going muon tracks in
comparison to the angular resolution obtained with the PDOM within the context of a Gen2
high-energy array geometry.
In order to eliminate the effect of different photon detection efficiencies of the two sensor designs,
the quantum efficiencies of the respective PMTs are scaled in the simulation to ensure an
equalized effective photocathode area per module. For down-going and horizontal through-going
muons with an energy between 3 TeV and 70 PeV, a detector equipped with mDOMs yields between
10% and 40% better angular resolution in almost all energy regimes after sensor-independent
quality cuts (based on Monte Carlo information) have been applied.
For up-going muons with energies below 1 PeV, the upscaled PDOM yields between 7% and 13% lower angular errors.
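The photocathode-area equalization can be pictured with a back-of-the-envelope calculation: naively summing the photocathode disk areas of 24 3-inch PMTs (mDOM) and one 10-inch PMT (PDOM) gives the factor by which the PDOM quantum efficiency would have to be scaled up; real photocathode geometries and efficiency curves differ, so the numbers below are only indicative.

import math

inch = 2.54  # cm

def disk_area(diameter_inch):
    # Geometric area of a circular photocathode of the given nominal diameter [cm^2].
    r = 0.5 * diameter_inch * inch
    return math.pi * r ** 2

mdom_area = 24 * disk_area(3.0)    # 24 small PMTs per mDOM
pdom_area = 1 * disk_area(10.0)    # one large PMT per PDOM

# Factor by which the PDOM quantum efficiency would need to be scaled up so that
# both modules offer the same (naive) effective photocathode area.
qe_scale = mdom_area / pdom_area
print(f"mDOM/PDOM area ratio ~ {qe_scale:.2f}")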
Finally, estimations of the 90% exclusion limits and the 5σ discovery fluxes of neutrino point
sources are conducted for both sensors in a Gen2 high-energy array.
For sources with a declination below 5°, the upper limits and discovery fluxes obtained with the mDOM are 8-11% lower.
The upscaled PDOM leads to 4-12% lower exclusion limits and discovery fluxes for sources with a declination above 33°.
In this work, I investigated how a presumable neutrino signal associated with gamma-ray bursts can be identified using data from the Antares neutrino telescope. Gamma-ray bursts (GRBs) are cataclysmic events most likely connected to the collapse of an extremely massive star or a binary system into a black hole. In the order of seconds, they emit high-energy gamma rays that can outshine the rest of the universe, making them the most powerful processes known. The observations are commonly explained by highly relativistic outflows of material pointed towards Earth, in which electrons are accelerated and give rise to a photon signal due to synchrotron and inverse Compton processes. If in addition to the leptonic matter, protons are also present in the ejecta, they would be similarly accelerated to energies up to 100 EeV. Their interactions with the present photon field would inevitably yield the simultaneous emission of neutrinos of PeV energies. This flux was first predicted in 1997 by Waxman and Bahcall, but despite numerous experiments, no conclusive evidence for neutrino signals from GRBs has yet been found.
Compelling evidence of a high-energy cosmic neutrino signal correlated with astrophysical sources would, for the first time, prove the acceleration of hadrons beyond any doubt, a hypothesis that cannot be tested by purely electromagnetic observations. However, to explain the origin of cosmic rays at ultra-high energies, it is absolutely crucial to identify those processes in the universe that are capable of accelerating baryons to such energies.
As a first step, I investigated whether or not the reconstruction of particle trajectories in the Antares data can be improved by identifying parameter configurations that give rise to systematic deviations in the directional reconstruction. If such effects can be detected and quantified, this knowledge can be used to correct the reconstruction on an event-by-event basis and narrow down the most probable source of emission. Having scanned the instrumented volume with simulated muons, I showed that overall shifts in the reconstructed directions by more than the detector’s resolution occur in 1% of the sample. Hence, with a non-negligible chance the reconstruction of individual data events can be refined and thereby reinforce a gamma-ray burst as the source of emission of a presumable neutrino signal.
Several techniques to single out a neutrino signal from GRBs in the Antares data were developed, both in the search for simultaneous as well as a possibly time-shifted neutrino emission with respect to the photon signal. I made use of data from multiple spacecraft and Earth-bound telescopes within the Gamma-ray burst Coordinates Network such as the Swift and Fermi satellites to search for correlated neutrinos in the data from the Antares telescope.
Starting from a simple counting method demonstrated on the showcase burst GRB091026, I showed how the use of an un-binned likelihood can improve the detection prospects by up to a factor of two. The developed technique was optimized in terms of maximal detection power to search for neutrinos coincident with gamma-ray bursts occurring between December 2007 and 2011, after the completion of the Antares detector. The presented work is the first of its kind to be optimized for a second-generation neutrino-emission model. The early and more optimistic models had previously been excluded by the non-observation of any neutrino signal from GRBs with IceCube. However, the recent models predict considerably fewer neutrino events from these sources, so that the present limits do
not constrain the hadronic acceleration in the internal shock scenario of gamma-ray bursts yet. Similar methods were employed during a search for a neutrino signal from a particularly bright gamma-ray burst in 2013, GRB130427A, in the Antares data. Unfortunately, none of these analyses could identify any data events simultaneously with the selected bursts. Hence, only upper limits on the neutrino flux could be derived which lie a factor of 38 above the model predictions. These are compatible with previous limits set by other experiments, but complementary in sky coverage, energy range or data livetime.
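To give a flavour of the un-binned likelihood approach referred to above, the sketch below reduces it to a one-dimensional toy with a point-spread-function signal PDF and a flat background PDF, maximizing the likelihood in the number of signal events; the real analysis also uses timing and energy information, and all numbers here are invented.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

sigma = 0.5                                   # toy angular resolution [deg]
psi_max = 10.0                                # search-cone radius [deg]
psi = np.concatenate([np.abs(rng.normal(0.0, sigma, 2)),      # two signal-like events near the burst
                      rng.uniform(0.0, psi_max, 8)])          # eight background-like events

def signal_pdf(psi):
    # angular-distance PDF for a 2D Gaussian point-spread function
    return psi / sigma**2 * np.exp(-0.5 * (psi / sigma) ** 2)

def background_pdf(psi):
    # background isotropic within the search cone (density proportional to psi for small angles)
    return 2.0 * psi / psi_max**2

def neg_log_likelihood(n_s):
    f = n_s / len(psi)
    return -np.sum(np.log(f * signal_pdf(psi) + (1.0 - f) * background_pdf(psi)))

fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, float(len(psi))), method="bounded")
test_statistic = 2.0 * (neg_log_likelihood(0.0) - fit.fun)   # likelihood-ratio test statistic
print(f"best-fit number of signal events: {fit.x:.2f}, test statistic: {test_statistic:.2f}")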
Having studied the capability of the Antares detector to identify neutrino signals from gamma-ray bursts, I made use of this knowledge to infer the detection potential of the future KM3NeT experiment to distinguish a similar signal from the background. I demonstrated that the planned detector will be capable of probing the prevailing models and the parameter space upon which they are based with unprecedented precision, making it possible to either detect or severely constrain the fireball paradigm in the next decades.
Moreover, I investigated the capabilities to detect a neutrino signal associated with gamma-ray-burst alerts if it was shifted in time with respect to the electromagnetic signal. Numerous models predict, for instance, the delayed emission of neutrinos at the sources by up to one day, while others derive different arrival times for neutrinos and photons due to a breaking of Lorentz invariance. Thanks to their cosmic distances and transient nature, gamma-ray bursts provide unique test environments to study and verify such effects. A completely novel technique was developed to distinguish such a signal from the expected background, which allows even faint signals to be detected as a cumulative effect in a large sample of GRBs. Such an approach is completely unprecedented in its capabilities to identify neutrino signals which might be delayed with respect to the photon detections by up to 40 days, while at the same time imposing as few assumptions on any model as possible. Mimicking a test signal in only a fraction of gamma-ray bursts, I showed that the method robustly detects an associated neutrino flux if only around 1% of all GRBs gave rise to an associated signal event in the Antares data. Six years of data from the Antares telescope were examined in the search for a GRB-associated neutrino excess, yet not a single potential astrophysical neutrino candidate could be found in the defined search windows. Since more than four spatial coincidences would have been expected from mere randomized data, this represents a substantial under-fluctuation with respect to the expectations from pure background. In addition, one year of public IceCube data was scanned for such an excess. Slightly more neutrino candidates coincided spatially with the gamma-ray bursts than derived from background only. Yet, the excess is still compatible with mere background, with a probability of five per cent, and consequently not significant.
This work can therefore only confirm previous analyses that could not identify any significant neutrino excess associated with gamma-ray bursts. Nevertheless, novel techniques are presented that can considerably enhance the detection potential for such signals and will certainly allow the operating experiments and, in particular, the planned KM3NeT detector to put the prevailing models of hadronic acceleration in gamma-ray bursts to the test in the near future. For the first time, neutrino emission from gamma-ray bursts has been searched for in the data from the current neutrino telescopes considering a time delay of up to forty days.
The ANTARES neutrino telescope was built at a depth of ∼ 2500 m in the Mediterranean Sea near Toulon in the south of France. ANTARES searches for neutrinos originating from annihilating dark matter particles in the Sun. In order to identify those neutrinos, a high angular resolution for the expected energies (less than 1 TeV) is necessary. With ANTARES, a high angular resolution can only be achieved with a sophisticated position calibration system for the detection units. The existing position calibration algorithm (alignment) was improved, such that more data with accurate information about the detection units’ position are available for subsequent processing. The improved algorithm led to a significant increase in available alignment information compared to the previous algorithm. Further, an error calculation on the detection units’ position and orientation was implemented in order to improve the estimation of uncertainties for a given neutrino event.
In the second part of this work, non-universal supersymmetric extensions of the standard model of particle physics were investigated. The well-studied constrained Minimal Supersymmetric Standard Model (cMSSM) was used as a reference model. By introducing a non-universal Higgs (NUHM) or gauge sector (NUGM), more general models were constructed and their phenomenological implications were discussed. Recent experimental results from direct as well as indirect dark matter search experiments were applied to the models’ parameter spaces. Further, the discovery of a Higgs boson at the LHC last year was taken into account. A chi-square analysis was performed to test the compatibility between model predictions and experimental measurements. Included observables were the neutralino relic density Ωh², the Higgs-boson mass m_h and several flavor observables, such as BR(B → X_s γ). It was found that the poorest agreement between observations and predictions is obtained in the cMSSM, while better agreement is found in the NUHM and NUGM scenarios, which are compatible with each other. Finally, favored regions for dark matter search observables were derived. These regions are not yet excluded experimentally, but will be tested in the future.
The neutrino mass hierarchy can be determined by measuring the energy- and zenith-angle-dependent oscillation pattern of few-GeV atmospheric neutrinos that have traversed the Earth. This measurement is the main science goal of KM3NeT/ORCA (`Oscillation Research with Cosmics in the Abyss'), a planned multi-megaton underwater Cherenkov detector in the Mediterranean Sea. A key task is the reconstruction of shower-like events induced by electron neutrinos in charged-current interactions, which substantially affect the neutrino mass hierarchy sensitivity.
In this thesis, numerous aspects of the expected neutrino detection performance of the planned ORCA detector are investigated. A new reconstruction algorithm for neutrino-induced shower-like events is developed. The achieved resolutions are close to the reconstruction accuracy limits imposed by intrinsic fluctuations in the Cherenkov light signatures. These intrinsic resolution limits are derived as part of this thesis. Differences in event reconstruction capabilities between water- and ice-based Cherenkov detectors are discussed. The configuration of existing trigger algorithms is optimised for the ORCA detector. Based on the developed shower reconstruction, a detector optimisation study of the photosensor density is performed. In addition, it is shown that optical background noise in the deep Mediterranean Sea is not expected to compromise the feasibility of the neutrino mass hierarchy measurement with ORCA.
Together, these investigations contribute significantly to the estimated neutrino mass hierarchy sensitivity of ORCA published in the 'Letter of Intent' for KM3NeT, illustrate why a new optimised detector geometry is proposed, and give pointers as to how to improve the neutrino detection performance and consequently the neutrino mass hierarchy sensitivity of ORCA.
Silicon Photomultipliers are pixelated semiconductor photosensors with single-photon resolution and are regarded as a very promising technology to construct next-generation, cutting-edge detectors for low-background experiments in astroparticle physics. The energy resolution for any particle interaction that occurs in the detector is a crucial parameter of such experiments and depends on the capability of the detector to collect as much scintillation light from the events as possible. The detector needs to be designed carefully to maximise the light yield, and the optical behaviour of the Silicon Photomultipliers in the detector medium plays a key part in detector simulations.
This work presents extensive reflectivity studies with Silicon Photomultipliers and other samples in liquid xenon at vacuum ultraviolet wavelengths. A dedicated setup at the University of Münster has been used that allows angular reflection spectra of various samples immersed in liquid xenon to be acquired with 0.45° resolution. The reflectivity is determined to be 25–36% at an angle of incidence of 20° for the four samples investigated in this work. The reflectivity increases with the angle of incidence for all samples but one. The highest reflectivity was measured for a wafer sample with 70% at 76°.
This work also reports on the determination of basic Silicon Photomultiplier properties via diode IV-characteristics investigated at different temperatures. Such characteristics can also be used to establish a method to use Silicon Photomultipliers as direct sensors for their own pn-junction temperature.
Neutrino telescopes are used to search for the sources of astrophysical neutrinos and to study the fundamental properties of neutrinos. Neutrinos come in three flavours: electron, muon and tau. While muon neutrinos produce distinct elongated ‘track-like’ light signatures in the telescopes, electron and tau neutrinos appear predominantly ‘shower-like’.
At TeV to PeV energies relevant for astrophysical source searches, neutrinos of both signatures
can be selected out of the dominant backgrounds based on a few discriminating variables. Such
a so-called ‘cut and count’ analysis for track- and shower-like signatures has been performed using eight years of data recorded with
the ANTARES telescope and an upper limit on the neutrino flux from the ‘Fermi Bubbles’, an
extended region of gamma-ray emission in the central part of our Galaxy, has been set.
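To illustrate the final step of such a ‘cut and count’ analysis, the sketch below derives a simple 90% CL Poisson upper limit on the number of signal events from an observed count and an expected background; the counts are invented, and the actual ANTARES analysis is more involved and converts the event limit into a flux limit via the signal acceptance.

import numpy as np
from scipy.stats import poisson

def poisson_upper_limit(n_obs, b, cl=0.90):
    # Smallest signal expectation s excluded at confidence level cl,
    # i.e. the first s with P(N <= n_obs | s + b) <= 1 - cl.
    s_grid = np.linspace(0.0, 50.0, 5001)
    p_le_obs = poisson.cdf(n_obs, s_grid + b)
    return s_grid[np.argmax(p_le_obs <= 1.0 - cl)]

n_obs, b = 12, 10.0                      # hypothetical counts after all cuts
s_up = poisson_upper_limit(n_obs, b)
print(f"90% CL upper limit on the number of signal events: {s_up:.1f}")
# Dividing s_up by the signal acceptance (effective area times livetime) would turn
# this event limit into a flux limit.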
The KM3NeT/ORCA detector is under construction and optimised to study neutrino properties in
the few-GeV energy range. Since differences between the event signatures at these energies
are more subtle, classification models using ‘Random Decision Forests’ have been employed to
identify neutrino candidates and to separate tracks from showers. With the resulting models, the
oscillation from (mainly) atmospheric muon neutrinos into the tau flavour can be measured on a
statistical basis. The sensitivity of the detector to measure this tau-neutrino appearance has
been studied. It is found that after one year of operation, KM3NeT/ORCA will determine the strength of the tau-neutrino flux to within ±30% at 3σ CL. This measurement precision makes it possible to constrain the currently large theoretical uncertainties on the tau-neutrino cross sections and to test whether the generally accepted picture of flavour oscillation amongst the three known active neutrino flavours is complete.
Dosepix is a pixelated, hybrid semiconductor photon detector that was developed for measurements in dosimetry. The 300 μm thick silicon sensor is divided into 16 × 16 pixels of two different sizes. Each pixel can individually register single photons and the corresponding deposited energy. The Dosepix detector can be operated in different acquisition modes, two of which are used for dose measurements in this work. A dosimeter consisting of three Dosepix detectors with different metal filters is used in the experiments. The filters modify the incident photon spectrum, which is then measured by the detectors. This increases the energy information about the original spectrum. As a result, the dosimeter makes it possible to perform dose measurements in photon fields with energies up to 1.3 MeV.
Test measurements are carried out to investigate the detector with respect to different calibration methods and adjustable measurement parameters. In the spectroscopically resolved photon-counting mode, the Dosepix dosimeter fulfils the legal requirements for new personal dosimeters. Quantities such as the energy of the applied X-ray spectrum or the applied dose rate are tested. The personal dose equivalents Hp(10) and Hp(0.07) are investigated with respect to their energy dependence. The maximum applied dose rate for which the response still lies within the required limits is more than 100 times higher than the prescribed value. The behaviour of the Dosepix detector changes with the applied dose rate due to pile-up in the pixel electronics. The effect that pile-up can have on a dose measurement depends on the energy distribution of the incoming spectrum. This relation is investigated for several unfiltered X-ray spectra.
The behaviour of the Dosepix detector changes markedly in photon fields with short pulses and high dose rates, as they are produced by portable X-ray generators. Characterisation measurements are performed in such pulsed photon fields with different measurement parameters to study the behaviour of the detector. Measuring the dose under these extreme conditions with the photon-counting mode fails. For this case, a different method is presented. A simulation of the data acquisition in photon fields with high dose rates using an integrating acquisition mode of the Dosepix detector is introduced. A deep-learning analysis that uses training data from this simulation is employed to determine the dose quantities Hp(10) and Hp(0.07). Different network architectures are investigated with respect to their performance on simulated data. Measurements are performed with two portable X-ray generators that produce different photon spectra. For both devices, the selected neural network delivers stable results over a wide dose-rate range up to about 2.4·10^6 Sv/h. The deep-learning analysis for the reconstruction of the personal dose Hp(10) is tested in short X-ray pulses generated by laser-driven emission, in which a considerable contribution to the dose from electrons occurs. The dose measured with the Dosepix dosimeter agrees well with the dose measured separately with a spectrometer. It cannot be excluded that this good agreement is coincidental, since the Dosepix detector has not yet been tested in mixed electron/photon fields.
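The deep-learning step described above can be pictured with the following minimal regression sketch, which maps integrating-mode readings to the two dose quantities Hp(10) and Hp(0.07); the input dimension, architecture and training data are invented placeholders, whereas the thesis trains on a dedicated simulation of the Dosepix data acquisition.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(5)
n_samples, n_features = 10000, 48            # e.g. integrated signals of the filtered detectors (assumed)

X = rng.uniform(0.0, 1.0, (n_samples, n_features)).astype("float32")
y = np.column_stack([X[:, :24].sum(axis=1),  # toy stand-ins for Hp(10) and Hp(0.07)
                     X[:, 24:].sum(axis=1)]).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                # two outputs: Hp(10), Hp(0.07)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=128, validation_split=0.2, verbose=0)

print("validation MSE:", model.evaluate(X[-2000:], y[-2000:], verbose=0))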
The neural-network analysis for estimating the dose is also tested in continuous radiation fields, as prescribed by the legal requirements, and shows good results. With the deep-learning analysis, almost all tested requirements are fulfilled. This is unexpected, since the simulation that generates the training data for the neural network is not designed for continuous radiation.
Furthermore, the deep-learning analysis performed with an integrating acquisition mode is tested with different irradiation times in order to vary the applied dose. The resulting response is stable over a wide dose range up to 82 mSv, which shows that the performance of the neural network does not depend on the irradiation time.
The experiments and analyses performed in this work show that pile-up, as it occurs in pulsed photon fields with high dose rates, has a significant influence on dose measurements. It is shown that a dosimeter consisting of Dosepix detectors is suitable for measuring the personal dose under these extreme conditions. Further research could make it possible to extend the scope of the Dosepix dosimeter beyond photon dosimetry and to apply it in other fields of science.