External Publications
Customer project selection is a challenge for many industrial companies. An inappropriate project selection approach can lead to constraint violations, high fixed costs, and suboptimal portfolios. To overcome these problems, a cash-flow-based linear optimization model was developed in partnership with a tier-1 automotive supplier. Implementation barriers were verified through a case study conducted at two organizational hierarchy levels. Results suggest that an application at the operating levels is possible. At higher levels, though, product and firm complexity require major implementation efforts. This article serves theorists as well as practitioners in multiple regards. First, an overview of existing project selection methods and their application in practice is provided. Additionally, the supplier's current appraisal process is depicted. Second, operations research implementation barriers are identified and validated for the adoption of the proposed mathematical project selection approach. Third, a guideline including procedures to overcome experienced difficulties is presented.
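The article describes the model only at a high level. As a rough illustration of cash-flow-based 0-1 project selection under a single capacity constraint, a minimal sketch with hypothetical figures and exhaustive enumeration (not the authors' linear optimization model):

```python
from itertools import chain, combinations

# Hypothetical candidate projects: (name, discounted net cash flow, capacity demand)
projects = [("A", 120.0, 40), ("B", 80.0, 25), ("C", 95.0, 35), ("D", 60.0, 20)]
capacity = 80  # assumed available engineering capacity

def best_portfolio(projects, capacity):
    """Enumerate all 0-1 selections and keep the feasible one with maximum total cash flow."""
    best, best_value = (), float("-inf")
    subsets = chain.from_iterable(combinations(projects, r) for r in range(len(projects) + 1))
    for subset in subsets:
        demand = sum(p[2] for p in subset)
        value = sum(p[1] for p in subset)
        if demand <= capacity and value > best_value:
            best, best_value = subset, value
    return [p[0] for p in best], best_value

print(best_portfolio(projects, capacity))  # (['B', 'C', 'D'], 235.0) for these assumed numbers
```

For realistic portfolio sizes, the same 0-1 structure would of course be handed to a MILP solver rather than enumerated.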
Companies frequently face situations in which decisions about selecting among several courses of action have to be made quickly. Mathematical methods can provide support here, e.g. by producing an initial basis for discussion. Available software solutions often compute optimal results but show weaknesses in practical applicability. Familiarization with complex optimization software is usually not feasible for companies dealing with only sporadically occurring problems, also in view of the sometimes high cost of standard software and the considerable training effort required. Problems arising in departments that are not specialized in mathematical problem solving therefore regularly result in inefficiencies. Based on solution approaches discussed in the literature, a practice-oriented decision-support approach for knapsack and 0-1 problems using genetic algorithms (GA) was developed and implemented in Microsoft Excel®. A field test at a Chinese textile company demonstrates increased effectiveness and efficiency. The software can be downloaded free of charge under an MIT license at http://www.solvega.de/.
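The published tool is an Excel implementation and is not reproduced here. As a minimal sketch of the underlying GA idea applied to a 0-1 knapsack problem, assuming a hypothetical instance and standard operators (tournament selection, one-point crossover, bit-flip mutation):

```python
import random

random.seed(1)

# Hypothetical 0-1 knapsack instance: (value, weight) per item, capacity limit
items = [(60, 10), (100, 20), (120, 30), (40, 15), (70, 25)]
capacity = 60

def fitness(chromosome):
    """Total value of selected items; infeasible selections are penalized to zero."""
    value = sum(v for (v, _), bit in zip(items, chromosome) if bit)
    weight = sum(w for (_, w), bit in zip(items, chromosome) if bit)
    return value if weight <= capacity else 0

def genetic_algorithm(pop_size=30, generations=100, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in items] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection over three random individuals
            return max(random.sample(population, 3), key=fitness)
        next_population = []
        while len(next_population) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, len(items))          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - bit if random.random() < mutation_rate else bit for bit in child]
            next_population.append(child)
        population = next_population
    best = max(population, key=fitness)
    return best, fitness(best)

print(genetic_algorithm())  # e.g. ([1, 1, 1, 0, 0], 280) for this instance
```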
Politische Bildung und Zeit
(2016)
In this study, Michael Görtler focuses on the significance of time for civic education, since the temporality of political learning processes and of democratic politics has so far received little attention in the didactics of civic education. The author narrows this gap, broadens the disciplinary discourse, and offers practical impulses for civic education practice. At the center is the question of how time becomes visible, explicitly as well as implicitly, in approaches from the didactics of civic education and from the educational and social sciences. It is also examined which didactic challenges with regard to goals, content, and the design of teaching and learning processes have to be considered on the subject and object side as well as in the mediation between the two.
This article focuses on the significance of time for political adult education. In view of current socio-cultural and socio-economic challenges, in particular the economization and acceleration in the context of education and learning, a whole series of pressing questions about time arises. So far, however, little attention has been paid to the significance of time in adult education and civic education. For this reason, the article examines this question in more depth. First, the situation of political adult education and, second, the state of research are analyzed. Based on the findings, three perspectives are then developed that make the significance of time for political adult education visible: civic education as a process and as deceleration, and time as a subject matter for understanding and judging the temporality of society and politics in order to participate in shaping time structures. The article closes with an outlook and open questions that follow from these time-theoretical considerations.
The article focuses on the importance of time for political adult education. Current socio-cultural and socio-economic challenges – in particular the commodification and acceleration in the context of education and learning – result in a wide range of pressing time issues. However, adult education and political education have so far paid little attention to the importance of time. For this reason, this issue is examined in more depth in the article. First, the situation of political adult education and, second, the state of research are examined. Based on the findings, three approaches are pointed out that emphasize the importance of time for political adult education: political education as a process and as deceleration, and time as a subject matter for understanding and judging the temporality of politics and society as a necessary condition for participating in shaping time structures. The paper ends with an outlook and open questions that follow from the time-theoretical considerations.
This contribution offers reflections from the perspective of school pedagogy on the significance of time for good schools. Drawing on theoretical and practical approaches, it examines which role time plays in teaching and learning and which consequences can be drawn from this from the perspective of school pedagogy.
This contribution is designed as a theory-practice reflection based on a lecture followed by a group discussion with education practitioners from Catholic adult education. At its center is the question of which tasks arise for civic education work with adults in the current situation.
For several years, companies have increasingly been converting even marginal innovations into patents. All major patent offices worldwide are facing a veritable "explosion of patent applications" (Harhoff et al., 2007). The role of intellectual property (IP), and of patent management in particular, has evolved from a purely legal construct into an area of strategic importance. Although questions of strategic patent management have recently become the subject of growing research and practice, dedicated research on IP outsourcing and its influence on the management of patents is still lacking. For this reason, my dissertation empirically investigates the following important questions: how firms define outsourcing strategies, which types of IP intermediaries exist in the IP environment, and how these IP intermediaries influence firms' IP strategies. My dissertation consists of an introductory chapter and three individual research articles. The introductory chapter explains the motivation of the work and gives an overview of the current state of research. The three subsequent articles examine specific research questions on IP outsourcing and the management of IP in collaboration with IP intermediaries. The first article takes a closer look at the phenomenon of IP outsourcing and analyzes corporate strategies with respect to it. The second article focuses on IP intermediaries by introducing different IP service providers and examining both the motives for and the determinants of selecting them. The last article addresses the question of how external patent attorneys and their experience influence firms' patent filing strategies. My dissertation contributes by answering important ...
The economization of social work is a much-discussed and controversial topic. The spectrum of the discussion ranges from an affirmative treatment of the subject, with contributions on the relatively new field of social management, to detailed descriptions of a negative socio-political tendency, for which the term economization is used mostly pejoratively. In the latter strand, developments are frequently problematized by interpreting economic considerations as imperatives that displace the inherent logic of social work. The affirmative strand has found its systematic expression in the formulation of classical social management models such as the Freiburg management model (see Schwarz 2000) and the Bielefeld Diakonie management model (see Lohmann 1997). Its practical counterpart is the expansive development of degree programs in social management (cf. Boeßenecker/Markert 2007). The other pole of the discussion is exemplified by the "Schwarzbuch Soziale Arbeit" (Seithe 2011).
Simultaneous EEG-fMRI provides an increasingly attractive research tool to investigate cognitive processes with high temporal and spatial resolution. However, artifacts in EEG data introduced by the MR scanner still remain a major obstacle. This study, employing commonly used artifact correction steps, shows that head motion, one overlooked major source of artifacts in EEG-fMRI data, can cause plausible EEG effects and EEG–BOLD correlations. Specifically, low-frequency EEG (< 20 Hz) is strongly correlated with in-scanner movement. Accordingly, minor head motion (< 0.2 mm) induces spurious effects in a twofold manner: Small differences in task-correlated motion elicit spurious low-frequency effects, and, as motion concurrently influences fMRI data, EEG–BOLD correlations closely match motion-fMRI correlations. We demonstrate these effects in a memory encoding experiment showing that obtained theta power (~ 3–7 Hz) effects and channel-level theta–BOLD correlations reflect motion in the scanner. These findings highlight an important caveat that needs to be addressed by future EEG-fMRI studies.
The adaptive input design (also called online redesign of experiments) for parameter estimation is very effective for the compensation of uncertainties in nonlinear processes. Moreover, it enables substantial savings in experimental effort and greater reliability in modeling.
We present theoretical details and experimental results from the real-time adaptive optimal input design for parameter estimation. The case study considers the separation of three benzoates by reversed-phase liquid chromatography. Following a receding horizon scheme, adaptive D-optimal input designs are generated for a precise determination of competitive adsorption isotherm parameters. Moreover, numerical techniques for the regularization of arising ill-posed problems, e.g. due to scarce measurements, lack of prior information about parameters, low sensitivities, and parameter correlations, are discussed. The estimated parameter values are successfully validated by Frontal Analysis, and the benefits of optimal input designs are highlighted when compared to various standard/heuristic input designs in terms of parameter accuracy and precision.
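The abstract does not state the design criterion formally. A minimal sketch of the D-optimality idea it refers to — pick the candidate input whose parameter sensitivities maximize the determinant of the Fisher information matrix — with hypothetical sensitivities standing in for the adsorption model:

```python
import numpy as np

# Hypothetical candidate input profiles: each row holds the sensitivities of the
# measured response w.r.t. the two model parameters at one sampling time.
# In the real application these come from the adsorption isotherm model.
candidates = {
    "pulse":    np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 0.9]]),
    "step":     np.array([[0.9, 0.1], [0.9, 0.3], [0.9, 0.6]]),
    "gradient": np.array([[0.2, 0.1], [0.6, 0.6], [1.0, 1.1]]),
}

def d_criterion(sensitivities, sigma=0.05):
    """Determinant of the Fisher information matrix J^T J / sigma^2."""
    info = sensitivities.T @ sensitivities / sigma**2
    return np.linalg.det(info)

for name, sens in candidates.items():
    print(f"{name:9s} det(FIM) = {d_criterion(sens):.3e}")
best = max(candidates, key=lambda name: d_criterion(candidates[name]))
print("D-optimal candidate:", best)
```

In the adaptive (receding horizon) setting, this selection step would be re-run after each new measurement with updated parameter estimates.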
In a recent study at the National Research Council Canada, the sound transmission in cold-formed steel-framed constructions was investigated. The results of direct sound insulation tests of wall and floor assemblies were reported at EURONOISE 2015 and INTERNOISE 2015. This paper focuses on flanking sound transmission in cold-formed steel-framed constructions. A representative full-scale mock-up specimen was constructed in NRC’s 8-room flanking transmission facility. The specimen consists of four loadbearing walls with 152 mm deep steel studs, four non-loadbearing walls with 92 mm deep steel studs, and four floor-ceiling assemblies consisting of steel joists and a composite steel deck with gypsum concrete. Measurements were conducted according to the indirect method described in ISO 10848. The individual flanking paths were measured by a sequence of transmission loss measurements in which other transmission paths were suppressed by shielding. For the bare specimen without linings, the sound transmission for horizontally adjacent rooms with continuous subfloors is dominated by the floor-floor flanking paths. Floor coverings or floor toppings are needed to meet the requirements in the National Building Code of Canada. This paper presents details of the measurements, highlights some of the results and discusses implications.
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
The rising adoption of NoSQL technology in enterprises leads to a heterogeneous landscape of different data stores. Different stores provide distinct advantages and disadvantages, making it necessary for enterprises to operate multiple systems for specific purposes. The resulting polyglot persistence is difficult to handle for developers, since some data needs to be replicated and aggregated between different stores and within the same store. Currently, there are no uniform tools to perform these data transformations, since all stores feature different APIs and data models. In this paper, we present the transformation language NotaQL, which allows cross-system data transformations. These transformations are output-oriented, meaning that the structure of a transformation script is similar to that of the output. In addition, we provide an aggregation-centric approach, which makes aggregation operations as easy as possible.
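No NotaQL scripts are quoted in the abstract. The following Python sketch only illustrates the output-oriented, aggregation-centric idea on two hypothetical in-memory "stores"; it is not NotaQL syntax:

```python
from collections import defaultdict

# Hypothetical source store (document-style): one record per order
orders = [
    {"_id": 1, "customer": "alice", "total": 30.0},
    {"_id": 2, "customer": "bob",   "total": 12.5},
    {"_id": 3, "customer": "alice", "total":  7.5},
]

def transform(records, key="customer", value="total"):
    """Output-oriented transformation: the loop mirrors the structure of the
    target record (customer -> aggregated revenue) rather than the input layout."""
    out = defaultdict(float)
    for record in records:
        out[record[key]] += record[value]     # aggregation-centric: SUM per key
    # Target store (key-value style): one row per customer
    return [{"_id": customer, "revenue": revenue} for customer, revenue in out.items()]

print(transform(orders))
# [{'_id': 'alice', 'revenue': 37.5}, {'_id': 'bob', 'revenue': 12.5}]
```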
The method of loci is one of the most efficient, if not the most efficient, mnemonic encoding strategies. This spatial mnemonic combines the core cognitive processes commonly linked to medial temporal lobe (MTL) activity: spatial and associative memory processes. During such processes, fMRI studies consistently demonstrate MTL activity, while electrophysiological studies have emphasized the important role of theta oscillations (3–8 Hz) in the MTL. However, it is still unknown whether increases or decreases in theta power co-occur with increased BOLD signal in the MTL during memory encoding. To investigate this question, we recorded EEG and fMRI separately, while human participants used the spatial method of loci or the pegword method, a similarly associative but nonspatial mnemonic. The more effective spatial mnemonic induced a pronounced theta power decrease source-localized to the left MTL compared with the nonspatial associative mnemonic strategy. This effect was mirrored by BOLD signal increases in the MTL. Successful encoding, irrespective of the strategy used, elicited decreases in left temporal theta power and increases in MTL BOLD activity. This pattern of results suggests a negative relationship between theta power and BOLD signal changes in the MTL during memory encoding and spatial processing. The findings extend the well-known negative relation of alpha/beta oscillations and BOLD signals in the cortex to theta oscillations in the MTL.
Order picking is one of the most labor-intensive tasks in logistics. It is therefore important to determine the expected process times for this task as precisely as possible. A widespread approach to determining process times for manual picking activities is Methods-Time Measurement (MTM). To determine a standard time using MTM, influencing factors have to be specified. Using grasping time as an example, this contribution shows how these influencing factors can be derived from article master data. Furthermore, the difference between a grasping time determined individually for each article and a grasping time based on representative properties of article groups is compared. The grasping time determined with MTM is influenced mainly by the weight and the dimensions (bulkiness) of the article.
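As a rough illustration of deriving the influencing factors from article master data, a sketch with purely hypothetical classification thresholds and per-class times (these are illustrative placeholders, not official MTM values):

```python
from dataclasses import dataclass

@dataclass
class Article:
    """Master-data fields assumed to be available per article."""
    sku: str
    weight_kg: float
    length_mm: float
    width_mm: float
    height_mm: float

def grasp_class(article: Article) -> str:
    """Derive a coarse grasping class from weight and bulkiness.
    Thresholds are purely illustrative, not official MTM values."""
    bulky = max(article.length_mm, article.width_mm, article.height_mm) > 400
    if article.weight_kg > 8 or bulky:
        return "two-handed / bulky"
    if article.weight_kg > 1:
        return "one-handed, heavy"
    return "one-handed, light"

# Hypothetical per-class grasping times (seconds) standing in for MTM tables
GRASP_TIME_S = {"one-handed, light": 0.8, "one-handed, heavy": 1.4, "two-handed / bulky": 2.6}

article = Article("SKU-4711", weight_kg=2.3, length_mm=320, width_mm=210, height_mm=120)
print(grasp_class(article), GRASP_TIME_S[grasp_class(article)])  # one-handed, heavy 1.4
```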
In-plant milk-run systems represent transportation systems, where materials are delivered from a central storage area to several points of use on fixed routes and in short and defined intervals. Milk-run systems generally enable frequent deliveries in low lot sizes with short lead times and low inventories at the points of use. Thus, stable and reliable system operation is crucial to avoid delays and material shortages. In industrial practice, milk-run trains usually share resources, for example, loading areas and technology and use the same tracks, leading to dependencies between routes and possible traffic jams and blockages, which significantly affect cycle times and may lead to instabilities in the system. We present a simulation model to analyse in-plant milk-run systems with a focus on typical traffic situations. We describe its application to a large industrial case study in detail and derive recommendations for designing routes with low risk of delays.
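As a minimal sketch of the kind of effect studied — trains sharing a resource such as a loading area, causing blocking and longer cycle times — assuming the SimPy discrete-event library and illustrative parameters rather than the authors' model:

```python
import random
import simpy

random.seed(0)
CYCLE_DRIVE_MIN = 12          # assumed undisturbed driving time per route (minutes)
LOADING_TIME_MIN = 4          # assumed time at the shared loading area

def milk_run_train(env, name, loading_area, cycle_times):
    """One train repeatedly loads at the shared area and drives its fixed route."""
    while True:
        start = env.now
        with loading_area.request() as req:   # shared resource -> possible blocking
            yield req
            yield env.timeout(LOADING_TIME_MIN)
        yield env.timeout(random.uniform(0.9, 1.1) * CYCLE_DRIVE_MIN)
        cycle_times[name].append(env.now - start)

env = simpy.Environment()
loading_area = simpy.Resource(env, capacity=1)
cycle_times = {"train_1": [], "train_2": [], "train_3": []}
for name in cycle_times:
    env.process(milk_run_train(env, name, loading_area, cycle_times))
env.run(until=8 * 60)  # simulate one shift

for name, times in cycle_times.items():
    print(name, f"mean cycle {sum(times)/len(times):.1f} min over {len(times)} cycles")
```

Even this toy model shows cycle times growing beyond the undisturbed 16 minutes once trains queue for the shared loading area.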
The generation of multi-material components using laser beam melting (LBM) is a challenge that requires the development of new coating devices for the preparation of arbitrary powder patterns. One solution is the use of vibration-controlled nozzles for the selective deposition of polymer powders. Powder flow can be initiated by vibration even when using powders with low flowability. In this report, the selective deposition of polymer powder by vibrating nozzles is investigated with respect to its application in LBM machines. For this purpose, a steel nozzle attached to a piezo actuator is used; the nozzle itself features internal channels that allow precise control of the powder temperature using heat transfer oil. The setup is used to study the influence of temperature on the powder mass flow. The results show that, in addition to the vibration mode, the temperature strongly influences the powder mass flow by affecting the moisture content and thus the particle-particle adhesion forces. This shows that precise control of the powder temperature inside the nozzle is required in order to achieve a constant mass flow and thus a successful application of vibrating nozzles inside LBM machines.
The Spoken Wikipedia project unites volunteer readers of encyclopedic entries. Their recordings make encyclopedic knowledge accessible to persons who are unable to read (due to alexia or visual impairment, or because their eyes are otherwise occupied, e.g. while driving). However, on Wikipedia, recordings are available as raw audio files that can only be consumed linearly, without the possibility of targeted navigation or search. We present a reading application which uses an alignment between the recording, the text, and the article structure, and which allows users to navigate spoken articles through a graphical or voice-based user interface (or a combination thereof). We present the results of a usability study in which we compare the two interaction modalities. We find that both types of interaction enable users to navigate articles and to find specific information much more quickly compared to a sequential presentation of the full article. In particular, when the VUI is not restricted by speech recognition and understanding issues, this interface is on par with the graphical interface and thus a real option for browsing Wikipedia without the need for vision or reading.
We present a corpus of time-aligned spoken data of Wikipedia articles as well as the pipeline that allows such corpora to be generated for many languages. There are initiatives to create and sustain spoken Wikipedia versions in many languages, and hence the data is freely available, grows over time, and can be used for automatic corpus creation. Our pipeline automatically downloads and aligns this data. The resulting German corpus currently totals 293h of audio, of which we align 71h in full sentences and another 86h of sentences with some missing words. The English corpus consists of 287h, for which we align 27h in full sentences and 157h with some missing words. Results are publicly available.
Constantly turning the pages of sheet music is a recurring problem for musicians. It is often solved by an assistant of the musician, the so-called page turner. However, many musicians rarely have this support while practicing. In this article we present an application for mobile devices that supports turning the pages of piano scores in several ways. In a study with professional musicians and piano students, these modes were weighed against each other. The results show that computer-assisted page turning has advantages over conventional page turning.
Accelerated test methods are commonly used in order to predict concrete carbonation at natural concentrations. Here, specimens are carbonated at high CO2 concentrations at a specified temperature and relative humidity. However, the transfer of laboratory results to field behaviour remains difficult because CO2 transport is affected by the original moisture content of the specimens and the additional moisture formed by the carbonation reaction. Knowledge of moisture transport and content during carbonation is therefore required. Specimens made with Ordinary Portland cement and a water/cement ratio of 0.50 were exposed to 0.05, 2 and 10 vol.% CO2 for 28 days. Single-sided NMR moisture profiles were determined before, during and after carbonation. It is shown that moisture content increases due to carbonation at high CO2 concentrations (10 vol.%) at the beginning of the exposure. An increase in capillary pore water in front of and behind the carbonation front could be observed even after 28 days. During natural carbonation, moisture changes are mainly due to the change in porosity produced by the carbonation reactions. It is shown that changes in phase composition and thus porosity dominate the carbonation process in cement-based materials. Therefore, the suitability of high CO2 concentrations for an accelerated test that reflects field conditions is limited. Single-sided 1H NMR proved to be a valuable tool to investigate moisture transport in concrete non-destructively.
Installation of Probes
(2016)
Simultaneous laser beam melting (SLBM) allows the direct realization of multi-material components consisting of different polymer materials in a single Additive Manufacturing (AM) process. To achieve a high compound strength between different materials by adhesive bonding, a common boundary zone based on diffusion of the macromolecules is necessary, and thus both materials need to be compatible with regard to their specific adhesion behaviour. However, SLBM can also process incompatible polymers into multi-material parts. If two incompatible polymers are processed, a positive locking between the different materials is necessary to achieve a connection between them. The positive locking results from a random mixing of the different powder materials during powder deposition by a two-chamber recoater system, which leads to the formation of undercuts of one material in the other during melting and recrystallization. In this paper, thermoplastic elastomer (TPE) and polypropylene (PP) powders, which are incompatible, are processed into multi-material specimens. By qualifying basic material properties, their influence on the process, and especially on the formation of undercuts in the boundary zone, is analyzed. To also allow the analysis of the influence of both material and process parameters on the resulting part properties, tensile test specimens are built and their tensile strength is determined. Additionally, cross sections of the boundary zone are prepared and analyzed by microscope images.
Simultaneous Laser Beam Melting of polymers (SLBM) allows the generation of multi-material components, consisting of different thermoplastic polymers, within one additive building process. Besides the common advantages of conventional Laser Beam Melting (LBM), multi-material components built by SLBM can fulfill different product requirements like different chemical resistances or haptic material properties within a single part. To achieve such parts, different powder materials are deposited next to each other and preheated a few degrees below their melting temperatures by infrared emitters and laser radiation (λ = 10.60 μm), before in the last step the preheated powders are molten simultaneously by an additional laser source (λ = 1.94 μm). In this paper, different polymer powders like polypropylene (PP) and polyamide 12 (PA12) are used for the generation of multi-material specimens. By varying different building parameters according to a specified design of experiments, their influence on the part properties is analyzed. Important building parameters are the intensity and the irradiation time of the laser beam used for melting the preheated powders. Besides using tensile tests to determine the tensile strength and the elongation at break, the average part height in dependence of the energy input is analyzed. The overall aim is to specify the correlation between different building parameters regarding the energy deposition and the resulting part properties.
The generation of multi-material components by laser beam melting (LBM) is a challenge that requires the development of new coating devices for the preparation of arbitrary powder patterns. One solution is the use of vibration-controlled nozzles for the selective deposition of polymer powders. Powder flow can be initiated by vibration, enabling a start-stop function without using any mechanical shutter. In this report, the delivery of polymer powder by vibrating nozzles is investigated with respect to its application in LBM machines. For this purpose, a steel nozzle attached to a piezo actuator and a weighing cell is used in order to measure the stability and time-dependence of the powder mass flow upon vibration excitation using different kinds of powder formulations. The results show that precompression of the powder inside the nozzle by vibration excitation is essential to realize a reliable start-stop function with reproducible discharge cycles and to prevent an initial flush of powder. Moreover, the use of different powder materials showed that mass flow is possible even with powders that are not optimized regarding flowability, but it is readily enhanced by a factor of 2 to 3 by admixing Aerosil® fumed silica.
Deaflympic sport (synonym: deaf sport) is largely unknown both in the media and among the sporting public. Paralympic disability sport is receiving increasing attention, whereas deaflympic sport receives little overall. This contribution therefore provides an overview. First, its historical development is described in more detail, making clear that the institutionalized sport of athletes with hearing impairments has the longest tradition within disability sport. In addition, the institutional framework of this sport movement is described. It should be emphasized that the Deutscher Gehörlosen-Sportverband (DGS), and not the Deutscher Behindertensportverband (DBS), is responsible for the elite sport of people with hearing impairments, since deaflympic sport is a historically grown, independent sport movement. Further sections address aspects of training and competition of athletes with hearing impairments. To be allowed to compete in deaf sport, a minimum degree of hearing loss is required. Participation in high-level (international) competitions (e.g. the Deaflympics) then requires, as in elite sport in general, systematic and regular training. For athletes with hearing impairments, this particularly means a specific form of communication, of conveying and learning (new) movements, and attention to possible limitations of balance ability. These aspects are discussed in detail in this review. Among other things, it is made clear that "deaf" athletes often still have residual hearing. Furthermore, training requires a specific form of communication between coach and athlete in order to be effective in terms of performance development. Finally, hearing impairments are often accompanied by deficits in balance ability, which, however, are not present a priori and can be improved through training.
The conclusion emphasizes once more why basic knowledge of deaflympic sport not only contributes to the performance development of elite athletes with hearing impairments, but can also be useful for coaches and clubs in the regular sport system of the German Olympic Sports Confederation (DOSB).
Selective Laser Sintering (SLS) is an additive manufacturing technique whereby a laser melts polymer powder layer by layer to generate three-dimensional parts. It enables the fabrication of parts with high degrees of complexity, nearly no geometrical restrictions, and without the necessity of a tool or a mold. Due to the orientation in the building space, the processing parameters, and the powder properties, the resulting parts possess an increased surface roughness. In comparison to other manufacturing techniques, e.g. injection molding, the surface roughness of SLS parts results from partially melted powder particles on the surface layer. The actual surface roughness must thus be characterized with respect to the part's eventual application. At the moment, there is no knowledge regarding which measuring technique is most suitable for detecting and quantifying SLS parts' surface roughness. The scope of this paper is to compare tactile profile measurement methods, as established in industry, to optical measurement techniques such as Focus Variation, Fringe Projection Technique (FPT), and Confocal Laser Scanning Microscope (CLSM). The advantages and disadvantages of each method are presented and, additionally, the effect of tactile measurement on a part's surface is investigated.
Powder-based Additive Manufacturing technologies offer huge potential for building parts with almost no geometrical restrictions, but both process control and the part properties are strongly dependent on different characteristics of the material, such as its flowability. In this work, different weight percentages of nano-scaled silicon dioxide particles (Aerosil®) are admixed to pure polyethylene and polypropylene powder and the resulting flowability is determined. Besides using the Hausner ratio as a standardized value, the degree of coverage is introduced as a new characteristic to quantify powder flowability. The degrees of coverage are compared to the Hausner ratios to allow a discussion and evaluation of the different characteristic values. Additionally, tensile bars consisting of polypropylene are generated to determine the porosity by cross sections and the mechanical part properties by tensile testing. As mechanical part properties, the tensile strength and elongation at break are determined, and the effects of different powder flowabilities on these properties are analyzed.
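For reference, the Hausner ratio mentioned as the standardized flowability value is the quotient of tapped and bulk density. A minimal sketch with illustrative densities (not measured data) and commonly cited interpretation thresholds:

```python
def hausner_ratio(tapped_density: float, bulk_density: float) -> float:
    """Hausner ratio = tapped density / (freely settled) bulk density."""
    return tapped_density / bulk_density

def flowability(hr: float) -> str:
    """Coarse interpretation using commonly cited threshold ranges."""
    if hr < 1.25:
        return "good flow"
    if hr <= 1.4:
        return "moderate flow"
    return "poor flow"

# Illustrative values for a polymer powder with and without fumed silica admixture
for label, bulk, tapped in [("PP neat", 0.38, 0.52), ("PP + 0.2 wt% Aerosil", 0.41, 0.49)]:
    hr = hausner_ratio(tapped, bulk)
    print(f"{label}: HR = {hr:.2f} ({flowability(hr)})")
```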
This study increases the basic understanding of the optical material properties of polymer powders used in selective laser sintering (SLS). For this purpose, different polymer powder materials were analyzed regarding their optical material properties with an integrating sphere measurement setup. The measurements show a direct connection between the absorption behavior of the solid material and the overall optical characteristics of the same material in powder form. The results were used to develop an advanced explanation model for the optical material properties of powders. At present, existing explanation models only consider the occurrence of multiple reflections in the gaps between the particles to explain the overall optical material properties of powder materials. Thus, by also considering the absorption behavior of the single particles, the basic understanding of the beam-matter interaction and its effect on the optical material properties of powder materials can be expanded.
Automatic speech recognition (ASR) is not only becoming increasingly accurate, but also increasingly adapted for producing timely, incremental output. However, overall accuracy and timeliness alone are insufficient when it comes to interactive dialogue systems, which require stability in the output and responsivity to the utterance as it is unfolding. Furthermore, for a dialogue system to deal with phenomena such as disfluencies and to achieve a deep understanding of user utterances, these should be preserved or marked up for use by downstream components, such as language understanding, rather than be filtered out. Similarly, word timing can be informative for analyzing deictic expressions in a situated environment and should be available for analysis. Here we investigate the overall accuracy and incremental performance of three widely used systems and discuss their suitability from the aforementioned perspectives. From the differing performance along these measures we provide a picture of the requirements for incremental ASR in dialogue systems and describe freely available tools for using and evaluating incremental ASR.
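As a simplified illustration of measuring incremental stability (a home-grown measure in the spirit of, but not identical to, the metrics used in such evaluations), a sketch that counts how many incrementally emitted words are later revoked:

```python
def instability(partial_hypotheses):
    """Ratio of revoked words to all add/revoke edits between successive partial
    ASR hypotheses; 0.0 means the output was never revised."""
    edits, revisions = 0, 0
    prev = []
    for hyp in partial_hypotheses:
        words = hyp.split()
        common = 0                                   # length of the shared prefix
        while common < min(len(prev), len(words)) and prev[common] == words[common]:
            common += 1
        revoked = len(prev) - common                 # words taken back
        added = len(words) - common                  # words newly emitted
        edits += revoked + added
        revisions += revoked
        prev = words
    return revisions / edits if edits else 0.0

# Hypothetical sequence of partial hypotheses for one utterance
partials = ["the", "the red", "the rat", "the rat sat", "the rat sat down"]
print(f"instability: {instability(partials):.2f}")  # 0.17: one word revoked out of six edits
```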
Most modern and post-modern poems have developed a post-metrical idea of lyrical prosody that employs rhythmical features of everyday language and prose instead of a strict adherence to rhyme and metrical schemes. This development is subsumed under the term free verse prosody. We present our methodology for the large-scale analysis of modern and post-modern poetry in both their written form and as spoken aloud by the author. We employ language processing tools to align text and speech, to generate a null-model of how the poem would be spoken by a naïve reader, and to extract contrastive prosodic features used by the poet. On these, we intend to build our model of free verse prosody, which will help to understand, differentiate and relate the different styles of free verse poetry. We plan to use our processing scheme on large amounts of data to iteratively build models of styles, to validate and guide manual style annotation, to identify further rhythmical categories, and ultimately to broaden our understanding of free verse poetry. In this paper, we report on a proof-of-concept of our methodology using smaller amounts of poems and a limited set of features. We find that our methodology helps to extract differentiating features in the authors’ speech that can be explained by philological insight. Thus, our automatic method helps to guide the literary analysis and this in turn helps to improve our computational models.
Predictive incremental parsing produces syntactic representations of sentences as they are produced, e.g. by typing or speaking. In order to generate connected parses for such unfinished sentences, upcoming word types can be hypothesized and structurally integrated with already realized words. For example, the presence of a determiner as the last word of a sentence prefix may indicate that a noun will appear somewhere in the completion of that sentence, and the determiner can be attached to the predicted noun. We combine the forward-looking parser predictions with backward-looking N-gram histories and analyze in a set of experiments the impact on language models, i.e. stronger discriminative power but also higher data sparsity. Conditioning N-gram models, MaxEnt models or RNN-LMs on parser predictions yields perplexity reductions of about 6%. Our method (a) retains online decoding capabilities and (b) incurs relatively little computational overhead which sets it apart from previous approaches that use syntax for language modeling. Our method is particularly attractive for modular systems that make use of a syntax parser anyway, e.g. as part of an understanding pipeline where predictive parsing improves language modeling at no additional cost.
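As a minimal sketch of the conditioning idea — reweighting backward-looking N-gram probabilities by the parser's forward-looking prediction of the next word's syntactic category — with purely illustrative probabilities rather than the paper's MaxEnt or RNN models:

```python
# Illustrative bigram distribution after the prefix "... a", plus POS tags and a
# hypothetical parser prediction over the category of the upcoming word.
bigram_after_a = {"dog": 0.30, "runs": 0.05, "the": 0.05, "cat": 0.25, "eats": 0.10}
pos = {"dog": "NOUN", "cat": "NOUN", "runs": "VERB", "eats": "VERB", "the": "DET"}
parser_next_pos = {"NOUN": 0.7, "VERB": 0.2, "DET": 0.1}  # from the predictive parser

def conditioned(bigram, pos, pos_prediction):
    """Reweight bigram probabilities by the parser's category prediction and renormalize."""
    scores = {w: p * pos_prediction[pos[w]] for w, p in bigram.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

print(conditioned(bigram_after_a, pos, parser_next_pos))
# words matching the predicted category (here: nouns) gain probability mass
```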
In this paper we present a method to efficiently cull large parts of a scene prior to shadow map computations for many-lights settings. Our method is agnostic to how the light sources are generated and thus works with any method of light distribution. Our approach is based on previous work in culling for ray traversal to speed up area light sampling. Applied to shadow mapping, our method works for high- and low-resolution shadow maps and, in contrast to previous work on many-lights rendering, neither entails scene approximations nor imposes limits on light range, while still providing significant gains in performance. In contrast to standard culling methods, shadow map rendering itself is sped up by a factor of 1.5 to 8.6, while the speedup of shadow map rendering, lookup and shading together ranges from 1.1 to 4.2.
We present a method to compute post-processing depth of field (DOF) that produces more accurate results than previous approaches. Our method is based on existing approaches, namely DOF rendering by splatting and fast, tile-based particle accumulation. Using tile-based accumulation allows us to correctly sort out-of-focus pixels and apply proper alpha-blending to avoid artifacts commonly encountered with filter-based depth-of-field methods.
Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on-the-fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a-priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.
Over the last decade a number of high performance, domain-specific languages (DSLs) have started to grow and help tackle the problem of ever diversifying hard- and software employed in fields such as HPC (high performance computing), medical imaging, computer vision etc. Most of those approaches rely on frameworks such as LLVM for efficient code generation and, to reach a broader audience, take input in C-like form. In this paper we present a DSL for image processing that is on-par with competing methods, yet its design principles are in strong contrast to previous approaches. Our tool chain is much simpler, easing the burden on implementors and maintainers, while our output, C-family code, is both adaptable and shows high performance. We believe that our methodology provides a faster evaluation of language features and abstractions in the domains above.
In this paper we show how a feature-oriented development methodology can be exploited to investigate a large set of possible implementations for a real-time rendering algorithm. We rely on previously published work to explore potential dimensions of the implementation space of an algorithm to be run on a graphics processing unit (GPU) using CUDA. The main contribution of our paper is to provide a clear example of the benefit to be gained from existing methods in a domain that only slowly moves toward higher-level abstractions. Our method employs a generative approach and makes heavy use of Common Lisp macros before the code is ultimately transformed to CUDA.
Ausbeutendes Pflegesystem? Gerechtigkeitsprobleme der derzeitigen Organisation von Sorgearbeit
(2016)
Smart durch ARTSS
(2016)
Inklusion
(2016)
Leitfaden zum Aufbau Praxisbasierter Forschungsnetzwerke (PBFN) in den Gesundheitsfachberufen
(2016)
As a systemic disorder, the neurological language disorder aphasia affects not only those afflicted but, to a comparable degree, also their relatives. Studies show that the relatives' quality of life is impaired. The aim of this qualitative study is to take a closer look at the relatives' subjective perspective on their own situation.
In Germany, patient care is divided into an inpatient and an outpatient sector. This transition, referred to as an interface, has already been examined in various research projects. Barriers were identified that in many cases impede the transition from inpatient to outpatient therapy. For speech-language therapy, and in particular for people with aphasia, sufficient information on this topic has so far been lacking. Therefore, eight interviews were conducted with the aim of gaining a first insight, from people with aphasia, into the transition process from one rehabilitation setting to the next. The central question was which resources, but also which obstacles, were perceived by those affected. The results of the interviews could be grouped into the categories organization, emotions, information, and relatives. Within these categories, clear interface-related deficits, but also indications for interface optimization, could be identified. Besides insufficient preparation for discharge from the rehabilitation clinics, the lack of important information, and the incomplete involvement of relatives, those affected also named infrastructural conditions as barriers at the inpatient-outpatient interface. Based on these barriers, possibilities are presented to minimize the obstacles in the transition from inpatient to outpatient care.