The machining of furniture components requires a complete geometrical specification of dimensions with associated tolerances, first to define appropriate manufacturing processes and to carry out design and setup. Verification of the defined specification after production is the subsequent step: it checks the quality and, with respect to the production equipment, allows process capability and effectiveness to be calculated.
In 2011, the system of Geometrical Product Specification (GPS) was published in full. The system covers a distinct range of geometrical quality characteristics and consists of chains of standards that refer to each other. Three elements of these chains specify the product; three further elements enable verification of the product and of the measuring system. The system does not cover verification of the production processes, i.e. process qualification.
The German standard DIN 68100 provides a framework for dimensioning and tolerancing lengths as well as angles, parallelism and straightness of furniture components. It includes a method to account for the swelling and shrinking of wood and wood-based materials. Despite its benefit of specifying products along a supply chain, it is rarely used in the industry nowadays, and DIN 68100 is currently far removed from GPS.
To gain the full benefits of GPS, the industry needs consolidated action. New symbols for drawings and an updated tolerancing method that accounts for moisture have to be developed, as do methods for measuring the dimensions of flexible workpieces, which most parts in the furniture industry are. Within GPS, the link between product verification and process specification is not fully consistent. In other industries, SPC and methods of process qualification are established; the kitchen industry is currently introducing SPC or similar procedures, and a move towards GPS can accompany these efforts.
Because of a shift in tolerance principles, the GPS system is hardly applicable directly to the furniture industry and to standards such as DIN 68100, yet there is a real risk that the system is invoked inadvertently and out of ignorance.
This paper sketches a comprehensive system of product specification and verification for furniture components as well as process qualification. It shares experience gained in the kitchen furniture industry and adds a theoretical analysis of a possible application of GPS chains to important examples of quality characteristics. The activities are part of a process (VDI 3415-2) carried out in working group 102 of the Association of German Engineers (VDI).
This research presents a decision support methodology for the multi-criteria supplier selection and order allocation problem. The proposed approach supports purchasing managers in assembling mid-term supplier portfolios while making them aware of the trade-offs between the supplier sustainability, the purchasing costs, and the overall supply risk. First, we propose a multi-objective optimization model with three objectives: to maximize the supplier sustainability, to select the supplier portfolio with the lowest purchasing costs, and to minimize the supply risk. Our model extends existing mathematical approaches that follow the portfolio theory founded by H. Markowitz by integrating the aspect ‘risk’ into the supplier selection problem. Second, since we allow for integer variables in our model, in contrast to the classical Markowitz portfolio theory, we use the ε-constraint method to visualize the efficient surface. The possibility of considering the non-dominated set of supplier portfolios is advantageous for purchasing managers as they gain a picture of the different optimal supplier portfolios and are able to analyze the trade-offs between the different purchasing goals before making a decision. Finally, we illustrate the applicability of the proposed methodology in a real-world supplier selection and order allocation case from the automotive industry. In the example case, we identify 1754 optimal supplier portfolios that may be assembled based on the eight available suppliers. Our analyses show that each optimal portfolio consists of two suppliers, with one specific supplier being included in each portfolio. Furthermore, four suppliers are not part of any optimal solution.
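As a rough illustration of the ε-constraint idea, the sketch below minimizes purchasing costs while the remaining objectives are turned into bounds that are then swept; the supplier data, the linear objective forms, and the PuLP formulation are hypothetical simplifications of the paper's model.

```python
# A minimal epsilon-constraint sketch for supplier selection and order
# allocation; all data and the linear objective forms are hypothetical.
import pulp

suppliers = ["S1", "S2", "S3", "S4"]
cost = {"S1": 10, "S2": 12, "S3": 9, "S4": 11}       # unit purchasing cost
sust = {"S1": 0.8, "S2": 0.6, "S3": 0.4, "S4": 0.9}  # sustainability score
risk = {"S1": 0.3, "S2": 0.1, "S3": 0.5, "S4": 0.2}  # supply risk score
demand = 100

def solve(eps_sust, eps_risk):
    """Minimize cost s.t. average sustainability >= eps_sust, risk <= eps_risk."""
    prob = pulp.LpProblem("supplier_portfolio", pulp.LpMinimize)
    q = pulp.LpVariable.dicts("qty", suppliers, lowBound=0)
    prob += pulp.lpSum(cost[s] * q[s] for s in suppliers)          # objective
    prob += pulp.lpSum(q[s] for s in suppliers) == demand          # meet demand
    prob += pulp.lpSum(sust[s] * q[s] for s in suppliers) >= eps_sust * demand
    prob += pulp.lpSum(risk[s] * q[s] for s in suppliers) <= eps_risk * demand
    status = prob.solve(pulp.PULP_CBC_CMD(msg=0))
    if pulp.LpStatus[status] != "Optimal":
        return None
    return {s: q[s].value() for s in suppliers}, pulp.value(prob.objective)

# Sweep the epsilon bounds to trace (part of) the efficient surface.
for eps_s in (0.5, 0.6, 0.7, 0.8):
    for eps_r in (0.2, 0.3, 0.4):
        res = solve(eps_s, eps_r)
        if res:
            alloc, total_cost = res
            print(f"sust>={eps_s}, risk<={eps_r}: cost={total_cost:.0f}, {alloc}")
```

Each feasible bound combination yields one point of the efficient surface; plotting the resulting costs over the epsilon grid gives the trade-off picture the paper describes.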
At present, there is no globally accepted standard for the allocation of greenhouse gas (GHG) emissions to shipments in road freight transportation. The only official international standard for emission calculation of transport operations is the European Norm EN-16258. However, even this norm still allows choosing from several alternative emission allocation schemes. This research aims to harmonize the process of GHG declarations for supply chains by identifying, among all allocation units specified by EN-16258, the one that best describes a shipment's contribution to GHG emissions. For this purpose, concepts of cooperative game theory are used. First, we develop three transport scenarios that allow studying a shipment's effect on GHG: a vehicle routing problem, a network flow model, and a mixed scenario. Our approach extends previous research projects because we take into account that shipment characteristics in terms of origin, destination, weight, and volume consume transport capacities to different degrees, impact the routing of the commercial vehicles and, thus, determine GHG. Second, we present the results of a computational study that is based on the introduced transport scenarios and that compares the allocation vectors resulting from the EN-16258 allocation rules with the Shapley value, which serves as a benchmark for a shipment's contribution to GHG. Furthermore, we show how often the EN-16258 allocation principles are in line with a set of game-theoretical fairness criteria. The results indicate that the allocation unit ‘distance’ is the closest to the game theory benchmark and most often in line with game-theoretical fairness criteria.
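To make the benchmark concrete, the following sketch computes the Shapley value over all join orders for three shipments with a hypothetical coalition emissions function; the paper instead derives the coalition values from vehicle routing and network flow scenarios.

```python
# A minimal Shapley-value sketch for allocating joint transport emissions
# to shipments; the characteristic function emis() is hypothetical.
from itertools import permutations

shipments = ("A", "B", "C")

def emis(coalition):
    """Hypothetical GHG emissions (kg CO2e) of serving these shipments jointly."""
    table = {(): 0, ("A",): 100, ("B",): 80, ("C",): 60,
             ("A", "B"): 140, ("A", "C"): 130, ("B", "C"): 110,
             ("A", "B", "C"): 170}
    return table[tuple(sorted(coalition))]

def shapley(players, v):
    """Average marginal contribution of each player over all join orders."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        seen = []
        for p in order:
            phi[p] += v(seen + [p]) - v(seen)
            seen.append(p)
    return {p: phi[p] / len(orders) for p in players}

print(shapley(shipments, emis))  # {'A': 75.0, 'B': 55.0, 'C': 40.0}
```

By construction the allocations sum to the grand-coalition emissions (efficiency), which is one of the fairness criteria against which the EN-16258 rules are checked.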
This research presents a novel, state-of-the-art methodology for solving a multi-criteria supplier selection problem considering risk and sustainability. It combines multi-objective optimization with the analytic network process to take into account sustainability requirements of a supplier portfolio configuration. To integrate ‘risk’ into the supplier selection problem, we develop a multi-objective optimization model based on the investment portfolio theory introduced by Markowitz. The proposed model is a non-standard portfolio selection problem with four objectives: (1) minimizing the purchasing costs, (2) selecting the supplier portfolio with the highest logistics service, (3) minimizing the supply risk, and (4) ordering as much as possible from those suppliers with outstanding sustainability performance. The optimization model, which has three linear and one quadratic objective function, is solved by an algorithm that analytically computes a set of efficient solutions and provides graphical decision support through a visualization of the complete and exactly-computed Pareto front (a posteriori approach). The possibility of computing all Pareto-optimal supplier portfolios is beneficial for decision makers as they can compare all optimal solutions at once, identify the trade-offs between the criteria, and study how the different objectives of supplier portfolio configuration may be balanced to finally choose the composition that satisfies the purchasing company's strategy best. The approach has been applied to a real-world supplier portfolio configuration case to demonstrate its applicability and to analyze how the consideration of sustainability requirements may affect the traditional supplier selection and purchasing goals in a real-life setting.
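For intuition about the a-posteriori approach, the brute-force sketch below enumerates small hypothetical portfolios and filters the non-dominated ones; the paper's algorithm computes the exact Pareto front of the quadratic-linear model analytically, which this toy enumeration does not attempt.

```python
# A minimal Pareto-dominance sketch over hypothetical supplier portfolios;
# scores and the aggregation rules are illustrative only.
from itertools import combinations

suppliers = {  # name: (cost, service, risk, sustainability)
    "S1": (10, 0.9, 0.3, 0.8), "S2": (12, 0.7, 0.1, 0.6),
    "S3": (9, 0.6, 0.5, 0.4), "S4": (11, 0.8, 0.2, 0.9),
}

def objectives(portfolio):
    # All objectives expressed as "smaller is better" (maximized ones negated).
    cost = sum(suppliers[s][0] for s in portfolio) / len(portfolio)
    service = -min(suppliers[s][1] for s in portfolio)
    risk = max(suppliers[s][2] for s in portfolio)
    sust = -sum(suppliers[s][3] for s in portfolio) / len(portfolio)
    return (cost, service, risk, sust)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = [c for r in (1, 2) for c in combinations(suppliers, r)]
scored = {c: objectives(c) for c in candidates}
pareto = [c for c in candidates
          if not any(dominates(scored[o], scored[c]) for o in candidates if o != c)]
print(pareto)  # the non-dominated portfolios a decision maker would compare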
The aim of this article is an integrative overview and an analysis of the opportunities and risks of R&D activities of multinational pharmaceutical companies in emerging markets. To this end, the literature on the R&D activities of big pharma companies and their internationalization in emerging markets was systematically reviewed. The traditional R&D model, in which big pharma companies conduct their R&D themselves and in a centralized manner, is being complemented by new R&D approaches in emerging markets. These include diversification of the product portfolio, outsourcing and cooperation, as well as the emergence of innovation clusters in emerging markets. Combining these approaches yields three new business models for big pharma companies.
Virtual teams that use integrated communication technologies are ubiquitous in cross-border collaboration. This study explored media use and communication performance in multilingual virtual teams. Based on surveys from 96 virtual teams (with 578 team members), the research showed that more time spent in synchronous communication channels such as online conferences increased inclusion and satisfaction. Team members with lower language proficiency felt less included in synchronous and asynchronous collaboration, whereas team members with higher language proficiency felt less satisfied with asynchronous collaboration. Also, speakers with limited language proficiency were significantly less likely to view synchronous tools as helpful for their teams to reach a mutual decision. Our data support Media Synchronicity Theory (MST) for native and highly proficient English speakers. However, MST needs to be adjusted to account for different levels of language proficiency.
Purpose
The purpose of this paper is to investigate how professional football clubs from the English Premier League, German Bundesliga and Spanish Primera División use digital media to expand their international reach in emerging football markets (EFM) outside of Europe. Based on the EPRG framework and Rugman’s home-region hypothesis, the aim is to broaden the perspective on where “sports go global” and to further the understanding of actors’ international orientation in the digital sphere.
Design/methodology/approach
The study is based on data from desk research and a qualitative survey, comprising information on international digital media activities of 58 European clubs. Cluster analysis is used to identify different international orientations with regard to digital media activities.
Findings
The data provide evidence that clubs differ strongly in their orientations towards EFM. While some global players that provide digital media content in several EFM languages and attract a large share of Facebook followers from EFM exist, other clubs focus on their home region. League-specific differences become apparent.
Originality/value
This study determines the international online orientations of European football clubs by combining two previously separated research streams in football management studies: internationalisation and digital media activities. Most clubs with a strong EFM fan base choose polycentric, multi-language digital media strategies, followed by geocentric, standardised approaches. By offering a novel angle on internationalisation in professional football, this study contributes towards optimising clubs’ international online strategies for EFM, which are markets that promise high growth rates.
A method for simulating processes is presented that is based on measuring the input applied to a real system and its measured response. The method is fully transparent, because the theoretical prerequisites for understanding it consist only of material taught in bachelor's engineering programs: solving linear ODEs with constant coefficients, the Laplace transform, and the method of least squares. It requires no nonlinear optimizers or iterations, because the solution can be expressed in two analytical equations. The method has a wide range of practical applications, as it is applicable to stable and unstable systems with and without damping. There is no restriction on the choice of input signal, and the initial conditions can be estimated along with the parameters. The method can be used in open and closed control loops.
To solve a control design task, a simulation model of the plant is needed: either because most design methods for determining controller parameters presuppose one, or because one wants to simulate the behavior of the control loop safely, verifying the desired behavior before using the controller in the real closed loop. The motivation for developing the method presented here is to give upper-semester bachelor's students in engineering, who encounter control engineering for the first time, a method for deriving plant transfer functions that is as generally applicable as possible, fully transparent in its theory, and understandable with the mathematics they have learned so far. In principle, there are two ways to obtain a simulation model of the plant. In physical or theoretical modeling, physical conservation laws are applied to the idealized, i.e. simplified, technical process and rearranged with respect to the chosen input and output signals; this procedure can be very time-consuming, and in addition the model parameters are usually unknown. In experimental modeling, one starts from measured input and output signals and fits the simulated response to the measured response using more or less elaborate optimization procedures.
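A discrete-time least-squares variant conveys the core idea: if the model is linear in its parameters, a single linear solve suffices. The sketch below identifies a hypothetical first-order plant from noisy input/output samples; the article itself works with continuous-time transfer functions via the Laplace transform.

```python
# A minimal least-squares (ARX) identification sketch; the plant, the noise
# level, and the input signal are hypothetical.
import numpy as np

# Simulate "measured" data from a hypothetical plant y[k+1] = a*y[k] + b*u[k].
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.uniform(-1, 1, 200)                # arbitrary input signal
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k]
y_meas = y + rng.normal(0, 0.01, y.size)   # measurement noise

# Least squares: stack y[k+1] = [y[k], u[k]] @ [a, b] and solve in one step.
Phi = np.column_stack([y_meas[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y_meas[1:], rcond=None)
print("estimated a, b:", theta)            # close to (0.9, 0.5)
```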
The use of artificial intelligence (AI) will have a lasting impact on our everyday lives. Using techniques of machine learning or deep neural networks, models can be learned from existing data sets and predictions derived from them. We briefly outline two current projects that deal, among other things, with the potentials and the human-centered design of artificial intelligence systems. We place this, in the form of selected highlights, in the context of the current debate on the development and design of AI systems. In doing so, we outline challenges and opportunities for the participatory and socially responsible development of complex systems based on artificial intelligence techniques. Finally, we focus on the working level, briefly describe which concrete activities we are planning, and put them up for discussion.
In modern medicine, Clinical Practice Guidelines (CPGs) are well-established resources for the appropriate treatment of diseases. Evidence-based CPGs contain recommendations which are based on the state of the art and which have been achieved by consensus of several experts. Nevertheless, there is a potential for problems in translating guideline documents into specific actions for physicians. Therefore we propose to formalize the treatment process in an understandable representation as UML activities together with a domain expert. This formalization serves as a basis for the transfer of knowledge into a model, in this case PROforma, which directly allows execution in an interactive assistance software. The results of this work are part of an ongoing research project on the treatment of colon cancer based on the corresponding evidence-based CPG.
Probabilistic Estimation of Human Interaction Needs in Context of a Robotic Assistance in Geriatrics
(2019)
The key purpose of assistance robots is to help people coping with work-related or everyday tasks. To ensure an intuitive and effective support by an assistance robot, its expectation conform behavior is essential. In particular, when using assistance robots in geriatrics to assist elderly patients, special attention to the human-robot interaction should be paid. In order to help elderly patients maintain their independence and abilities as much as possible, the robot should only intervene when its support is needed. Therefore, the continuous estimation of the patient’s need for interaction is of particular importance. For enabling suitable models to estimate this need, we elaborate the use of Bayesian Networks. The analysis of our results seems promising, yielding a robust and practical approach.
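As a toy version of such an estimator, the sketch below applies Bayes' rule with conditionally independent binary observations and hypothetical probabilities; the project's Bayesian networks are, of course, richer than this naive structure.

```python
# A minimal Bayes-rule sketch for estimating the need for interaction;
# all observation names and probabilities are hypothetical.
def posterior_need(evidence, p_need=0.2, likelihoods=None):
    """P(need | evidence) for conditionally independent binary observations."""
    # Per observation: (P(obs=True | need), P(obs=True | no need))
    likelihoods = likelihoods or {
        "gaze_at_robot":  (0.8, 0.3),
        "task_stalled":   (0.7, 0.1),
        "verbal_request": (0.6, 0.05),
    }
    p_e_need, p_e_not = 1.0, 1.0
    for obs, seen in evidence.items():
        p1, p0 = likelihoods[obs]
        p_e_need *= p1 if seen else (1 - p1)
        p_e_not *= p0 if seen else (1 - p0)
    num = p_e_need * p_need
    return num / (num + p_e_not * (1 - p_need))

print(posterior_need({"gaze_at_robot": True, "task_stalled": True,
                      "verbal_request": False}))  # ~0.66
```

Thresholding such a posterior is one simple way to decide when the robot should offer support rather than intervene by default.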
With the digital transformation of companies, ever larger amounts of data are generated and available for analysis. Process mining techniques can be used to extract and analyze process models from these data. Related techniques have quickly developed into an important field with constantly increasing investments in recent years. Thus, the automated analysis of processes has gained an important role in many companies. In this context, graphs have been shown to be an intuitive representation of how the gathered processes are carried out using the aforementioned techniques. For the analysis of these so-called control flow graphs, we investigate the use of convolutional neural networks that are specially designed for graphs: graph convolutional networks (GCNs). In our contribution, GCNs are used to perform a regression task based on individual control flows of a process in which farmers apply for specific governmental payments. The approach achieved promising results on this publicly available data set.
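The propagation rule of such a layer is compact enough to sketch directly. The NumPy toy below applies one graph convolution, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), to a hypothetical four-node control-flow graph with random, untrained weights.

```python
# A minimal single GCN layer (Kipf & Welling style) on a toy control-flow
# graph; the graph, features, and weights are hypothetical.
import numpy as np

A = np.array([[0, 1, 0, 0],   # directed edges of a toy 4-activity control flow
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], float)
A_hat = A + A.T + np.eye(4)               # symmetrize and add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency

H = np.eye(4)                             # one-hot node features (activity types)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))               # layer weights (would be learned)

H_next = np.maximum(0, A_norm @ H @ W)    # one GCN layer: aggregate + transform
print(H_next.shape)                       # (4, 8) -- new node embeddings
```

Stacking such layers and pooling the node embeddings yields one vector per control-flow graph, on which a regression head can then be trained.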
This thesis investigates the measurement and prediction of machinery noise in timber-frame buildings. To quantify the structure-borne sound power input from multi-point sources, simplified approaches were assessed that reduce the required data for out-of-plane force excitation. This identified approaches that give estimates within ±5 dB from 20 Hz to 2000 Hz. To investigate the importance of out-of-plane moment excitation, inverse methods were used to determine the power input; these were affected by noise, but processing was used to overcome this shortcoming. A series of experimental investigations were carried out on a timber-frame structure undergoing mechanical point excitation. The driving-point mobility showed orthotropic plate characteristics at low frequencies, ribbed-plate characteristics in a narrow frequency band, and infinite plate characteristics in the mid- and high-frequency ranges. The moment mobility above or in-between studs was similar to infinite beam or plate theory, with interpolation between these theories where necessary. The experimental work indicated the potential to use Statistical Energy Analysis (SEA) to predict sound transmission. The first experimental finding was that above the mass-spring-mass resonance frequency, the vibrational response of the wall leaves was uncorrelated. The second was a significant decrease in vibration across the wall from the excitation point, with structural intensity showing a decrease in net power flow across successive timber studs. The third was that tongue and groove connections between chipboard sheets significantly reduce the vibration transmission above 500 Hz. This led to different SEA models being used to model a timber-frame wall undergoing mechanical point excitation. A 41-subsystem model was found to be necessary to reproduce the measured vibration levels on both leaves within 10 dB. As there is a significant decrease in vibration with distance in the mid- and high-frequency range, the region close to the excitation point is particularly important, and the SEA model has better accuracy in this region. An alternative engineering approach to the prediction of machinery noise in timber-frame buildings was introduced and validated that uses measured transmission functions between the injected power and the spatial-average sound pressure level in a receiving room. A field survey and case studies indicate this is a feasible and practical approach.
Providing a subset of previously studied items as retrieval cues can both impair and improve memory for the remaining items. Here, we investigated such part-list cuing effects in younger and older adults’ episodic recall, using listwise directed forgetting to manipulate study context access at test. When context access was maintained, part-list cuing impaired recall regardless of age. In contrast, when context access was impaired, part-list cuing improved recall in younger but not in older adults. The results are consistent with the proposal that older adults show intact inhibition and blocking of competing information, but reduced capability for episodic context reactivation.
In the project, domestic ventilation concepts with PushPull-Fans were investigated. These ventilation units are characterized by a periodically changing supply and exhaust air operation. To ensure balanced operation, these ventilation units are always used in pairs, with one unit being in the exhaust air mode and the other in supply air mode. Measurements were made in a research apartment. The results were used to validate a CFD simulation model for transient studies. A room-by-room layout with two units in one room and a cross-room ventilation layout with two connected units in two rooms were analysed. The age of air was used as the evaluation parameter. With the room-by-room ventilation layout a lower age of air in the bedroom compared to the cross-room layout was obtained. However, in the room-by-room layout, the air flow through the larger living room with connected corridor was poorer in the rear area. With the cross-room layout, the flow through the rooms was more uniform. With the cross-room layout, the average age of air in the apartment could be slightly reduced compared to the room-by-room layout.
In this study, an experimental apparatus is used to excite four U-tube-shaped liquid pistons connected in series and to study their behaviour. Some of the gas spaces are heated to induce piston oscillations; in others, gas expansion is utilised to produce a refrigeration effect. It was discovered that the liquid piston surface would become unstable and turbulent at relatively low gas charge pressures (2 bar to 3 bar). Cylindrical polyethylene floats were employed at each piston surface in order to reduce the area of the free surface of each piston and allow experiments to be conducted over a wide range of operating conditions. Experiments were carried out using gas charge pressures in the range of 1 bar to 6 bar. The resulting liquid piston oscillations were measured and analysed to assess the impact of any developing piston instability. Evidence of a liquid piston acceleration limit, likely resulting from the Rayleigh-Taylor instability phenomenon, is consistently observed during the experiments. The use of submerged polyethylene piston floats is found to increase the surface stability and enable maximum accelerations of 25 m/s² to 30 m/s².
In March 2019, German-speaking scientists and scholars calling themselves Scientists for Future published a statement in support of the youth protesters in Germany, Austria, and Switzerland (Fridays for Future, Klimastreik/Climate Strike), verifying the scientific evidence that the youth protesters refer to. In this article, they provide the full text of the statement, including the list of supporting facts (in both English and German), as well as an analysis of the results and impacts of the statement. Furthermore, they reflect on the challenges for scientists and scholars who feel a dual responsibility: on the one hand, to remain independent and politically neutral, and, on the other hand, to inform and warn societies of the dangers that lie ahead.
Wide-bandgap semiconductors such as Silicon Carbide (SiC) or Gallium Nitride (GaN) enable fast switching and high switching frequencies of power electronics. However, this potential cannot be exploited due to limitations caused by parasitic elements of packaging and interconnections. This paper shows a possibility to minimize parasitic elements of a half-bridge switching cell with 650 V GaN dies integrated into a printed circuit substrate. A sub-nH commutation loop with 0.5 nH inductance gives superior switching characteristics compared to circuits with packaged dies. Simulation and experimental results of an inverse double pulse test confirm our expectations. This study further reveals additional benefits of the proposed technology in terms of mechanical stability and thermal interfacing to heat sinks compared to circuits with packaged dies.
Power cycling and temperature endurance test of a GaN switching cell with substrate integrated chips
(2019)
We present a reliability study of a half-bridge switching cell with substrate integrated 650 V GaN HEMTs. Power Cycling Testing with a ΔTj of 100 K has revealed thermo-mechanically induced failures of contact vias after more than 220 kcycles. The via failure mode of contact opening is confirmed by reverse-bias pulsed IV-measurements to be primarily triggered by a ΔTj imposed thermal gradient and not by a high Tj. The chip electrical characteristics, however, remained unaffected during Power Cycling. Furthermore, a High Temperature Storage test at 125 °C for 5000 h has shown no changes in the electrical performance of substrate integrated GaN HEMTs.
This paper proposes an ultra-low inductance half-bridge switching cell with substrate-integrated 650 V GaN bare dies. A vertical parallel-plate waveguide structure with 100 μm layer thickness results in a commutation loop inductance of 0.5 nH, producing a negligible drain-source voltage overshoot in the standard inductive-load pulse test. On the other hand, reliable circuit operation requires an assessment of the isolation strength of the thin dielectric layer in the main commutation loop, because critical high local electric fields might occur between the pads. Measurements of the dielectric breakdown voltage followed by a statistical failure analysis provide a characteristic life of 14.7 kV and a 10% quantile of 13.5 kV in the Weibull-fitted data. This characteristic life depends strongly on the ambient temperature and drops to 4.1 kV at 125 °C. Additionally, ageing tests show an increase in dielectric breakdown voltage after 500 h, 1000 h and 2000 h of high-temperature storage at 125 °C due to resin densification processes.
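A minimal version of such a Weibull failure analysis can be sketched with SciPy; the breakdown-voltage sample below is hypothetical and merely mirrors the reported magnitudes.

```python
# A minimal Weibull fit sketch for dielectric breakdown voltages;
# the sample values are hypothetical.
import numpy as np
from scipy import stats

breakdown_kv = np.array([12.9, 13.8, 14.2, 14.6, 14.9, 15.1, 15.4, 15.8])
shape, loc, scale = stats.weibull_min.fit(breakdown_kv, floc=0)  # 2-parameter fit

print(f"shape (beta) = {shape:.1f}, characteristic life (eta) = {scale:.1f} kV")
print(f"10% quantile = {stats.weibull_min.ppf(0.10, shape, loc=0, scale=scale):.1f} kV")
```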
A discrete-time design method for a robust current controller of a servo drive has been developed. It takes the sampling time, the processing dead time and the dynamic behavior of the A/D converter into account. The theoretical calculations are verified using a test stand for high dynamics. The test stand includes a voice coil motor and power electronics with Gallium Nitride (GaN) power semiconductors for switching frequencies of more than 100 kHz. The bandwidth of the current control loop can thus be improved from the 1 kHz to 1.5 kHz typically achieved with insulated-gate bipolar transistor (IGBT) power semiconductors in state-of-the-art motion control systems to 10 kHz and more.
A calculation method for a robust servo controller design depending on the sampling time and the processing dead time was developed for mechanically stiff drives. With a test stand for high dynamics and high positioning accuracy, the theoretical calculations for the large bandwidth improvements are verified. The test stand includes a voice coil motor and power electronics with Gallium Nitride (GaN) power semiconductors for switching frequencies of more than 100 kHz.
Objective
To validate the International Classification of Functioning, Disability and Health (ICF) Generic-6 in daily routine clinical practice in Mainland China. Specific objectives were to analyze (1) interrater reliability, (2) convergent validity, (3) known group validity, and (4) predictive validity of the ICF Generic-6.
Design
Multicenter prospective cohort study.
Setting
Fifty hospitals from 20 provinces of Mainland China.
Participants
A total of 4510 patients from departments of rehabilitation, orthopedics, neurology, cardiology, pneumology, and cerebral surgery of the participating hospitals with different health conditions were included in this study.
Intervention
Not applicable.
Main Outcome Measures
The assessment was undertaken by nurses with the ICF Generic-6 in combination with a numeric rating scale. Interrater reliability was evaluated with intraclass correlation coefficients (ICC). Convergent validity was evaluated with Spearman correlation coefficients between the ICF Generic-6 and Medical Outcomes Short Form (SF)-12 items. Known group validity was examined by comparing discharge scores between different discharge destinations. Predictive validity was determined by using ICF Generic-6 baseline scores for estimating length of hospital stay with a log-logistic survival model with gamma shared frailty, and cost of in-hospital treatment with a mixed-effects generalized linear regression model of the gamma family.
Results
The interrater reliability of the items and score of the ICF Generic-6 was good, with ICCs ranging from 0.67 to 0.87. ICF Generic-6 items were further correlated with the respective SF-12 items. Discharge scores of patients differed significantly by discharge destination. The ICF Generic-6 admission score was a significant predictor of length of stay and treatment cost.
Conclusions
The ICF Generic-6 administered in combination with a 0-10 numeric rating scale is a reliable and valid tool for the collection of minimal information on functioning across various clinical settings.
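For reference, interrater reliability of the kind reported here can be computed from a targets-by-raters matrix as in the sketch below, which implements ICC(2,1) from the Shrout and Fleiss taxonomy; the ratings are hypothetical, and the exact ICC variant used in the study is not stated in this abstract.

```python
# A minimal ICC(2,1) sketch (two-way random effects, absolute agreement,
# single rater) from a targets x raters matrix of hypothetical ratings.
import numpy as np

Y = np.array([[2, 3], [4, 4], [1, 2], [5, 4], [3, 3], [0, 1]], float)
n, k = Y.shape
grand = Y.mean()
MSR = k * ((Y.mean(1) - grand) ** 2).sum() / (n - 1)   # between targets
MSC = n * ((Y.mean(0) - grand) ** 2).sum() / (k - 1)   # between raters
SSE = ((Y - grand) ** 2).sum() - (n - 1) * MSR - (k - 1) * MSC
MSE = SSE / ((n - 1) * (k - 1))                        # residual
icc_2_1 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
print(f"ICC(2,1) = {icc_2_1:.2f}")                     # ~0.86 for this toy data
```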
Background: The International Classification of Functioning, Disability and Health is the international standard for describing and monitoring functioning. While the categories, the units of the classification, were not designed with measurement in mind, the hierarchical structure of the classification lends itself to the possibility of summating categories into some higher order domain. Focusing on the chapters of d4 Mobility, d5 Self-Care and d6 Domestic Life, this study seeks to ascertain if qualifiers rating of categories (0-No problem to 4-Complete problem) within those chapters can be summated, and whether such derived measurement is consistent with estimates obtained from well-known instruments which purport to measure the same constructs.
Methods: The current study applies secondary analysis to data previously collected in the context of validating Core Sets for stroke, rheumatoid arthritis, and osteoarthritis. Data included qualifier-based ratings of the categories in the Core Sets, the physical functioning sub-scale of the Short-Form 36, and the World Health Organization Disability Assessment Schedule 2.0. To examine qualifier-comparator scale item agreement, Kappa statistics were used. To identify whether appropriate gradients of the comparator scales were observed across qualifier levels, an Independent Sample Median Test of the ordinal scores was deployed. To investigate the internal validity of the summated ICF categories, the Rasch model was applied.
Results: Data from 2,927 subjects from Europe, Australasia, the Middle East and South America were available for analysis; 36.3% had experienced a stroke, 35.8% had osteoarthritis, and 27.9% had rheumatoid arthritis. The items from the Short-Form 36 could not be matched directly to the qualifier categories, as the former had only 3 response options. The Kappa between World Health Organization Disability Assessment Schedule 2.0 items and categories was low. For all qualifiers, a significant (p < 0.001) overall gradient was observed across the comparator scales. Only in a few of the World Health Organization Disability Assessment Schedule 2.0 items could no discrete level be detected. The aggregation of the qualifiers at the chapter and higher-order levels mostly revealed fit to the Rasch model. Almost all ICF qualifiers showed ordered thresholds, suggesting that the current structure and response options of the qualifiers worked as intended.
Conclusions: The findings of this study provide supporting evidence for the use of the professionally rated categories and associated qualifiers to measure functioning.
Implications for Rehabilitation
- This study provides evidence that functioning data can be collected directly with the International Classification of Functioning, Disability and Health (ICF) by using the ICF categories as items and the ICF qualifiers as rating scale.
- The findings of this study show that the aggregated ratings of ICF categories from the chapters d4 Mobility, d5 Self-care, and d6 Domestic life capture a broader spectrum of the construct than the corresponding summated items from the SF36-Physical Function sub-scale and the corresponding items of the World Health Organization Disability Assessment Schedule 2.0.
- This study illustrates the potential of building quantitative measurement by aggregating ICF categories and their qualifier ratings into meaningful domains.
Abstract
Objective: Since the 1990s, the Functional Independence Measure (FIM™) has been believed to measure 2 different constructs, represented by its motor and cognitive subscales. The practice of reporting FIM™ total scores, together with recent developments in the understanding of the influence of locally dependent items on fit to the Rasch model, raises the question of whether the FIM™ 18-item version can be reported as a unidimensional, interval-scaled metric.
Design: Rasch analysis of the FIM™ using testlet approaches to accommodate local response dependency.
Patients: A calibration sample containing 946 cases of data from 11,103 patients undergoing neurological or musculoskeletal rehabilitation in Switzerland in 2016.
Results: Baseline analysis and the traditional testlet approach showed no fit with the Rasch model. When items were grouped into 2 testlets, fit to the Rasch model was achieved, indicating unidimensionality across all 18 items. A transformation table to convert FIM™ raw ordinal scores to the corresponding Rasch interval scaled values was created.
Conclusion: This study provides evidence that FIM™ total scores represent a unidimensional set of items, supporting their use in clinical practice and outcome reporting when applying the respective transformation table. This provides a basis for standardized reporting of functioning.
Lay Abstract
The aim of this study was to look in detail at the FIM™, an assessment tool often used for patients undergoing rehabilitation. Some users report the FIM™ as 2 scores: one related to motor tasks, the other to cognitive tasks; others recommend reporting it as a single score including both motor and cognitive tasks. This study explored whether it is statistically meaningful to sum all the points into a single FIM™ total score. The results support the current practice of summing the points into a single total score for patients undergoing musculo-skeletal and neurological rehabilitation. The results also allowed an interval scale to be derived from the FIM™, enabling a broad range of calculations to be made using the FIM™ score, such as calculating the change in FIM™ outcomes from the time a patient is admitted to a rehabilitation clinic until their discharge.
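For readers unfamiliar with the underlying model: in the dichotomous Rasch model, the response probability depends only on the difference between person ability θ and item difficulty b, which is what yields an interval-scaled logit metric. The sketch below is this textbook form with hypothetical values; the FIM™ analysis uses the polytomous extension plus testlets.

```python
# A minimal sketch of the dichotomous Rasch model; theta and b values
# are hypothetical.
import math

def rasch_prob(theta, b):
    """P(X = 1 | theta, b) under the dichotomous Rasch model."""
    return 1 / (1 + math.exp(-(theta - b)))

for theta in (-1.0, 0.0, 1.0):                 # three persons of rising ability
    print([round(rasch_prob(theta, b), 2) for b in (-1.0, 0.0, 1.0)])
# Probabilities rise with ability and fall with difficulty, item by item.
```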
Background
Limitations in upper limb functioning are common in Musculoskeletal disorders and the Disabilities of the Arm, Shoulder and Hand scale (DASH) has gained widespread use in this context. However, various concerns have been raised about its construct validity and so this study seeks to examine this and other psychometric aspects of both the DASH and QuickDASH from a modern test theory perspective.
Methods
Participants in the study were eligible if they had a confirmed diagnosis of Rheumatoid Arthritis (RA). They were mailed a questionnaire booklet which included the DASH. Construct validity was examined by fit to the Rasch measurement model. The degree of precision of both the DASH and QuickDASH were considered through their Standard Error of Measurement (SEM).
Results
Three hundred and thirty-seven subjects with confirmed RA took part, with a mean age of 62.0 years (SD 12.1); 73.6% (n = 252) were female. The median standardized score on the DASH was 33 (IQR 17.5–55.0). Significant misfit of the DASH and QuickDASH was observed but, after accommodating local dependency among items in a two-testlet solution, satisfactory fit was obtained, supporting the unidimensionality of the total sets and the sufficiency of the raw (ordinal or standardized) scores.
Conclusion
Having accommodated local response dependency in the DASH and QuickDASH item sets, their total scores are shown to be valid, given they satisfy the Rasch model assumptions. The Rasch transformation should be used whenever all items are used to calculate a change score, or to apply parametric statistics within an RA population.
Significance and innovations
Most previous modern psychometric analyses of both the DASH and QuickDASH have failed to fully address the effect of a breach of the local independence assumption upon construct validity.
Accommodating this problem by creating ‘super items’ or testlets, removes this effect and shows that both versions of the scale are valid and unidimensional, as applied with a bi-factor equivalent solution to an RA population.
The Standard Error of Measurement of a scale can be biased by failing to take into account local dependency in the data, which inflates reliability and thus makes the SEM appear better (i.e. smaller) than its true, unbiased value (see the sketch below).
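The arithmetic behind that caveat is short: SEM = SD · sqrt(1 − r), so an inflated reliability r directly shrinks the apparent SEM. The numbers below are hypothetical.

```python
# A minimal SEM sketch: inflated reliability (e.g. from ignored local
# dependency) understates measurement error; sample SD is hypothetical.
import math

sd = 22.0                     # SD of standardized DASH scores (hypothetical)
for r in (0.96, 0.90):        # inflated vs. dependency-corrected reliability
    print(f"r = {r}: SEM = {sd * math.sqrt(1 - r):.1f} points")
```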
Background: The Extended Barthel Index (EBI), consisting of the original Barthel Index plus 6 cognitive items, provides a tool to monitor patients’ outcomes in rehabilitation. Whether the EBI provides a unidimensional metric, thus can be reported as a valid sum-score, remains to be examined.
Objective: To examine whether the EBI can be reported as unidimensional interval-scaled metric for neurological and musculoskeletal rehabilitation.
Methods: Rasch analysis of a calibration sample of 800 cases from neurological or musculoskeletal rehabilitation in 2016 in Switzerland.
Results: In the baseline analysis, no fit to the Rasch model was achieved. When local dependencies were accommodated with a testlet approach, satisfactory fit to the Rasch model was achieved, and an interval scale transformation table was created.
Conclusion: The results support the reporting of adapted EBI total scores for both rehabilitation groups by applying the interval scaled transformation table presented in this study.
Study design
Mapping of the National Spinal Cord Injury Model System (SCIMS) Database (NSCID) to the International Classification of Functioning, Disability and Health (ICF).
Objectives
To link the content of the latest two versions of the NSCID to the ICF; more specifically (1) to compare the content of the current NSCID 2016–2021 version to its predecessor (NSCID 2011–2016) using the ICF as a neutral reference framework, and (2) to compare the content contained in the NSCID 2016–2021 version with relevant ICF Sets.
Setting
The forms of the NSCID 2016–2021 and 2011–2016 versions were linked to the ICF and contrasted. Comparability of the current version of the NSCID with the ICF Core Set for Spinal Cord Injury (SCI) in the post-acute and long-term context and with the two generic ICF Sets, the ICF Generic-7 and the ICF Generic-30, was then examined.
Methods
ICF Linking Rules and descriptive statistics.
Results
The current NSCID 2016–2021 version covers functioning as classified in the ICF more comprehensively than its predecessor does, with 8 additional ICF categories. More than 50% of the ICF categories contained in the two ICF Generic Sets were covered. The coverage of the brief ICF Core Sets for SCI by the NSCID 2016–2021 was more than 50%, but the coverage of the comprehensive Core Sets was low. Results showed the best coverage in the ICF component Activities and Participation.
Conclusions
This study emphasizes how the ICF and its Sets can serve as a reference framework to foster comparability of existing data sets from both clinical practice and research.
LFT – "Hype or Hope"
(2019)
The aim of the project is to develop a method for on-site electrical performance analyses of photovoltaic (PV) systems. To this end, the electrical parameters of PV strings are measured locally and globally using a new field laboratory approach with indoor laboratory accuracy. The new measurement method complies with standards and will help to advance the standardization of field measurements. The project synthesizes established PV measurement methods (IR thermography, I-V string measurement, digital data processing by means of self-referencing) into an innovative overall concept. The development of the measurement method is driven forward on specially aged PV systems of market-dominating technologies (c-Si, CdTe and CIGS reference PV systems). In this way, the method is trained and sharpened to detect specific degradation scenarios and marginal quantitative parameter changes.
Outdoor performance analyses of photovoltaic modules can be advantageous compared to indoor investigations, as they take into account the influences of natural test conditions on the modules. However, such outdoor performance assessments usually suffer from poor accuracy due to undefined test conditions for the modules. This paper reports on a comprehensive concept for improved outdoor analysis which results in performance data with indoor laboratory precision. The approach delivers current-voltage characteristics for even more test conditions than required by the standard IEC 61853-1. Hence, curves of the modules' electrical parameters as a function of irradiance can be deduced for any temperature. The concept allows precise determination of temperature coefficients for user-defined irradiances, taking into account outdoor effects like light-soaking or light-induced degradation. The calibration and measurement uncertainty of the presented outdoor analysis method is evaluated quantitatively. For the measurements, an advanced outdoor set-up was used.
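One of the deduced quantities, a power temperature coefficient at fixed irradiance, reduces to a linear fit once the matrix of operating points is available. The sketch below uses hypothetical data points; the paper's uncertainty treatment is far more involved.

```python
# A minimal temperature-coefficient sketch: linear fit of Pmax over module
# temperature at fixed irradiance, normalized to the 25 degC value.
# All data points are hypothetical.
import numpy as np

temp_c = np.array([15, 25, 35, 45, 55])          # module temperature (degC)
pmax_w = np.array([312, 300, 288, 276, 264])     # Pmax at 1000 W/m^2

slope, intercept = np.polyfit(temp_c, pmax_w, 1) # W per K
p25 = slope * 25 + intercept                     # fitted Pmax at 25 degC
print(f"gamma = {100 * slope / p25:.2f} %/K")    # ~ -0.40 %/K here
```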
When monitoring the energy performance of buildings, it may be of interest to identify the occupation periods of people in the room due to their possible impact on the energy balance. In order to carry out a comprehensive energy assessment of the system and building, it is necessary to be able to classify user influence during the evaluation. This thesis investigates how the presence of people in a room can be determined cost-effectively and with little additional effort. The aim is to determine which sensors of a room control system provide sufficiently reliable data. The presence of 1-2 persons was examined at a test facility of the Technical University Rosenheim. The air, mean radiant and surface temperatures, the air humidity, as well as the CO2 and VOC concentrations were measured. For the analysis, a method of supervised machine and statistical learning, random forest, is used. The smallest model error detected in predicting the presence of 1 or 2 persons from CO2 sensor data is 1.43%. The error rates are low for all tested models if time-dynamic effects are used as predictors and the data is processed in a so-called time-period form. Additionally, the ways in which this data should ideally be made available for future measurements and processed to facilitate analysis with machine and statistical learning techniques were investigated. A further goal is to apply the models developed on measurement series in laboratory environments to real rooms and to assess the transferability of these models.
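A minimal version of this classification pipeline, with synthetic CO2 data standing in for the test-facility measurements and a gradient as the time-dynamic feature, might look as follows; the feature choices here are illustrative, not the thesis's exact predictors.

```python
# A minimal random-forest presence-detection sketch on synthetic CO2 data;
# the data-generating assumptions are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
presence = rng.integers(0, 2, n)                    # 0 = empty, 1 = occupied
co2 = 420 + 300 * presence + rng.normal(0, 40, n)   # ppm, hypothetical
co2_slope = np.gradient(co2)                        # time-dynamic feature
X = np.column_stack([co2, co2_slope])

X_tr, X_te, y_tr, y_te = train_test_split(X, presence, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"error rate = {100 * (1 - clf.score(X_te, y_te)):.2f} %")
```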
Telematics in occupational medicine: practical experiences from a feasibility study of the BGHM
(2019)
We employed the well-established hierarchical Horton-Strahler stream-order (ω) scheme to investigate scaling of nutrient loads (P and N) from ~845 wastewater treatment plants (WWTPs) distributed along the river network of the urbanized Weser River, the largest national basin in Germany (~46,000 km²; ~8.4 million population). We estimated hydrologic and water quality impacts at the reach- and basin-scales at two steady river discharge conditions (median flow, QR50; low flow, QR90). Of the five WWTP class-sizes (1 ≤ k ≤ 5), ~68% discharge to small low-order streams (ω < 3). We found large variations in the capacity to dilute WWTP nutrient loads because of variability in (1) treated wastewater discharge (QU) within and among different class-sizes, and (2) river discharge (QR) within low-order streams (ω < 3) resulting from differences in drainage areas. For QR50, reach-scale water quality impairment assessed by nutrient concentration was likely at 136 (~16%) locations for P and 15 locations (~2%) for N. About 90% of these locations were lower-order streams (ω < 3). At QR50 and with dilution only, basin-scale cumulative nutrient loads from multiple upstream WWTPs increase the number of impaired locations to 266 (~32% of total) for P. Considering in-stream uptake decreased the number of P-impaired streams to 225 (~27%), suggesting the dominant role of dilution in the Weser River basin. The role of in-stream uptake diminished along the flow paths, while dilution in larger streams (4 ≤ ω ≤ 7) minimized the impact of WWTP loads. Under QR90 conditions [(QR50/QR90) ~ 2.5], the number of water-quality-impaired locations will likely double in the basin-scale analyses. Long-term water quality data suggested that diffuse sources are the primary contributors to water quality impairments in large streams. Our data-modeling synthesis approach is transferable to other urbanized river basins and extends understanding of point-source impacts on water quality across spatial scales.
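The dilution logic at a single outfall reduces to a mass balance, as in the hypothetical sketch below: the same load that is negligible in a higher-order stream can impair a small low-order stream, which is why most flagged locations have ω < 3.

```python
# A minimal mass-balance sketch of dilution below a WWTP outfall;
# all discharges and loads are hypothetical.
def downstream_conc(load_kg_d, q_river_m3_s, q_effluent_m3_s):
    """Fully mixed concentration (mg/L) below the outfall, dilution only."""
    q_total = q_river_m3_s + q_effluent_m3_s   # m^3/s
    load_mg_s = load_kg_d * 1e6 / 86400        # kg/d -> mg/s
    return load_mg_s / (q_total * 1000)        # mg per liter

# Small low-order stream vs. larger stream receiving the same P load:
for qr in (0.05, 5.0):                         # m^3/s
    c = downstream_conc(load_kg_d=2.0, q_river_m3_s=qr, q_effluent_m3_s=0.02)
    print(f"QR = {qr} m3/s -> P = {c:.3f} mg/L")
```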