First Publications
Musculoskeletal research questions regarding the prevention or rehabilitation of the hand can be addressed using inverse dynamics simulations when experiments are not possible. To date, no complete human hand model implemented in a holistic human body model has been fully developed. The aim of this work was to develop, implement, and validate a fully detailed hand model using the AnyBody Modelling System (AMS) (AnyBody, Aalborg, Denmark). To achieve this, a consistent multiple-cadaver dataset, including all extrinsic and intrinsic muscles, served as the basis. Various obstacle methods, including tori, cylinders, and spherical ellipsoids, were implemented to obtain correct alignment of the muscle paths over the full range of motion of the fingers. The origin points of the lumbrical muscles within the tendon of the flexor digitorum profundus added a unique feature to the model. Furthermore, the possibility of fully patient-specific scaling based on hand length and width was implemented in the model. For model validation, experimental datasets from the literature were used, comparing numerically calculated moment arms of the wrist, thumb, and index finger muscles. In general, the results showed good agreement between the model and the experimental data, although the extrinsic muscles showed higher accordance than the intrinsic ones. Nevertheless, the results indicate that the developed inverse dynamics hand model offers opportunities in a broad field of applications where the muscle and joint forces of the forearm play a crucial role.
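As an illustrative aside to the abstract above: in a generic linear scaling scheme, segment geometry can be scaled by the ratios of a subject's hand length and width to the reference (cadaver) dimensions. The Python sketch below shows that idea only; the reference values and axis assignments are assumptions, and the actual AnyBody (AnyScript) scaling law is not reproduced here.

```python
import numpy as np

# Reference hand dimensions of the cadaver dataset (mm) -- placeholder values,
# not taken from the paper.
REF_HAND_LENGTH = 190.0
REF_HAND_WIDTH = 85.0

def scaling_matrix(hand_length_mm, hand_width_mm):
    """Diagonal scaling matrix: the longitudinal axis scales with hand length,
    the two transverse axes with hand width (a simplifying assumption)."""
    s_long = hand_length_mm / REF_HAND_LENGTH
    s_trans = hand_width_mm / REF_HAND_WIDTH
    return np.diag([s_long, s_trans, s_trans])

def scale_points(points_xyz, hand_length_mm, hand_width_mm):
    """Scale segment-fixed points (e.g. muscle via points), given as an (N, 3) array."""
    return np.asarray(points_xyz) @ scaling_matrix(hand_length_mm, hand_width_mm).T
```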
After a fracture, mobilization is both a treatment goal and a pillar of therapy. Defining outcomes, however, involves considerable uncertainty, since assessments are not suitable for all patients: they can be subject to day-dependent influences and subjective bias. Sensor-based movement monitoring offers a complement for operationalizing walking ability. For longitudinal studies that are also conducted in the home environment, the daily step count is a suitable variable. It can be monitored with a commercially available fitness tracker.
We benchmark a new hybrid eye-tracker system against the DPI (Dual Purkinje Imaging) tracker and the Tobii Spectrum in a series of three experiments. In the first experiment, a within-subjects battery of tests, we show that the precision of the new eye-tracker is much better than that of both the DPI and the Spectrum, but that its accuracy is not. We also show that the new eye-tracker is insensitive to effects of pupil contraction on gaze direction (in contrast to both the DPI and the Spectrum), that it detects microsaccades on par with the DPI and better than the Spectrum, and that it may be able to record tremor. In the second experiment, sensors of the new eye-tracker were integrated into the optical path of the DPI bench. Simultaneous recordings show that saccade dynamics, post-saccadic oscillations, and measurements of translational movements are comparable to those of the DPI. In the third experiment, we show that the DPI and the new eye-tracker can detect 2 arcmin artificial-eye rotations while the Spectrum cannot. The results suggest that the new eye-tracker, in contrast to video-based P-CR systems [Holmqvist and Blignaut 2020], is suitable for studies that record small eye movements under varying ambient light levels.
Eye trackers are sometimes used to study miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analyzed by examining the temporal spectrum composition of the recorded gaze position signal, which allows its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial-eye data. These results suggest that the color in the eye-movement recordings was due to filters for all eye trackers except the most precise one, where it may partly reflect fixational eye movements. Researchers studying fixational eye movements should therefore carefully examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
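One common way to quantify the spectral color of a fixation recording is to fit a power law to the power spectral density of the gaze position signal: a log-log slope near zero indicates white noise, while clearly negative slopes indicate colored dynamics. The sketch below, assuming a uniformly sampled one-dimensional gaze trace and using scipy's Welch estimator, is illustrative only and not the analysis pipeline used in the paper.

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(gaze, fs):
    """Estimate the log-log slope of the power spectral density of a gaze trace.

    A slope near 0 suggests white noise; clearly negative slopes suggest a
    colored (temporally correlated) signal.
    """
    freqs, psd = welch(gaze, fs=fs, nperseg=min(1024, len(gaze)))
    mask = freqs > 0                         # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope

# Example: white noise vs. a cumulative-sum ("brown") signal sampled at 1000 Hz
rng = np.random.default_rng(1)
white = rng.normal(size=10_000)
brown = np.cumsum(white)
print(spectral_slope(white, fs=1000))   # close to 0
print(spectral_slope(brown, fs=1000))   # clearly negative (around -2)
```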
Characterizing gaze position signals and synthesizing noise during fixations in eye-tracking data
(2020)
The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
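As a rough illustration of the two precision measures named above, the following sketch computes RMS-S2S (root mean square of the sample-to-sample distances) and STD (standard deviation of the gaze positions, combined over both axes) for a window of gaze samples; the window handling and axis combination are assumptions for illustration, not the paper's exact procedure. For white positional noise the two are related by RMS-S2S ≈ √2 · STD, and deviations from that ratio are one way to characterize signal type.

```python
import numpy as np

def rms_s2s(x, y):
    """Root mean square of the sample-to-sample distances (same units as x, y)."""
    dx, dy = np.diff(x), np.diff(y)
    return np.sqrt(np.mean(dx**2 + dy**2))

def std_precision(x, y):
    """Standard deviation of the gaze position, combined over both axes."""
    return np.sqrt(np.std(x)**2 + np.std(y)**2)

# Example: 1000 samples of white positional noise around a fixation point (degrees)
rng = np.random.default_rng(0)
x = 10.0 + rng.normal(scale=0.02, size=1000)
y = 5.0 + rng.normal(scale=0.02, size=1000)
print(f"RMS-S2S: {rms_s2s(x, y):.4f} deg, STD: {std_precision(x, y):.4f} deg")
```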
Reading students' faces and body language, checking their worksheets, and keeping eye contact are key traits of teacher competence. The new technology of mobile eye tracking gives researchers the possibility to explore teaching from the viewpoint of teacher gaze, but also introduces many new methodological questions. The primary aim of this study was to investigate teachers' attention distribution over space: the number and durations of several types of their gazes, and how their gaze depends on students' gender, achievement, and position in the classroom. Results show that teacher gaze was distributed unevenly across both space and time. Teachers looked at the most-watched students 3 to 8 times more often than at the least-watched ones. Students sitting in the first row and the middle section received significantly more gaze than those sitting outside this zone. All three teachers made more single gaze visits (looking at a student but making no eye contact) than mutual gazes or gazes at student material. The three teachers' gaze distribution also varied substantially from lesson to lesson. Our results are important for understanding teacher behavior in real classrooms, but also point to the relevance of appropriate method design in future classroom studies with eye tracking.
When retrieving an image from memory, humans usually move their eyes spontaneously as if the image were in front of them. Such eye movements correlate strongly with the spatial layout of the recalled image content and function as memory cues that facilitate retrieval. However, it has so far been unclear how closely these imagery eye movements correspond to the eye movements made while looking at the original image. In this work, we first quantify the similarity between the eye movements made while recalling an image and those made while encoding the same image, and then investigate whether comparing such pairs of eye movements can be used for computational image retrieval. Our results show that computational image retrieval based on eye movements during spontaneous imagery is feasible. Furthermore, we show that such a retrieval approach can be generalized to unseen images.
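To make the idea of eye-movement-based retrieval concrete, the hedged sketch below compares an imagery scanpath against stored encoding scanpaths via fixation density maps and returns the best-matching image. The density-map correlation is one plausible similarity measure chosen for illustration; it is not necessarily the method used in the paper, and the function and variable names are made up.

```python
import numpy as np

def density_map(fixations, shape=(32, 32)):
    """Turn normalized (x, y) fixation coordinates into a 2D fixation density map."""
    fixations = np.asarray(fixations)
    hist, _, _ = np.histogram2d(fixations[:, 1], fixations[:, 0],
                                bins=shape, range=[[0, 1], [0, 1]])
    total = hist.sum()
    return hist / total if total > 0 else hist

def retrieve(imagery_fixations, encoding_sets):
    """Return the image id whose encoding scanpath best matches the imagery scanpath."""
    query = density_map(imagery_fixations).ravel()
    scores = {image_id: np.corrcoef(query, density_map(fix).ravel())[0, 1]
              for image_id, fix in encoding_sets.items()}
    return max(scores, key=scores.get), scores

# Hypothetical usage: coordinates normalized to [0, 1] image space
# best_id, scores = retrieve(imagery_fixations, {"img_a": fix_a, "img_b": fix_b})
```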
For evaluating whether an eye-tracker is suitable for measuring microsaccades, Poletti & Rucci (2016) propose that a measure called 'resolution' could be better than the more established root mean square of the sample-to-sample distances (RMS-S2S). Many open questions exist around the resolution measure, however. Resolution needs to be calculated using data from an artificial eye that can be turned in very small steps. Furthermore, resolution has an unclear and uninvestigated relationship to the RMS-S2S and STD (standard deviation) measures of precision (Holmqvist & Andersson, 2017, pp. 159-190), and there is another metric by the same name (Clarke, Ditterich, Drüen, Schönfeld, & Steineke, 2002), which instead quantifies the errors of amplitude measurements. In this paper, we present a mechanism, the Stepperbox, for rotating artificial eyes by arbitrary angles from 1′ (arcmin) upward. We then use the Stepperbox to find the minimum reliably detectable rotations in 11 video-based eye-trackers (VOGs) and the Dual Purkinje Imaging (DPI) tracker. We find that resolution correlates significantly with RMS-S2S and, to a lesser extent, with STD. In addition, we find that although most eye-trackers can detect some small rotations of an artificial eye, rotations with amplitudes up to 2° are frequently measured erroneously by video-based eye-trackers. We show evidence that the corneal reflection (CR) feature of these eye-trackers is a major cause of erroneous measurements of small rotations of artificial eyes. Our data strengthen the existing body of evidence that video-based eye-trackers produce errors that may require us to reconsider some results from research on reading, microsaccades, and vergence, where the amplitude of small eye movements has been measured with past or current video-based eye-trackers. In contrast, the DPI reports correct rotation amplitudes down to 1′.
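A minimal sketch of one way to test whether a given artificial-eye step rotation is reliably recovered from a recording: compare the gaze position before and after the step against the noise level and against the known amplitude. The detection criterion used here (measured step above three times the noise and within a tolerance of the true amplitude) is an assumption for illustration, not the resolution definition from the paper.

```python
import numpy as np

def step_detectable(pre, post, true_amplitude_arcmin, tolerance=0.5):
    """Check whether a known artificial-eye step shows up correctly in the gaze signal.

    pre, post: 1D arrays of gaze positions (arcmin) before and after the step.
    Returns (measured_amplitude, detected).
    """
    measured = np.mean(post) - np.mean(pre)
    noise = np.sqrt(np.var(pre) / len(pre) + np.var(post) / len(post))  # SE of the difference
    above_noise = abs(measured) > 3 * noise
    amplitude_ok = abs(measured - true_amplitude_arcmin) <= tolerance * true_amplitude_arcmin
    return measured, above_noise and amplitude_ok
```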
We present a sensitive UV LED photoacoustic setup for the detection of gaseous acetone and discuss its applicability to breath analysis. We investigated the performance of the sensor for low acetone concentrations down to 0.1 parts per million (ppmV). The influences of temperature, flow, pressure, optical power, and LED duty cycle on the measured signal were examined. To gain a better understanding of the different effects on the photoacoustic signal, correlation analysis was applied and feature importance was determined using a large measured dataset. Furthermore, the cross-sensitivities towards O2, CO2, and H2O were studied extensively. Finally, we investigated the sensor's performance in detecting acetone between 0.1 and 1 ppmV within gas mixtures simulating breath exhale conditions. With a limit of detection (LoD) of 12.5 parts per billion (ppbV) (3σ) measured under typical breath exhale gas mixture conditions, the sensor demonstrated high potential for acetone detection in human breath analysis.
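The 3σ limit of detection quoted above is conventionally estimated from the noise of blank measurements and the sensitivity (slope) of a linear calibration. The short sketch below shows only that calculation, with made-up numbers; it is not the calibration procedure or data from the paper.

```python
import numpy as np

def limit_of_detection(blank_signal, concentrations, signals):
    """3-sigma limit of detection from blank noise and a linear calibration curve."""
    sigma_blank = np.std(blank_signal, ddof=1)          # noise of the blank measurements
    slope, _ = np.polyfit(concentrations, signals, 1)   # sensitivity (signal per ppmV)
    return 3 * sigma_blank / slope                      # LoD in ppmV

# Hypothetical example values (arbitrary signal units and ppmV, not measured data)
blank = np.array([0.98, 1.02, 1.01, 0.99, 1.00])
conc = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
sig = 1.0 + 40.0 * conc
print(f"LoD = {limit_of_detection(blank, conc, sig) * 1000:.1f} ppbV")
```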