The use of multimedia can significantly improve the quality of case studies, especially with regard to their presentation of reality. The development of multimedia case studies poses a challenge of both a creative and a technical nature. This paper describes the various stages of the development of the case study itself as well as an action model which supports the application of didactical aims in a multimedia case study.
This contribution describes experiences with a multimedia case study used at the Department of Information Systems for training students in data processing for business purposes. The report includes a description of how the case study was integrated as a didactic element in a university course, with special emphasis being given to theoretical aspects of presentation and learning. Additionally, a description of the case study and its development rounds off the article. The experiences were gained within the framework of an explorational, empirical study whose results are presented at the end of this paper and form the basis of suggestions for how the case study could be developed further.
The result of this work is an interdisciplinary synopsis of the subject of "multimedia learning and mass information systems" and, building on it, the presentation of a process model for constructing such systems and the embedding of this model in an existing system planning concept.
To this end, the work first delimits and defines the term multimedia on the basis of general criteria of human-machine-human communication. The result is a taxonomy for classifying scientific research areas in this field. In a second step, learning-theoretical concepts are examined by means of a five-dimensional grid. The result is two models that are suitable for electronically supported knowledge transfer in general. The third step deals with information presentation and interaction possibilities at the human-machine interface. For this purpose, a consistent grid is developed, against which perceptual-psychological and technical parameters of information presentations and interaction possibilities are examined. The result is the description of a process, adapted from the theater, film, and television industry, for creating a multimedia script, the storyboard. The fourth step describes the combination of the learning-theoretical concepts with the storyboarding process for multimedia applications and their embedding in a system planning procedure. The result is a phase concept for the system planning of multimedia learning and mass information systems.
The use of multimedia can significantly improve the quality of case studies, especially with regard to their presentation of reality. The development of multimedia case studies poses a challenge of both a creative and a technical nature. This paper describes the various stages of the development of the case study itself as well as an action model which supports the application of didactical aims in a multimedia case study. This paper then describes experiences with a multimedia case study used at the Department of Information Systems for training students in data processing for business purposes. The report includes a description of how the case study was integrated as a didactic element in a university course, with special emphasis being given to theoretical aspects of presentation and learning. Additionally, a description of the case study and its development rounds off the article. The experiences were gained within the framework of an explorational, empirical study whose results are presented at the end of this paper and form the basis of suggestions for how the case study could be developed further.
This work is a guide to planning and building IT systems that also include film, speech, music, animation, and virtual worlds, i.e. the full range of multimedia presentations. Such programs are mainly used in three areas: games (e.g. racing and adventure games), education and training (e.g. multimedia learning programs), and electronically supported information systems (e.g. presentations, product demonstrations, and product catalogs, also on CD-ROM, as well as kiosk systems such as information terminals at trade fairs).
The work focuses primarily on learning systems. The difficulty in using multimedia presentations such as film, music, etc. is that aspects such as direction, dramaturgy, and psychological and didactic considerations flow into the "dry" programming. An interdisciplinary perspective is therefore particularly important when developing multimedia systems. The work thus approaches the problem not from a technology-centered point of view (although technical aspects are of course not neglected) but from a user-oriented one.
The work is relevant to three areas: first, to science (and the philosophy of science); second, to the education and training sector; and third, to developers of multimedia systems.
From the perspective of science, the work provides a clear classification of multimedia fields of work. Possible future research areas are identified, and conceptual ambiguities are exposed and resolved. Furthermore, the interdisciplinary breadth of the topic is described, giving a comprehensive account of the problem area. In this sense, the work can almost be regarded as a reference volume.
From the perspective of the education and training sector, this work serves as a summary of existing learning concepts, from which an independent, simple grid for multimedia systems is developed. Experiences from several multimedia programs that the author (co-)developed and evaluated at the Institut für Wirtschaftsinformatik are incorporated. One finding is that the user's prior knowledge of the subject to be learned plays a decisive role: learning systems for beginners must be designed differently from those for advanced learners. How this is done is described in the work.
From the perspective of developers of multimedia systems, the work is relevant because it identifies the differences between developing such systems and traditional programs and offers a procedure for doing so. The procedure is a kind of multimedia script, referred to as a storyboard. The work describes how the individual dramaturgical, didactic, and psychological aspects are incorporated into the storyboard and how the system can finally be deployed in a suitable environment.
This paper is an interdisciplinary synopsis of multimedia learning systems and mass information systems; it presents an action model for the construction of such systems and the embedding of this model in an existing concept of system planning.
The first step of the paper is the delimitation and definition of the term 'multimedia' on the basis of general criteria of human-machine-human communication. The result is a taxonomy for classifying scientific research areas in this field.
The second part examines different concepts of learning theory by means of a five-dimensional grid. The outcome of this examination is two models suitable for computer-based learning in general.
The third part discusses information presentation and interaction possibilities at the human-machine interface. For this purpose, a consistent grid is developed, against which perceptual-psychological and technical parameters of information presentations and interaction possibilities are examined. This results in so-called storyboarding, a process adapted from theater, film, and TV for developing a multimedia script.
The combination of the learning-theoretical concepts with the storyboarding process for multimedia applications and the embedding in a system planning action model is described in the fourth part. The result of this last part is a phase concept for the system planning of multimedia learning systems and mass information systems.
Create, query, and manage high-performance SQL Server 7 databases with the hot new release of this scalable database management system.
SQL Server 7: A Beginner's Guide explains enterprise database fundamentals and shows you how to take advantage of the new features of SQL Server 7, such as improved space management, new data transformation services, and data warehousing.
From installation to query optimization to security to data warehousing, this hands-on guide provides everything you need to know to get up and running smoothly on SQL Server 7.
Multi-tier architectures with SQLJ and Enterprise JavaBeans. Database connectivity and the creation of components are essential parts of Java applications. Among Java interfaces, the new ANSI standard SQLJ for relational and object-relational databases is attracting increasing attention. For multi-tier architectures, componentization is moreover indispensable, and here EJB is becoming a de facto standard. After a basic introduction, the second part of the book covers Java interfaces to databases: JDBC, ODMG, and SQLJ. How EJB components are built and access Java databases is described in the third part.
We describe an Augmented Reality system using the corners of a color cube for camera calibration. In the augmented image the cube is replaced by a computer generated virtual object.
The cube is localized in an image by the CSC color segmentation algorithm. The camera projection matrix is estimated with a linear method that is followed by a nonlinear refinement step.
Because of possible misclassifications of the segmented color regions and the minimum number of point correspondences used for calibration, the estimated pose of the cube may be very erroneous for some frames; we therefore perform outlier detection and treatment in order to render the virtual object in an acceptable manner.
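As a rough illustration of the linear estimation step mentioned above (not the authors' implementation), a projection matrix can be obtained from 3-D/2-D correspondences of the cube corners with the direct linear transform (DLT) and then refined by minimizing the reprojection error:

# Minimal sketch: DLT estimate of a 3x4 camera projection matrix from
# 3-D/2-D point correspondences, plus the reprojection error that a
# subsequent nonlinear refinement step would minimize.
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Linear (DLT) estimate of P with x ~ P X for corresponded points."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def reprojection_error(P, points_3d, points_2d):
    """Mean back-projection error of the estimated matrix."""
    X = np.hstack([np.asarray(points_3d, float), np.ones((len(points_3d), 1))])
    proj = (P @ X.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.mean(np.linalg.norm(proj - np.asarray(points_2d, float), axis=1))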
An Augmented Reality (AR) system is presented that uses a colored cube for camera calibration; the cube also serves as a placeholder for a virtual object.
To detect the cube in a scene, color image processing methods are used, such as the Color Structure Code (CSC) and the classification of the resulting regions by their color. A hierarchical procedure is employed to speed up the segmentation.
The application areas typically envisaged for service robots, e.g. hospitals or retirement homes, place very high demands on the human-machine interface.
These requirements generally exceed the capabilities of standard sensors such as ultrasonic or infrared sensors. Complementary methods must therefore be employed.
From the perspective of pattern recognition, the use of computer vision and natural-language dialogue is of particular interest. This contribution introduces the mobile system MOBSY, a fully integrated autonomous mobile service robot.
It acts as an automatic dialogue-based reception service for visitors to our institute.
MOBSY combines diverse methods from a wide range of research areas in a single stand-alone system. The computer vision methods employed range from object classification over visual self-localization and recalibration to multi-ocular object tracking.
The dialogue component comprises methods for speech recognition, speech understanding, and answer generation. The contribution describes the task to be fulfilled and the individual methods.
MOBSY is a fully integrated autonomous mobile service robot system.
It acts as an automatic dialogue based receptionist for visitors of our institute. MOBSY incorporates many techniques from different research areas into one working stand-alone system. Especially the computer vision and dialogue aspects are of main interest from the pattern recognition’s point of view.
In short, the techniques involved range from object classification over visual self-localization and recalibration to object tracking with multiple cameras. A dialogue component has to deal with speech recognition, understanding, and answer generation. Further techniques needed are navigation, obstacle avoidance, and mechanisms to provide fault-tolerant behavior.
This contribution introduces our mobile system MOBSY. Among the main aspects vision and speech, we focus also on the integration aspect, both on the methodological and on the technical level. We describe the task and the involved techniques.
Finally, we discuss the experiences that we gained with MOBSY during a live performance at the 25th anniversary of our institute.
Komponenten in J2EE-Patterns
(2002)
This article presents a concept for an online lecture within the Virtuelle Hochschule Bayern.
First, the subject matter to be taught is briefly outlined.
After this introduction, the transformation of the content into a form appropriate to the new medium of the Internet is discussed.
This concerns above all the techniques selected for knowledge transfer.
Particular emphasis is placed on the possibility of deepening the knowledge experimentally through interactivity, thus establishing a connection between what has been learned in theory and what has been experienced in practice.
Furthermore, a variety of communication options is presented, since communication is an important social component of the learning process and is often decisive for its success.
We present an approach for nonlinear optimization of the parameters of an endoscopic camera mounted on a surgery robot. The goal is to generate a depth map for each image in order to enhance the quality of medical light fields.
The pose information provided by the robot is used as an initialization, where especially the orientation is inaccurate. Refinement of intrinsic and extrinsic camera parameters is performed by minimizing the back-projection error of 3-D points that are reconstructed by triangulation from image features tracked over an image sequence.
Optimization of the camera parameters results in an enhancement of rendering quality in two ways: more accurate parameters lead to better interpolation as well as to better depth maps for approximating the scene geometry.
This work presents a technique for computing dense disparity maps from a binocular stereo camera system. The methods are applied in an Augmented Reality setting for combining real and virtual worlds with proper occlusions. The proposed stereo correspondence technique is based on area matching and facilitates an efficient strategy by using the concept of a three-dimensional similarity accumulator, whereby occlusions are detected and object boundaries are extracted correctly. The main contribution of this paper is the way we fill the accumulator: using absolute differences of images and computing a mean filter on these difference images. This is where the main advantages of the accumulator approach can be exploited, since all entries can be computed in parallel and thus extremely efficiently. Additionally, we perform an asymmetric correction step and a post-processing of the disparity maps that maintains object edges.
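An illustrative sketch of the general accumulator idea described above (mean-filtered absolute differences per disparity, with each slice independent and therefore parallelizable) might look as follows; rectified grayscale float images and a positive disparity convention are assumed, and this is not the authors' exact implementation:

# Sketch of a similarity accumulator from absolute differences + mean filter.
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_accumulator(left, right, max_disp, window=9):
    h, w = left.shape
    acc = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, :w - d])   # absolute difference image
        cost = uniform_filter(diff, size=window)         # mean filter = box aggregation
        acc[d, :, d:] = cost                              # slices are independent (parallelizable)
    return acc

def disparity_map(left, right, max_disp, window=9):
    # winner-takes-all disparity per pixel
    return np.argmin(disparity_accumulator(left, right, max_disp, window), axis=0)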
In this paper we address the problem of using quaternions in unconstrained nonlinear optimization of 3-D rotations. Quaternions representing rotations have four elements but only three degrees of freedom, since they must be of norm one.
This constraint has to be taken into account when applying e.g. the Levenberg-Marquardt algorithm, a method for unconstrained nonlinear optimization widely used in computer vision. We propose an easy to use method for achieving this.
Experiments using our parametrization in photogrammetric bundle adjustment are presented at the end of the paper.
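One simple way to handle the unit-norm constraint in an unconstrained Levenberg-Marquardt setting is to normalize the quaternion inside the residual function, so the optimizer effectively searches over unit quaternions; this is only an illustration of the general problem, not the parametrization proposed in the paper:

# Sketch: quaternion-based rotation inside an unconstrained LM optimization.
import numpy as np
from scipy.optimize import least_squares

def quat_to_rot(q):
    w, x, y, z = q / np.linalg.norm(q)      # enforce unit norm here
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def residuals(q, src, dst):
    """Point-alignment residuals for a pure rotation between two 3-D point sets."""
    return ((quat_to_rot(q) @ src.T).T - dst).ravel()

# Usage sketch:
# q0 = np.array([1.0, 0.0, 0.0, 0.0])
# result = least_squares(residuals, q0, args=(src_points, dst_points), method="lm")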
Components and component-based technologies (componentware) are well-known and widely used in software development. There is a large amount of work and research in componentware. The number of available componentware approaches increases steadily and it is quite difficult to keep track of current trends in this area. In this paper, we survey the current state of the art in componentware, introduce and compare several well-known componentware approaches and classify them according to outstanding characteristics. We discuss a list of open issues in research and practical use of componentware and offer some proposals for further development. In our practical considerations we focus on embedded systems and business information systems because most of our partners in industry work in one of these two domains. We hope to start a broader discussion on componentware and to reach a common understanding of which open issues are most important in research and industry (as a research agenda).
There are many nearest neighbor algorithms tailor-made for ICP, but most of them require special input data like range images or triangle meshes.
We focus on efficient nearest neighbor algorithms that do not impose this limitation, and thus can also be used with 3-D point sets generated by structure-from-motion techniques. We briefly present the evaluated algorithms and introduce the modifications we made to improve their efficiency.
In particular, several enhancements to the well-known k-D tree algorithm are described. The first part of our performance analysis consists of experiments on synthetic point sets, whereas the second part features experiments with the ICP algorithm on real point sets. Both parts are completed by a thorough evaluation of the obtained results.
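For readers unfamiliar with this class of search structures, the following minimal example shows the kind of general-purpose k-D tree nearest neighbor query discussed above, here via SciPy and with random placeholder point sets; the paper evaluates and modifies its own implementations:

# Sketch: nearest neighbor queries over an unstructured 3-D point set.
import numpy as np
from scipy.spatial import cKDTree

model = np.random.rand(10000, 3)     # stand-in for a 3-D model point set
scene = np.random.rand(2000, 3)      # stand-in for measured points

tree = cKDTree(model)                # build once, reuse in every ICP iteration
dists, idx = tree.query(scene, k=1)  # closest model point for each scene point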
This contribution introduces MOBSY, a fully integrated, autonomous mobile service robot system. It acts as an automatic dialogue-based receptionist for visitors to our institute.
MOBSY incorporates many techniques from different research areas into one working stand-alone system. The techniques involved range from computer vision over speech understanding to classical robotics.
Along with the two main aspects of vision and speech, we also focus on the integration aspect, both on the methodological and on the technical level.
We describe the task and the techniques involved. Finally, we discuss the experiences that we gained with MOBSY during a live performance at our institute.
This paper presents an approach for applying a dual quaternion hand–eye calibration algorithm on an endoscopic surgery robot. Special focus is on robustness, since the error of position and orientation data provided by the robot can be large depending on the movement actually executed.
Another inherent problem to all hand–eye calibration methods is that non–parallel rotation axes must be used; otherwise, the calibration will fail.
Thus we propose a method for increasing the numerical stability by selecting an optimal set of relative movements from the recorded sequence.
Experimental evaluation shows the error in the estimated transformation when using well–suited and ill–suited data. Additionally, we show how a RANSAC approach can be used for eliminating the erroneous robot data from the selected movements.
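The RANSAC idea mentioned above can be sketched generically as follows; solve_hand_eye and movement_error are hypothetical placeholder helpers standing in for a dual quaternion hand-eye solver and its consistency measure, so this is only an outline of the selection scheme, not the paper's implementation:

# Sketch: RANSAC over relative robot/camera movements for hand-eye calibration.
import random

def ransac_hand_eye(movements, solve_hand_eye, movement_error,
                    sample_size=3, iterations=200, threshold=0.01):
    """movements: list of (hand_motion, eye_motion) relative-movement pairs."""
    best_model, best_inliers = None, []
    for _ in range(iterations):
        sample = random.sample(movements, sample_size)
        model = solve_hand_eye(sample)                   # candidate X from a minimal set
        inliers = [m for m in movements
                   if movement_error(model, m) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # Re-estimate from all inliers, i.e. with the erroneous robot data removed.
    return solve_hand_eye(best_inliers), best_inliers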
Robust registration of two 3-D point sets is a common problem in computer vision.
The iterative closest point (ICP) algorithm is undoubtedly the most popular algorithm for solving this kind of problem. In this paper, we present the Picky ICP algorithm, which has been created by merging several extensions of the standard ICP algorithm, thus improving its robustness and computation time.
Using pure 3-D point sets as input data, we do not consider additional information like point color or neighborhood relations. In addition to the standard ICP algorithm and the Picky ICP algorithm proposed in this paper, a robust algorithm due to Masuda and Yokoya and the RICP algorithm by Trucco et al. are evaluated.
We have experimentally determined the basin of convergence, robustness to noise and outliers, and computation time of these four ICP-based algorithms.
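For illustration, a compact ICP iteration with simple outlier handling, in the spirit of (but not identical to) the extensions merged into Picky ICP, could be sketched as follows:

# Sketch: ICP with k-D tree correspondences, rejection of the worst pairs,
# and a closed-form rigid alignment (Kabsch/SVD) in each iteration.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t for corresponded points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dst.mean(0) - R @ src.mean(0)

def icp(src, dst, iterations=30, keep_ratio=0.8):
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = (R @ src.T).T + t
        dists, idx = tree.query(moved)
        order = np.argsort(dists)[: int(keep_ratio * len(src))]  # drop worst pairs (outliers)
        R, t = best_rigid_transform(src[order], dst[idx[order]])
    return R, t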
Creating long-lived software systems requires a technology to build systems with good maintainability. One of the core ideas of the Model Driven Architecture (MDA) is to ease the change of the run-time platform by raising the level of abstraction in which just the business aspects are modelled, and by separating business aspects from technical issues and implementation details. This article analyses the MDA approach with respect to maintainability. We argue that MDA systems will become even harder to maintain because the maintainability depends on the system's (development) environment. MDA, UML and other base technologies are still under development, therefore the tools will change considerably. While the MDA possibly eases the change of the run-time platform, we show that it is quite difficult to exchange a link in the development tool chain. Our argumentation is based on the general properties of software evolution and the dependency chains in the development and run-time environments. It is backed by experiences with MDA development as well as by analogies to general maintenance experiences.
This contribution describes how reference architectures make the MDA usable in practice. The reference architectures provide the conceptual support for the construction and implementation of software, while the MDA provides the framework for tool support. Practical feasibility is demonstrated with the open-source framework AndroMDA and a reference architecture of iteratec GmbH.
Java Server Faces
(2004)
The strong technical orientation of the multimedia evolution so far reveals a lack of theoretical foundation: both during the evolution and during the application of multimedia technology, well-founded theoretical concepts are missing. The intention of this paper is to show categories of different information representations and interaction types and their strengths in representing contents. A classification of multimedia information and interaction types is given, as well as an overview of the problem fields of multimedia, especially in the field of learning theory. This classification is used to give some guidelines for using and combining multimedia contents in multimedia systems.
OBJECTIVES:
To generate a fast and robust 3-D visualization of the operation site during minimal invasive surgery.
METHODS:
Light fields are used to model and visualize the 3-D operation site during minimal invasive surgery. An endoscope positioning robot provides the position and orientation of the endoscope. The a priori unknown transformation from the endoscope plug to the endoscope tip (hand-eye transformation) can either be determined by a three-step algorithm, which includes measuring the endoscope length by hand, or by using an automatic hand-eye calibration algorithm. Both methods are described in this paper and their respective computation times and accuracies are compared.
RESULTS:
Light fields were generated during real operations and in the laboratory. The comparison of the two methods to determine the unknown hand-eye transformation was done in the laboratory. The results which are being presented in this paper are: rendered images from the generated light fields, the calculated extrinsic camera parameters and their accuracies with respect to the applied hand-eye calibration method, and computation times.
CONCLUSION:
Using an endoscope positioning robot and knowing the hand-eye transformation, the fast and robust generation of light fields for minimal invasive surgery is possible.
Nowadays, software development usually involves creating various models and description fragments. Some of these artifacts describe the core of the application, such as the data model or the user interaction model. Other artifacts describe cross-cutting concerns, such as security or the requirement: "every change of data has to be confirmed by the user before it is written into the database". During the development process these artifacts are combined, transformed, and finally implemented manually or even automatically. For instance, a designer may combine a model of a dialog component specifying an action that changes data with a common description of a generic confirm dialog. The integrated dialog description may afterwards be implemented by a programmer. The manual combination of both artifacts and the transformation of the combined description into code are error-prone and hard to change. Industry basically tests or actually uses two approaches for the combination today: 1. At the level of code and execution, Aspect-Oriented Programming (AOP) is used [KLM+97]. An aspect defines a cross-cutting concern and the weaving instructions; a code weaver provides the actual weaving at compile time or run time. 2. At the level of design models and analysis models, Model-Driven Software Development (MDSD) is used [KWB03]. A model in MDSD is a first-class development artifact. Such models are significantly more abstract than the implementation code; however, they cannot be executed. Hence, an abstract model is transformed into another, typically more detailed, one. A series of such transformations results in executable code. Thereby, a model transformer or a code generator reads some of the artifacts; the other artifacts and the combination rules are implemented in transformation rules or code generation templates (or somewhere else in the transformation/generation approach).
Abstraction is the most basic principle of software engineering. Abstractions are provided by models. Modeling and model transformation constitute the core of model-driven development. Models can be refined and finally be transformed into a technical implementation, i.e., a software system.
The aim of this book is to give an overview of the state of the art in model-driven software development. Achievements are considered from a conceptual point of view in the first part, while the second part describes technical advances and infrastructures. Finally, the third part summarizes experiences gained in actual projects employing model-driven development.
Beydeda, Book and Gruhn put together the results from leading researchers in this area, both from industry and academia. The result is a collection of papers which gives both researchers and graduate students a comprehensive overview of current research issues and industrial forefront practice, as promoted by OMG’s MDA initiative.
Deploy and manage SQL Server 2005 with ease. Learn to use all the powerful features available in SQL Server 2005 from this straightforward, hands-on guide.
Set up SQL Server 2005, automate system administration tasks, execute simple and complex database queries, and use the robust analysis, business intelligence, and reporting tools. Troubleshooting, data partitioning, replication, and query optimization are also covered.
With SQL Server 2005: A Beginner's Guide, you'll be able to set up a secure, reliable, and productive data management platform in no time.
Essential Skills for Database Professionals
- Install and customize SQL Server 2005
- Create, alter, and remove database objects with Transact-SQL statements
- Use SQL Server as a native XML database system
- Tune your database system for optimal performance
- Use the new SQL Server Management Studio tool for executing and analyzing ad hoc queries
- Retrieve data from more than one source using join operations and SELECT statements
- Secure your database using two different authentication modes--Windows and mixed
- Restore databases using transaction logs and backup and recovery methods
- Streamline system administration tasks using the SQL Server Agent service tool
- Analyze and manage information stored in a data warehouse with Microsoft Analysis Services
The paper presents an extended hand-eye calibration approach that, in contrast to the standard method, does not require a calibration pattern for determining camera position and orientation. Instead, a structure-from-motion algorithm is applied for obtaining the eye-data that is necessary for computing the unknown hand-eye transformation.
Different ways of extending the standard algorithm are presented, which mainly involves the estimation of a scale factor in addition to rotation and translation. The proposed methods are experimentally compared using data obtained from an optical tracking system that determines the pose of an endoscopic camera.
The approach is of special interest in our clinical setup, as the usage of an unsterile calibration pattern is difficult in a sterile environment.
We present an iterative registration algorithm for aligning two differently scaled 3-D point sets. It extends the popular Iterative Closest Point (ICP) algorithm by estimating a scale factor between the two point sets in every iteration.
The presented algorithm is especially useful for the registration of point sets generated by structure-from-motion algorithms, which only reconstruct the 3-D structure of a scene up to scale. Like the original ICP algorithm, the presented algorithm requires a rough pre-alignment of the point sets.
In order to determine the necessary accuracy of the pre-alignment, we have experimentally evaluated the basin of convergence of the algorithm with respect to the initial rotation, translation, and scale factor between the two point sets.
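The per-iteration scale estimation can be illustrated with a closed-form similarity alignment between corresponded point sets in the style of Umeyama's method; this is a generic sketch, not necessarily the paper's exact formulation:

# Sketch: closed-form similarity transform (scale, rotation, translation).
import numpy as np

def similarity_transform(src, dst):
    """Return s, R, t minimizing ||dst - (s * R @ src + t)||^2 for corresponded points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))     # cross-covariance
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])      # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.mean(np.sum(src_c ** 2, axis=1))
    return s, R, dst.mean(0) - s * R @ src.mean(0)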
The paper presents a new vector quantization based approach for selecting well-suited data for hand-eye calibration from a given sequence of hand and eye movements.
Data selection is essential if control of the movements used for calibration is not possible, especially when using continuously recorded data. The new algorithm is compared to another method for data selection as well as to the processing of subsequent movements.
Experimental results on real and synthetic data sets show the superior performance of the new approach with respect to calibration errors and computation time.
Real data has been obtained from an optical tracking system and a camera mounted on an endoscope, the goal being the reconstruction of medical light fields.
One of the first German-language books on the topic! Marco Kuhrmann / Gerd Beneken: Windows® Communication Foundation - Konzepte, Programmierung, Migration. The Windows Communication Foundation is the communication framework included in .NET 3.0. It forms the basis for service-oriented, Web-Service-based, distributed applications on Microsoft Windows. This book gives you a comprehensive overview of the new platform. It introduces the fundamental concepts and provides a compact guide to software development based on the Windows Communication Foundation. Using practical examples, it leads you through topics such as services and messaging, up to questions of migration.
Referenzarchitekturen
(2006)
The architecture of a software system is essentially the description of the system in terms of individual building blocks and the relationships between them. The choice of a particular architecture is a fundamental decision in the development process and has a major influence on the quality of the resulting system. This handbook provides, for the first time, a well-founded introduction to and overview of the state of the art and forward-looking developments in the field of software architectures. Starting from the role of the software architect, the construction, evolution, and migration of software architectures are treated systematically. The Unified Modeling Language (UML) is predominantly used as the modeling language. To convey a comprehensive understanding of the significance of architecture descriptions, the topics of management, evaluation, and reuse of software architectures are also covered, as are newer concepts such as Model-Driven Architecture (MDA), software product lines, reverse engineering, and performance and security aspects. The concepts are illustrated with examples. The appendix contains a chapter on the formal foundations of architecture modeling, an overview of architecture description languages, and a glossary. The book is a joint work of the members of the software architecture working group of the Gesellschaft für Informatik.
This paper reports the experiences made with a multimedia-based case study in academic education. The case study has been used within three courses on business process engineering and was compared to a case study based solely on text. 13 assumptions have been evaluated. The main conclusion is that the multimedia-based case study is much more practice-oriented than a text-based case study. The solutions of the students who worked on the multimedia-based case study were also of higher quality. On the other hand, the expectations students have of a multimedia-based system are hard to meet. Based on these experiences, some hints for the further development of multimedia-based case studies are formulated.
The tracheoesophageal (TE) substitute voice is currently the state-of-the-art treatment to restore the ability to speak after laryngectomy. Intelligibility while talking over a telephone is an important clinical factor, as it is a crucial part of the patients' social life. An objective way to rate the intelligibility of substitute voices when talking over a telephone is desirable in order to improve post-laryngectomy speech therapy. An automatic speech recognition (ASR) system was applied to 41 high-quality recordings of post-laryngectomy patients. The ASR system was trained with normal, non-pathologic speech. It yielded a word accuracy (WA) of 36.9%±18.0%; compared to the intelligibility rating of a group of human experts, the ASR system had a correlation coefficient of -.88. After downsampling the 41 recordings to telephone quality, the ASR system reached a WA of 26.4%±13.9%, leading to a correlation coefficient of -.80. These results confirm that an ASR system can be used for objective intelligibility rating over the telephone.
Die tracheoösophageale Ersatzstimme: Automatische Verständlichkeitsbewertung über das Telefon
(2006)
Today, the tracheoesophageal (TE) substitute voice is the state of the art in voice rehabilitation after laryngectomy. This study, a subproject of a research project funded by the Deutsche Krebshilfe, addressed the objective assessment of treatment progress. 41 laryngectomees with a TE voice (Provox voice prosthesis) were examined. The aim of the study was to objectively assess and compare intelligibility in conversation and over the telephone, in order to enable patients in the future to be evaluated by telephone from home. A commercially professionalized automatic speech recognition system was used for the assessment. First, close-talking recordings of the text "The North Wind and the Sun" were rated by five experts with regard to intelligibility. From these recordings, simulated telephone recordings were produced by playing them back over a telephone. The target criterion of the automatic analysis was the word accuracy (WA), which was correlated with the experts' voice ratings given on a scale analogous to German school grades. The study yielded a correlation of -0.82 for the close-talking recordings and -0.69 for the telephone recordings. The results show that automatic intelligibility assessment of substitute voices is in principle also possible over the telephone. Ways of compensating for the quality loss caused by the telephone transmission, and thus for the lower correlation, are outlined.
The main focus of this work is the development of new methods for the self-calibration of a rigid stereo camera system. However, many of the algorithms introduced here have a wider impact, particularly in robot hand-eye calibration with all its different areas of application. Stereo self-calibration refers to the computation of the intrinsic and extrinsic parameters of a stereo rig using neither a priori knowledge on the movement of the rig nor on the geometry of the observed scene.
The stereo parameters obtained by self-calibration, namely rotation and translation from left to right camera, are used for computing depth maps for both images, which are applied for rendering correctly occluded virtual objects into a real scene (Augmented Reality).
The proposed methods were evaluated on real and synthetic data and compared to algorithms from the literature. In addition to a stereo rig, an optical tracking system with a camera mounted on an endoscope was calibrated without a calibration pattern using the proposed extended hand-eye calibration algorithm.
The self-calibration methods developed in this work have a number of features, which make them easily applicable in practice: They rely on temporal feature tracking only, as this monocular tracking in a continuous image sequence is much easier than left-to-right tracking when the camera parameters are still unknown.
Intrinsic and extrinsic camera parameters are computed during the self-calibration process, i.e., no calibration pattern is required. The proposed stereo self-calibration approach can also be used for extended hand-eye calibration, where the eye poses are obtained by structure-from-motion rather than from a calibration pattern.
An inherent problem to hand-eye calibration is that it requires at least two general movements of the cameras in order to compute the rigid transformation.
If the motion is not general enough, only a part of the parameters can be obtained, which would not be sufficient for computing depth maps. Therefore, a main part of this work discusses methods for data selection that increase the robustness of hand-eye calibration. Different new approaches are shown, the most successful ones being based on vector quantization.
The data selection algorithms developed in this work can not only be used for stereo self-calibration, but also for classic robot hand-eye calibration, and they are independent of the actually used hand-eye calibration algorithm.
We present an approach for indoor mapping and localization with a mobile robot using sparse range data, without the need for solving the SLAM problem.
The paper consists of two main parts. First, a split and merge based method for dividing a given metric map into distinct regions is presented, thus creating a topological map in a metric framework.
Spatial information extracted from this map is then used for self-localization. The robot computes local confidence maps for two simple localization strategies based on distance and relative orientation of regions.
The local confidence maps are then fused using an approach adapted from computer vision to produce overall confidence maps. Experiments on data acquired by mobile robots equipped with sonar sensors are presented.
We present a novel split and merge based method for dividing a given metric map into distinct regions, thus effectively creating a topological map on top of a metric one. The initial metric map is obtained from range data that are converted to a geometric map consisting of linear approximations of the indoor environment.
The splitting is done using an objective function that computes the quality of a region, based on criteria such as the average region width (to distinguish big rooms from corridors) and overall direction (which accounts for sharp bends).
A regularization term is used in order to avoid the formation of very small regions, which may originate from missing or unreliable sensor data. Experiments based on data acquired by a mobile robot equipped with sonar sensors are presented, which demonstrate the capabilities of the proposed method.
This paper presents a novel algorithm for computing absolute space representations (ASRs), in the sense of Yeap and Jefferies (1988), for mobile robots equipped with sonar sensors and an odometer. The robot is allowed to wander freely (i.e. without following any fixed path) along the corridors in an office environment from a given start point to an end point. It then wanders from the end point back to the start point. The resulting ASRs computed in both directions are shown.
This paper shows how a mobile robot equipped with sonar sensors and an odometer is used to test ideas about cognitive mapping. The robot first explores an office environment and computes a "cognitive map" which is a network of ASRs [1]. The robot generates two networks, one for the outward journey and the other for the journey home.
It is shown that both networks are different. The two networks, however, are not merged to form a single network. Instead, the robot attempts to use distance information implicit in the shape of each ASR to find its way home. At random positions in the homeward journey, the robot calculates its orientation towards home. The robot's performances for both problems are evaluated and found to be surprisingly accurate.
For many aspects of speech therapy an objective evaluation of the intelligibility of a patient's speech is needed. We investigate the evaluation of the intelligibility of speech by means of automatic speech recognition. Previous studies have shown that measures like word accuracy are consistent with human experts' ratings. To ease the patient's burden, it is highly desirable to conduct the assessment via phone. However, the telephone channel influences the quality of the speech signal which negatively affects the results. To reduce inaccuracies, we propose a combination of two speech recognizers. Experiments on two sets of pathological speech show that the combination results in consistent improvements in the correlation between the automatic evaluation and the ratings by human experts. Furthermore, the approach leads to reductions of 10% and 25% of the maximum error of the intelligibility measure.
Tracheoesophageal (TE) speech is a possibility to restore the ability to speak after total laryngectomy, i.e. the removal of the larynx. The quality of the substitute voice has to be evaluated during therapy. For the intelligibility evaluation of German speakers over telephone, the Post-Laryngectomy Telephone Test (PLTT) was defined. Each patient reads out 20 of 400 different monosyllabic words and 5 out of 100 sentences. A human listener writes down the words and sentences understood and computes an overall score. This paper presents a means of objective and automatic evaluation that can replace the subjective method. The scores of 11 naïve raters for a set of 31 test speakers were compared to the word recognition rate of speech recognizers. Correlation values of about 0.9 were reached.
Previously we have shown that ASR technology can be used to objectively evaluate pathologic speech. Here we report on progress for routine clinical use: 1) We introduce an easy-to-use recording and evaluation environment. 2) We confirm our previous results for a larger group of patients. 3) We show that telephone speech can be analyzed with the same methods with only a small loss of agreement with human experts. 4) We show that prosodic information leads to more robust results. 5) We show that text reference instead of transliteration can be used for evaluation. Using word accuracy of a speech recognizer and prosodic features as features for SVM regression, we achieve a correlation of .90 between the automatic analysis and human experts.
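As a rough illustration of the regression setup named in point 5) above (recognizer word accuracy plus prosodic features regressed onto expert ratings with an SVM), a sketch might look as follows; the feature values, their number, and the single fit-then-correlate evaluation are placeholders for illustration only:

# Sketch: SVR from [word accuracy, prosodic features] to expert intelligibility scores.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.stats import pearsonr

# X: one row per speaker, e.g. [word_accuracy, f0_mean, pause_duration, ...]
X = np.random.rand(40, 4)            # placeholder features
y = np.random.rand(40) * 5           # placeholder expert ratings

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X, y)
r, _ = pearsonr(model.predict(X), y)  # in practice, evaluate with cross-validation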
Previous work has shown that automatic speech recognition methods can be used to automatically assess the intelligibility of speakers with a tracheoesophageal substitute voice (TE voice) [1,2]. In this contribution, an automatic version of the post-laryngectomy telephone test (PLTT, [3]) is presented, an established standard test for intelligibility over the telephone.
C/C++ GE-PACKT
(2007)
When animals (including humans) first explore a new environment, what they remember is fragmentary knowledge about the places visited. Yet, they have to use such fragmentary knowledge to find their way home.
Humans naturally use more powerful heuristics, while lower animals have been shown to develop a variety of methods that tend to utilize two key pieces of information, namely distance and orientation information.
Their methods differ depending on how they sense their environment. Could a mobile robot be used to investigate the nature of such a process, commonly referred to in the psychological literature as cognitive mapping? What might be computed in the initial explorations, and how is the resulting “cognitive map” used for localization?
In this paper, we present an approach using a mobile robot to generate a “cognitive map”, the main focus being on experiments conducted in large spaces that the robot cannot apprehend at once due to the very limited range of its sensors. The robot computes a “cognitive map” and uses distance and orientation information for localization.
We present an approach for indoor mapping and localisation using sparse range data, acquired by a mobile robot equipped with sonar sensors.
The chapter consists of two main parts. First, a split and merge based method for dividing a given metric map into distinct regions is presented, thus creating a topological map in a metric framework. Spatial information extracted from this map is then used for self-localisation on the return home journey.
The robot computes local confidence maps for two simple localisation strategies based on distance and relative orientation of regions. These local maps are then fused to produce overall confidence maps.
When animals (including humans) first explore a new environment, what they remember is fragmentary knowledge about the places visited. Yet, they have to use such fragmentary knowledge to find their way home. Humans naturally use more powerful heuristics, while lower animals have been shown to develop a variety of methods that tend to utilize two key pieces of information, namely distance and orientation information.
Their methods differ depending on how they sense their environment.
Could a mobile robot be used to investigate the nature of such a process, commonly referred to in the psychological literature as cognitive mapping? What might be computed in the initial explorations, and how is the resulting “cognitive map” used to return home?
In this paper, we present a novel approach using a mobile robot to do cognitive mapping. Our robot computes a “cognitive map” and uses distance and orientation information to find its way home.
The process developed provides interesting insights into the nature of cognitive mapping and encourages us to use a mobile robot to do cognitive mapping in the future, as opposed to its popular use in robot mapping.
Today's projects are only rarely handled by a single team at a single site. To coordinate distributed teams, contractors, and subcontractors, a harmonized process is necessary on the one hand, while on the other hand projects in such scenarios must be supported by tools.
The problem-oriented and organization-specific integration of tools and process models is an essential success factor for organizations and their projects. Tools can support users and guide them through prescribed processes such as risk management or reporting. A seamless integration of a process model into a given and accepted tool environment can have a positive effect on the success of process changes.
Efficiency gains, for example through the automatic generation of templates or documentation, also make tool-integrated processes attractive to developers. On the other hand, project-specific requirements on tools and processes must be taken into account, which call for flexibility in application and use.
Continuing the workshops on formalization and application as well as on maturity and quality, this third workshop discusses diverse and multifaceted questions concerning tool support for and the tool-guided application of process models. The semi- and fully automatic integration of process models, the generation of work environments, and options for user support and user guidance play central roles here.
The workshop centers on the lifecycle of process models, with a focus on their operationalization through tools and their guided/supported application. Of particular interest are questions concerning the design and implementation of tools and user support in the context of various process models, such as the V-Modell XT, Scrum, Prince2, RUP, and XP.
Topic overview:
The one-day workshop addresses mainly the following topics:
Improving the acceptance of process models
Tool-supported process introduction:
Introduction and introducibility with the help of tools
Establishment and enforcement, dos and don'ts
Continuous tool-supported process improvement
Costs and benefits, dependencies, advantages and disadvantages
Tool support during project execution
Loosely coupled tool landscapes vs. integrated tools
Tool-supported planning and control of agile projects
Support for small, large, and distributed teams
Tool support for selection, adaptation, and tailoring
Limits of tool support
Formalization and tool support of processes and process steps
Process and maturity models: V-Modell XT, XP, CMMI, RUP, Scrum, etc.
Software architecture and project management are means of making large software development projects manageable. This work connects the two disciplines by means of an architecture theory: a procedure is proposed by which the description of the logical architecture and the project plan can be iteratively reconciled and improved. Procedures for the architecture-based optimization of planning are derived from it. The documentation of architectures by means of architectural views is the second focus. Mathematically founded procedures for generating views are defined with the help of an architecture theory. These procedures produce, for instance, architectural views that support project management and communication within the project by means of planning information. A prototype tool demonstrates the applicability of the theory and of the proposed procedures.
Get started on Microsoft SQL Server 2008 in no time. Learn to use all of the powerful features available in SQL Server 2008 quickly and easily.
Microsoft SQL Server 2008: A Beginner's Guide explains the fundamentals of each topic alongside examples and tutorials that walk you through real-world database tasks.
Install SQL Server 2008, construct high-performance databases, use powerful Transact-SQL statements, create stored procedures and triggers, and execute simple and complex database queries. Performance tuning, Database Engine security, Business Intelligence, and XML are also covered.
- Set up, configure, and maintain SQL Server 2008
- Build and manage database objects using Transact-SQL statements
- Create stored procedures and user-defined functions
- Optimize database performance, availability, and reliability
- Implement solid security using authentication, encryption, and authorization
- Automate tasks using SQL Server Agent
- Create reliable data backups and perform flawless system restores
- Use all-new SQL Server 2008 Business Intelligence, development, and administration tools
- Learn in detail the SQL Server XML technology (SQLXML)
The present paper examines what benefit an automated documentation of the IT infrastructure can have for the ITIL configuration management process, and whether such documentation can be fully automated. The analysis concludes that the documentation process can be fully automated, but that the automated documentation can "only" supply information for ITIL configuration management, i.e., for the CMDB.
Rooted in multi-document summarization, maximum marginal relevance (MMR) is a widely used algorithm for meeting summarization (MS). A major problem in extractive MS using MMR is finding a proper query: the centroid-based query, which is commonly used in the absence of a manually specified query, cannot significantly outperform a simple baseline system. We introduce a simple yet robust algorithm to automatically extract keyphrases (KP) from a meeting, which can then be used as a query in the MMR algorithm. We show that the KP-based system significantly outperforms both the baseline and the centroid-based systems. As human-refined KPs show even better summarization performance, we outline how to integrate the KP approach into a graphical user interface allowing interactive summarization to match the user's needs in terms of summary length and topic focus.
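The generic MMR selection with a keyphrase-based query can be sketched as follows; TF-IDF vectors and cosine similarity are assumed here for concreteness, and the sketch is not the authors' system:

# Sketch: maximal marginal relevance (MMR) with keyphrases as the query.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_summary(utterances, keyphrases, k=10, lam=0.7):
    vec = TfidfVectorizer().fit(utterances + [" ".join(keyphrases)])
    U = vec.transform(utterances)
    q = vec.transform([" ".join(keyphrases)])        # keyphrases act as the query
    rel = cosine_similarity(U, q).ravel()             # relevance to the query
    sim = cosine_similarity(U)                        # redundancy between utterances
    selected = []
    while len(selected) < min(k, len(utterances)):
        red = sim[:, selected].max(axis=1) if selected else np.zeros(len(utterances))
        scores = lam * rel - (1 - lam) * red           # trade off relevance vs. redundancy
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
    return [utterances[i] for i in selected]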
The CALO meeting assistant provides for distributed meeting capture, annotation, automatic transcription and semantic analysis of multiparty meetings, and is part of the larger CALO personal assistant system. This paper summarizes the CALO-MA architecture and its speech recognition and understanding components, which include real-time and offline speech transcription, dialog act segmentation and tagging, question-answer pair identification, action item recognition, decision extraction, and summarization.
Despite considerable work in automatic meeting summarization over the last few years, comparing results remains difficult due to varied task conditions and evaluations. To address this issue, we present a method for determining the best possible extractive summary given an evaluation metric like ROUGE. Our oracle system is based on a knapsack-packing framework, and though NP-Hard, can be solved nearly optimally by a genetic algorithm. To frame new research results in a meaningful context, we suggest presenting our oracle results alongside two simple baselines. We show oracle and baseline results for a variety of evaluation scenarios that have recently appeared in this field.
Tracheoesophageal voice is state-of-the-art in voice rehabilitation after laryngectomy. Intelligibility on a telephone is an important evaluation criterion as it is a crucial part of social life. An objective measure of intelligibility when talking on a telephone is desirable in the field of postlaryngectomy speech therapy and its evaluation.
Based upon successful earlier studies with broadband speech, an automatic speech recognition (ASR) system was applied to 41 recordings of postlaryngectomy patients. Recordings were available in different signal qualities; quality was the crucial criterion for this study.
Compared to the intelligibility rating of 5 human experts, the ASR system had a correlation coefficient of r = -0.87 and Krippendorff's alpha of 0.65 when broadband speech was processed. The rater group alone achieved alpha = 0.66. With the test recordings in telephone quality, the system reached r = -0.79 and alpha = 0.67.
For medical purposes, a comprehensive diagnostic approach to (substitute) voice has to cover both subjective and objective tests. An automatic recognition system such as the one proposed in this study can be used for objective intelligibility rating with results comparable to those of human experts. This holds for broadband speech as well as for automatic evaluation via telephone.
This paper presents new vector quantization based methods for selecting well-suited data for hand-eye calibration from a given sequence of hand and eye movements.
Data selection can improve the accuracy of classic hand-eye calibration, and make it possible in the first place in situations where the standard approach of manually selecting positions is inconvenient or even impossible, especially when using continuously recorded data.
A variety of methods is proposed, which differ from each other in the dimensionality of the vector quantization compared to the degrees of freedom of the rotation representation, and how the rotation angle is incorporated.
The performance of the proposed vector quantization based data selection methods is evaluated using data obtained from a manually moved optical tracking system (hand) and an endoscopic camera (eye).
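To make the data selection idea concrete, here is an illustrative sketch (not necessarily one of the proposed variants): relative rotations, assumed here to be given as 3-dimensional axis-angle vectors, are vector quantized with k-means, and one representative motion per cluster is kept so that the selected data covers the rotation space evenly.

# Illustrative k-means vector quantization over axis-angle rotation vectors;
# one representative motion per cluster is kept for the calibration.
import numpy as np
from sklearn.cluster import KMeans

def select_motions(rotation_vectors, n_select):
    """rotation_vectors: (N, 3) axis-angle vectors of relative hand motions."""
    rotation_vectors = np.asarray(rotation_vectors, dtype=float)
    km = KMeans(n_clusters=n_select, n_init=10, random_state=0).fit(rotation_vectors)
    selected = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(rotation_vectors[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))  # member closest to the centroid
    return selected  # indices of the motions to use for hand-eye calibration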
This paper describes the use of a mobile robot, equipped with sonar sensors and an odometer, to test navigation by means of a cognitive map. The robot explores an office environment, computes a cognitive map, which is a network of ASRs [36, 35], and attempts to find its way home.
Ten trials were conducted and the robot found its way home each time. From four random positions in two trials, the robot estimated the home position relative to its current position reasonably accurately.
Our robot does not solve the simultaneous localization and mapping problem, and the map it computes is fuzzy and inaccurate, with many details missing.
In each homeward journey, it computes a new cognitive map of the same part of the environment, as seen from the perspective of the homeward journey. We show how the robot uses distance information from both maps to find its way home.
Towards a Language-independent Intelligibility Assessment of Children with Cleft Lip and Palate
(2009)
We describe a novel evaluation system for the intelligibility assessment of children with CLP on standardized tests. The system is solely based on standard cepstral features in the form of MFCCs. No other information, such as word alignments, is used, so the system can easily be adapted to other languages. For each child, one GMM is created by adapting a UBM to the speaker-specific MFCCs. The components of this GMM are concatenated in order to create a so-called GMM supervector. These GMM supervectors are then used as meta-features for an SVR. We evaluated our language-independent system on two different datasets of children suffering from CLP. One dataset contains recordings of 35 German children, who named different pictograms. The other dataset contains recordings of 14 Italian-speaking children, who repeated standardized sentences. On both datasets we achieved high correlations: up to 0.81 for the German dataset and 0.83 for the Italian dataset.
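The supervector construction can be sketched as follows; the relevance factor, the number of UBM components, and the mean-only MAP adaptation are assumptions for the example rather than the exact configuration of the paper.

# GMM supervector sketch: mean-only MAP adaptation of a UBM to one child's MFCC
# frames, then concatenation of the adapted means.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVR

def supervector(ubm, frames, relevance=16.0):
    """ubm: fitted GaussianMixture; frames: (T, D) MFCC matrix."""
    post = ubm.predict_proba(frames)                  # (T, K) responsibilities
    n_k = post.sum(axis=0)                            # soft frame counts per component
    f_k = post.T @ frames                             # first-order statistics, (K, D)
    alpha = (n_k / (n_k + relevance))[:, None]
    adapted = alpha * (f_k / np.maximum(n_k[:, None], 1e-8)) + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                            # concatenated adapted means

# Usage sketch:
#   ubm = GaussianMixture(n_components=64, covariance_type="diag").fit(pooled_mfccs)
#   X = np.vstack([supervector(ubm, m) for m in per_child_mfccs])
#   regressor = SVR(kernel="linear").fit(X, intelligibility_scores)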
This paper presents an unsupervised, graph-based approach for extractive summarization of meetings. Graph-based methods such as TextRank have been used for sentence extraction from news articles. These methods model text as a graph with sentences as nodes and edges based on word overlap. A sentence node is then ranked according to its similarity with other nodes. The spontaneous speech in meetings leads to incomplete, ill-formed sentences with high redundancy and calls for additional measures to extract relevant sentences. We propose an extension of the TextRank algorithm that clusters the meeting utterances and uses these clusters to construct the graph. We evaluate this method on the AMI meeting corpus and show a significant improvement over TextRank and other baseline methods.
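For orientation, a minimal sketch of the underlying TextRank-style ranking (word-overlap graph plus PageRank) is given below; the clustering extension proposed in the paper would group utterances before the graph is built, which is omitted here.

# Word-overlap graph plus PageRank over meeting utterances (TextRank style);
# tokenized_utterances is a list of token lists.
import math
import networkx as nx

def overlap_similarity(a, b):
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / (math.log(len(a) + 1) + math.log(len(b) + 1))

def rank_utterances(tokenized_utterances):
    graph = nx.Graph()
    n = len(tokenized_utterances)
    graph.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            w = overlap_similarity(tokenized_utterances[i], tokenized_utterances[j])
            if w > 0.0:
                graph.add_edge(i, j, weight=w)
    scores = nx.pagerank(graph, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)  # utterance indices, best first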
We introduce a model for extractive meeting summarization based on the hypothesis that utterances convey bits of information, or concepts. Using keyphrases as concepts weighted by frequency, and an integer linear program to determine the best set of utterances, i.e. the set that covers as many concepts as possible while satisfying a length constraint, we achieve ROUGE scores at least as good as those of a ROUGE-based oracle derived from human summaries. This brings us to a critical discussion of ROUGE and the future of extractive meeting summarization.
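The concept-coverage objective can be illustrated with a small sketch. Note that the paper solves the problem exactly with an integer linear program; the greedy loop below merely demonstrates the objective of covering weighted concepts under a length budget.

# Greedy illustration of the weighted concept-coverage objective.
def greedy_concept_cover(utterances, concept_weights, budget):
    """utterances: list of (length, set_of_concepts); budget: max summary length."""
    covered, chosen, used = set(), [], 0
    remaining = list(range(len(utterances)))
    while remaining:
        def gain(i):
            length, concepts = utterances[i]
            new = concepts - covered
            return sum(concept_weights.get(c, 0.0) for c in new) / max(length, 1)
        best = max(remaining, key=gain)
        length, concepts = utterances[best]
        if gain(best) <= 0.0 or used + length > budget:
            remaining.remove(best)  # skip utterances that add nothing or do not fit
            continue
        chosen.append(best)
        covered |= concepts
        used += length
        remaining.remove(best)
    return chosen  # indices of the selected utterances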
This study describes an objective method for measuring intelligibility with the post-laryngectomy telephone test (PLTT) by means of automatic speech recognition technology.
31 speakers with tracheoesophageal substitute voice (25 men and 6 women; 63.4±8.7 years) were first rated by 11 naive listeners. The degree of intelligibility determined by the speech recognition system is given as the percentage of correctly recognized words of a word chain, i.e. the word accuracy or word correctness, and is compared with the subjectively determined PLTT values.
The average overall PLTT intelligibility across the 11 naive listeners is 47%; the automatically determined word accuracy and word correctness are considerably lower (about 0% and about 15%, respectively). However, the correlation between human and machine evaluation is in part above 0.9.
For the overall intelligibility score of the PLTT, an equivalent measure can be computed objectively and efficiently with the help of automatic speech recognition.
In this paper, we present our experience in designing and teaching our first robotics course for students at primary school level.
The course was carried out over a comparatively short period of time, namely 6 weeks with 2 hours per week. In contrast to many other projects, we use robots that researchers actually used to conduct their research, and we discuss problems faced by these researchers. Thus, this is not a behavioural study but a hands-on learning experience for the students.
The aim is to highlight the development of autonomous robots and artificial intelligence as well as to promote science and robotics in schools.
The number of embedded systems in our daily lives that are distributed, hidden, and ubiquitous continues to increase. Many of them are safety-critical. To provide additional or better functionalities, they are becoming more and more complex, which makes it difficult to guarantee safety. It is undisputed that safety must be considered from before the start of development until decommissioning, and that it is particularly important during the design of the system and software architecture. An architecture must be able to avoid, detect, or mitigate all dangerous failures to a sufficient degree. For this purpose, the architectural design must be guided and verified by safety analyses. However, state-of-the-art component-oriented and model-based architectural design approaches use different levels of abstraction to handle complexity. Consequently, safety analyses must also be applied at different levels of abstraction, and it must be checked and guaranteed that they are consistent with each other, which standard safety analyses do not support. In this paper, we present a consistency check for component fault trees (CFTs) that automatically detects commonalities and inconsistencies between fault trees at different levels of abstraction. This facilitates the application of safety analyses in top-down architectural design and reduces effort.
Online Identification of Learner Problem Solving Strategies Using Pattern Recognition Methods
(2010)
Learning and programming environments used in computer science education give feedback to their users via system messages. These are triggered by programming errors and give only "technical" hints without regard to the learners' problem solving process. To adapt the messages not only to the factual but also to the procedural knowledge of the learners, their problem solving strategies have to be identified automatically and during the learning process. This article describes a way to achieve this with the help of pattern recognition methods. Using data from a study with 65 learners aged 12 to 13 using a learning environment for programming, a classification system based on hidden Markov models is trained and integrated into the very same environment. We discuss findings in that data and the performance of the automatic online identification, and present first results from using the developed software in class.
Reverberation effects as observed by room microphones severely degrade the performance of automatic speech recognition systems. We investigate the use of dereverberation by spectral subtraction as proposed by Lebart and Boucher and introduce a simple approach to estimate the required decay parameter by clapping hands. Experiments on a small-vocabulary continuous speech recognition task on read speech show that using the calibrated dereverberation improves the WER from 73.2 to 54.7 for the best microphone. In combination with system adaptation, the WER could be reduced to 28.2, which is only a 16% relative loss of performance compared to using a headset instead of a room microphone.
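A minimal sketch of the spectral-subtraction idea, assuming the reverberation time T60 has already been estimated (e.g. from the hand clap calibration): the late-reverberation power of each frame is predicted from an earlier frame via an exponential decay and subtracted with a spectral floor. Frame delay, floor value, and parameter names are assumptions, not the exact settings of the paper.

# Spectral-subtraction dereverberation in the spirit of Lebart and Boucher.
import numpy as np

def dereverb(power_spec, frame_shift_s, t60, delay_frames=4, floor=0.01):
    """power_spec: (T, F) STFT power spectra; returns the processed spectra."""
    delta = 3.0 * np.log(10.0) / t60                       # decay rate of the room impulse response
    decay = np.exp(-2.0 * delta * delay_frames * frame_shift_s)
    out = power_spec.copy()
    for t in range(delay_frames, power_spec.shape[0]):
        late = decay * power_spec[t - delay_frames]        # estimated late reverberation power
        gain = np.maximum(1.0 - late / np.maximum(power_spec[t], 1e-12), floor)
        out[t] = gain * power_spec[t]
    return out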
The CALO Meeting Assistant (MA) provides for distributed meeting capture, annotation, automatic transcription and semantic analysis of multiparty meetings, and is part of the larger CALO personal assistant system. This paper presents the CALO-MA architecture and its speech recognition and understanding components, which include real-time and offline speech transcription, dialog act segmentation and tagging, topic identification and segmentation, question-answer pair identification, action item recognition, decision extraction, and summarization.
Data fusion plays a central role in more and more automotive applications, especially in driver assistance systems. On the one hand, the process of data fusion combines data and information to estimate or predict the states of observed objects.
On the other hand, data fusion introduces abstraction layers for data description and allows building more flexible and modular systems. The data fusion process can be divided into low-level processing (tracking and object discrimination) and high-level processing (situation assessment).
High-level processing is increasingly becoming the focus of current research, as different assistance applications are combined into one comprehensive assistance system.
Different levels or strategies of data fusion can be distinguished: fusion on the raw data level, fusion on the feature level, and fusion on the decision level.
All fusion strategies can be found in current driver assistance implementations.
The paper gives an overview of the different fusion strategies and shows their application in current driver assistance systems. For low-level processing, a raw data fusion approach in a stereo video system is described; as an example of feature-level fusion, the fusion of radar and camera data for tracking is explained.
As an example of a high-level fusion algorithm, an approach to situation assessment based on multiple sensors is given. The paper describes practical realizations of these examples and points out their potential to further increase traffic safety at reasonably low cost for the overall system.
Efficient safety analyses of complex software-intensive embedded systems are still a challenging task. This article illustrates how model-driven development principles can be used in safety engineering to reduce cost and effort. To this end, the article shows how well-accepted safety engineering approaches can be shifted to the level of model-driven development by integrating safety models into functional development models. Specifically, we illustrate how UML profiles, model transformations, and techniques for multi-language development can be used to seamlessly integrate component fault trees into the UML.
Embedded real-time systems are growing in complexity, which goes far beyond simplistic closed-loop functionality. Current approaches of worst-case execution time (WCET) analysis are used to verify the deadlines of such systems, especially when they are safety-critical. These approaches calculate or measure the WCET as a single value that is expected to be an upper bound for a system's execution time. Overestimations are taken into account to make this upper bound a safe bound, but modern processor architectures with caches, multi-threading, and instruction pipelines often push those overestimations for safe upper bounds into unrealistic regions. Some approaches try to overcome this problem by calculating multiple upper bounds and argue that each single upper bound holds with a certain probability (probabilistic worst-case execution time). Even though some of them tackle the problem of obtaining reliable probabilistic values for such upper bounds, more effort is required. Therefore, we present in this paper how probabilities from safety analysis models can be combined with elements of system development models to calculate a probabilistic worst-case execution time. This approach can be applied to systems that use mechanisms belonging to the area of fault tolerance, since such mechanisms are usually quantified in safety analyses to certify the system as being highly reliable or safe.
The growing complexity of safety-critical embedded systems is leading to an increased complexity of safety analysis models. Frequently used fault tolerance mechanisms have complex failure behavior and produce overhead compared to systems without such mechanisms. The question arises whether the overhead for fault tolerance is acceptable for the increased safety of a system. Manually modeling the timing behavior is cost-intensive and error-prone. Current approaches of safety analysis and execution time analysis are not able to reflect the timing behavior of complex mechanisms under failures. In this paper, we describe an approach that combines safety analysis models with execution times to extract different execution times for different failure conditions. This provides a detailed view of the safety behavior in combination with the produced overhead and allows finding and certifying appropriate fault tolerance mechanisms.
We present a novel lecture browser that utilizes ranked key phrases displayed on a stream graph to overcome the shortcomings of traditional extractive (query-based) summaries. The system extracts key phrases from the ASR transcripts, performs an unsupervised ranking, and displays an initial number of phrases on the stream graph. This graph gives an intuition of when each key phrase is spoken and how dominant it is throughout the lecture. The user can select the phrases to be displayed and furthermore adjust the ranking of all phrases. All user interactions are logged on a server to improve the ranking algorithms and provide user-specific rankings.
A growing number of universities offer recordings of lectures, seminars and talks in an online e-learning portal. However, the user is often not interested in the entire recording, but is looking for parts covering a certain topic. Usually, the user has to either watch the whole video or “zap” through the lecture and risk missing important details. We present an integrated web-based platform to help users find relevant sections within recorded lecture videos by providing them with a ranked list of key phrases. For a user-defined subset of these, a StreamGraph visualizes when important key phrases occur and how prominent they are at the given time. To come up with the best key phrase rankings, we evaluate three different key phrase ranking methods using lectures of different topics by comparing automatic with human rankings, and show that human and automatic rankings yield similar scores using Normalized Discounted Cumulative Gain (NDCG).
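The ranking comparison can be sketched with a small NDCG routine; the graded relevance values per phrase and the cutoff k are assumptions for the example.

# NDCG of an automatic key phrase ranking against graded human relevance
# judgments; relevance maps each phrase to a human score.
import math

def ndcg(ranked_phrases, relevance, k=10):
    def dcg(order):
        return sum(relevance.get(p, 0.0) / math.log2(i + 2) for i, p in enumerate(order[:k]))
    ideal = dcg(sorted(relevance, key=relevance.get, reverse=True))
    return dcg(ranked_phrases) / ideal if ideal > 0.0 else 0.0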
In this paper, we describe a new Java framework for an easy and efficient way of developing new GUI-based speech processing applications. Standard components are provided to display the speech signal, the power plot, and the spectrogram. Furthermore, a component to create a new transcription and to display and manipulate an existing transcription is provided, as well as a component to display and manually correct external pitch values. These Swing components can be easily embedded into one's own Java programs. They can be synchronized to display the same region of the speech file. The object-oriented design provides base classes for the rapid development of custom components.
This paper focuses on the automatic detection of a person's blood alcohol level based on automatic speech processing approaches. We compare 5 different feature types with different ways of modeling. Experiments are based on the ALC corpus of the IS2011 Speaker State Challenge. The classification task is restricted to the detection of a blood alcohol level above 0.5‰. Three feature sets are based on spectral observations: MFCCs, PLPs, and TRAPS. These are modeled by GMMs. Classification is done either by a Gaussian classifier or by SVMs. In the latter case, classification is based on GMM-based supervectors, i.e. concatenations of GMM mean vectors. A prosodic system extracts a 292-dimensional feature vector based on a voiced-unvoiced decision. A transcription-based system makes use of text transcriptions related to phoneme durations and textual structure. We compare the stand-alone performances of these systems and combine them on the score level by logistic regression. The best stand-alone performance is achieved by the transcription-based system, which outperforms the baseline by 4.8% on the development set. Combination on the score level gave a substantial boost when the spectral-based systems were added (73.6%). This is a relative improvement of 12.7% over the baseline. On the test set we achieved a UA of 68.6%, which is a significant improvement of 4.1% over the baseline system.
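A minimal sketch of the score-level fusion step, assuming each subsystem contributes one score per recording; column order and variable names are illustrative.

# Score-level fusion by logistic regression: one column per subsystem
# (e.g. MFCC-GMM, PLP-GMM, TRAPS-GMM, prosodic, transcription-based),
# labels are 0 (below threshold) / 1 (above 0.5 per mille).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_scores(dev_scores, dev_labels, test_scores):
    """dev_scores, test_scores: (N, S) matrices of per-system scores."""
    fusion = LogisticRegression().fit(np.asarray(dev_scores), np.asarray(dev_labels))
    return fusion.predict(np.asarray(test_scores))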
In this work we focus on speaker verification on channels of varying quality, namely Skype and high frequency (HF) radio. In our setup, we assume to have telephone recordings of speakers for training, but recordings of different channels for testing with varying (lower) signal quality. Starting from a Gaussian mixture / support vector machine (GMM/SVM) baseline, we evaluate multi-condition training (MCT), an ideal channel classification approach (ICC), and nuisance attribute projection (NAP) to compensate for the loss of information due to the transmission. In an evaluation on Switchboard-2 data using Skype and HF channel simulators, we show that, for good signal quality, NAP improves the baseline system performance from 5% EER to 3.33% EER (for both Skype and HF). For strongly distorted data, MCT or, if adequate, ICC turn out to be the method of choice.
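For illustration, a simplified sketch of NAP on GMM supervectors: the dominant within-speaker (i.e. channel) directions are estimated and projected out before SVM training and scoring. The number of nuisance dimensions and the SVD-based estimation are assumptions for the example, not the exact recipe of the paper.

# Simplified nuisance attribute projection: estimate channel directions from
# within-speaker variation of supervectors and project them out.
import numpy as np

def train_nap(supervectors, speaker_ids, n_nuisance=40):
    X = np.array(supervectors, dtype=float)
    for spk in set(speaker_ids):
        idx = [i for i, s in enumerate(speaker_ids) if s == spk]
        X[idx] -= X[idx].mean(axis=0)          # keep only within-speaker (channel) variation
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    U = vt[:n_nuisance]                        # (n_nuisance, D) nuisance directions
    return lambda v: v - U.T @ (U @ v)         # apply to every supervector before the SVM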
Get started on Microsoft SQL Server 2012 in no time: learn to use all of the powerful features available in SQL Server 2012 quickly and easily.
Microsoft SQL Server 2012: A Beginner's Guide explains the fundamentals of each topic alongside examples and tutorials that walk you through real-world database tasks.
Install SQL Server 2012, construct high-performance databases, use powerful Transact-SQL statements, create stored procedures and triggers, and execute simple and complex database queries.
Performance tuning, Database Engine security, Business Intelligence, and XML are also covered.
- Set up, configure, and maintain SQL Server 2012
- Build and manage database objects using Transact-SQL statements
- Create stored procedures and user-defined functions
- Optimize database performance, availability, and reliability
- Implement solid security using authentication, encryption, and authorization
- Automate tasks using SQL Server Agent
- Create reliable data backups and perform flawless system restores
- Use all-new SQL Server 2012 Business Intelligence, development, and administration tools
- Learn in detail the SQL Server XML technology (SQLXML)
Skriptum Geschäftsprozesse
(2012)
Embedded real-time systems are growing in complexity, which goes far beyond simplistic closed-loop functionality. Current approaches for worst-case execution time (WCET) analysis are used to verify the deadlines of such systems. These approaches calculate or measure the WCET as a single value that is expected to be an upper bound for a system's execution time. Overestimations are taken into account to make this upper bound a safe bound, but modern processor architectures expand those overestimations into unrealistic areas. Therefore, we present in this paper how probabilities from safety analysis models can be combined with elements of system development models to calculate a probabilistic WCET. This approach can be applied to systems that use mechanisms belonging to the area of fault tolerance, since such mechanisms are usually quantified using safety analyses to certify the system as being highly reliable or safe. A tool prototype implementing this approach is also presented; it provides reliable safe upper bounds by performing a static WCET analysis and overcomes the frequently encountered problem of dependence structures by using a fault injection approach.
A growing number of universities and other educational institutions provide recordings of lectures and seminars as an additional resource to the students. In contrast to educational films that are scripted, directed and often shot by film professionals, these plain recordings are typically not post-processed in an editorial sense. Thus, the videos often contain longer periods of inactivity or silence, unnecessary repetitions, or corrections of prior mistakes. This paper describes the FAU Video Lecture Browser system, a web-based platform for the interactive assessment of video lectures, that helps to close the gap between a plain recording and a useful e-learning resource by displaying automatically extracted and ranked key phrases on an augmented time line based on stream graphs. In a pilot study, users of the interface were able to complete a topic localization task about 29 % faster than users provided with the video only while achieving about the same accuracy. The user interactions can be logged on the server to collect data to evaluate the quality of the phrases and rankings, and to train systems that produce customized phrase rankings.
In earlier studies, we assessed the degree of non-nativeness employing prosodic information. In this paper, we combine prosodic information with (1) features derived from a Gaussian Mixture Model used as Universal Background Model (GMM-UBM), a powerful approach used in speaker identification, and (2) openSMILE, a standard open-source toolkit for extracting acoustic features. We evaluate our approach with English speech from 94 non-native speakers. GMM-UBM or openSMILE modelling alone yields lower performance than our prosodic feature vector; however, adding information from the GMM-UBM modelling or openSMILE by late fusion improves results.
Voice scrambling is widely used to add privacy to the radio communication of various authorities - but is also used by criminals to evade prosecution. In this article, we consider various analog voice scrambling techniques such as fixed frequency inversion, splitband inversion and rolling code scramblers. We explain how to break them using automatically extracted measures and scoring algorithms, and evaluate the proposed system using simulated data. While the simple inversion can be easily broken, the more advanced techniques require additional work prior to unsupervised automatization; the presented user interface allows the user to refine the automatic results to obtain a high quality solution.
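A hedged sketch of the simplest case, fixed frequency inversion: the scrambler mirrors the spectrum around a carrier frequency, which amounts to ring modulation followed by low-pass filtering, and applying the same operation with the correct carrier restores the speech. The carrier value and filter order below are illustrative; finding the carrier automatically is what the scoring algorithms in the article address.

# Fixed frequency inversion / inversion descrambling by ring modulation.
import numpy as np
from scipy.signal import butter, lfilter

def invert_spectrum(audio, sample_rate, carrier_hz=3300.0):
    t = np.arange(len(audio)) / sample_rate
    modulated = audio * np.cos(2.0 * np.pi * carrier_hz * t)   # mirrors the band around the carrier
    b, a = butter(8, carrier_hz / (sample_rate / 2.0), btype="low")
    return lfilter(b, a, modulated)                            # remove the upper image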
In the past decade, semi-continuous hidden Markov models (SCHMMs) have not attracted much attention in the speech recognition community. Growing amounts of training data and increasing sophistication of model estimation led to the impression that continuous HMMs are the best choice of acoustic model. However, recent work on the recognition of under-resourced languages faces the same old problem of estimating a large number of parameters from limited amounts of transcribed speech. This has led to a renewed interest in methods of reducing the number of parameters while maintaining or extending the modeling capabilities of continuous models. In this work, we compare classic and multiple-codebook semi-continuous models using diagonal and full covariance matrices with continuous HMMs and subspace Gaussian mixture models. Experiments on the RM and WSJ corpora show that while a classical semi-continuous system does not perform as well as a continuous one, multiple-codebook semi-continuous systems can perform better, particularly when using full-covariance Gaussians.
We describe a lattice generation method that is exact, i.e. it satisfies all the natural properties we would want from a lattice of alternative transcriptions of an utterance. This method does not introduce substantial overhead above one-best decoding. Our method is most directly applicable when using WFST decoders where the WFST is “fully expanded”, i.e. where the arcs correspond to HMM transitions. It outputs lattices that include HMM-state-level alignments as well as word labels. The general idea is to create a state-level lattice during decoding, and to do a special form of determinization that retains only the best-scoring path for each word sequence. This special determinization algorithm is a solution to the following problem: Given a WFST A, compute a WFST B that, for each input-symbol-sequence of A, contains just the lowest-cost path through A.
Folks who were here last winter prior to ASRU might be familiar with the title of this talk. But don't be misled, I'll have something new for you. In this talk, I will give an overview of the FAU Lecture Browser, which I developed in the context of my thesis. I will start out with the description of a novel data set: the LME Lectures are a corpus of two series of graduate-level computer science lectures with 18 recordings each. The courses cover topics in medical image processing and pattern analysis/machine learning. The roughly 40 hours of speech were manually transcribed, and one particular lecture was annotated with key phrases by five human raters. Using this data set, I trained three different speech recognizers using regular continuous, multi-codebook semi-continuous and subspace Gaussian mixture models, which achieve an error rate of about 10% WER. I will then briefly describe the key phrase extraction and automatic ranking, which was compared against five raters on one lecture recording. Finally, I will talk about a small usability study where 10 students were asked to perform a certain task, with and without the proposed lecture browser. Although the number of participants is limited, the numbers are interesting: the users that had the interface could complete the tasks about 30% faster than the control group, while maintaining about the same accuracy.
One aspect of voice and speech evaluation after laryngeal cancer is acoustic analysis. Perceptual evaluation by expert raters is a standard in the clinical environment for global criteria such as overall quality or intelligibility. So far, automatic approaches evaluate acoustic properties of pathologic voices based on voiced/unvoiced distinction and fundamental frequency analysis of sustained vowels. Because of the high amount of noisy components and the increasing aperiodicity of highly pathologic voices, a fully automatic analysis of fundamental frequency is difficult. We introduce a purely data-driven system for the acoustic analysis of pathologic voices based on recordings of a standard text.
The textbook Software Requirements by Ulrike Hammerschall and Gerd Beneken introduces the basic concepts of requirements engineering and uses many illustrative examples to show how to proceed systematically and methodically in eliciting, documenting, specifying, modeling, validating, and managing software requirements. With its content and its didactically valuable structure, the book is aimed at students of computer science and business information systems as well as of all related fields of study who deal with the topics of software engineering or requirements engineering.
Software requirements are the users' requirements regarding the functionality of a planned software system. Requirements engineering is the process of methodically eliciting and describing these requirements. The art of good requirements engineering is the development of a stable requirements baseline as a reliable foundation for the further development of the software.
This book introduces the basic concepts of requirements engineering and uses many examples to show how to proceed systematically and methodically in eliciting, documenting, specifying, modeling, validating, and managing software requirements. Detailed method descriptions serve as explanation, and a continuous case example helps the reader follow the application of the methods. With the exercises at the end of each chapter, the methods can be practiced by the reader.
In addition to classic document-driven requirements engineering, the book deals with the methods of agile requirements engineering and compares the two approaches. It also looks beyond its own field and considers the interfaces between requirements engineering and other subprocesses within the development process.
The book is aimed at students of computer science and business information systems as well as of all related fields of study who deal with the topics of software engineering or requirements engineering.
- The RE process, procedure, and methodology.
- Requirements elicitation, documentation, and specification.
- Cross-cutting tasks such as validation, modeling, and management of requirements.
- Agile RE, procedure, and methodology.
- Interfaces to neighboring subprocesses (project management, quality management, software architecture) as well as to usability engineering.
- Introduction and improvement of the requirements engineering process in an organization.
Agenda:
- Career goal: software engineer
- Projects within the computer science degree program
- Project examples: IT partner in research projects
Labor für Software-Technik (software engineering lab):
- IT partner in research projects
- Custom developments for a single customer
- Support for start-ups / testing of business ideas
- Projects with small and medium-sized enterprises
Collaboration with the university of applied sciences: next steps
(Background) Empirical Software Engineering (SE) strives to provide empirical evidence about the pros and cons of SE approaches. This kind of knowledge becomes relevant when the issue is whether to change from a currently employed approach to a new one or not. An informed decision is required and is particularly important in the development of safety-critical systems. For example, for the safety analysis of safety-critical embedded systems, methods such as Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA) are used. With the advent of model-based systems and software development, the question arises whether safety engineering methods should also be adopted. New technologies such as Component Integrated Fault Trees (CFT) come into play. Industry demands to know the benefits of these new methods over established ones such as Fault Trees (FT). (Methods) For the purpose of comparing CFT and FT with regard to the capabilities of the safety analysis methods (such as the quality of the results) and to the participants' rating of the consistency, clarity, and maintainability of the methods, we designed a comparative study as a controlled experiment using a within-subject design. The experiment was run with seven academic staff members working towards their PhD. The study was replicated with eleven domain experts from industry. (Results) Although the analysis of the tasks' solutions showed that the use of CFT did not yield a significantly different number of correct or incorrect solutions, the participants rated the modeling capacities of CFT higher in terms of model consistency, clarity, and maintainability. (Conclusion) From this first evidence, we conclude that CFT have the potential of being beneficial for companies looking for a safety analysis approach for projects using model-based development.
In safety analysis for safety-critical embedded systems, methods such as FMEA and fault trees (FT) are strongly established in practice. However, the current shift towards model-based development has resulted in various new safety analysis methods, such as Component Integrated Fault Trees (CFT). Industry demands to know the benefits of these new methods. To compare CFT to FT, we conducted a controlled experiment in which 18 participants from industry and academia had to apply each method to safety modeling tasks from the avionics domain.
Although the analysis of the solutions showed that the use of CFT did not yield a significantly different number of correct or incorrect solutions, the participants subjectively rated the modeling capacities of CFT significantly higher in terms of model consistency, clarity, and maintainability. The results are promising for the potential of CFT as a model-based approach.
In this paper we apply diagnostic analysis to gain a deeper understanding of the performance of the keyword search system that we have developed for conversational telephone speech in the IARPA Babel program. We summarize the Babel task, its primary performance metric, “actual term weighted value” (ATWV), and our recognition and keyword search systems. Our analysis uses two new oracle ATWV measures, a bootstrap-based ATWV confidence interval, and includes a study of the underpinnings of the large ATWV gains due to system combination. This analysis quantifies the potential ATWV gains from improving the number of true hits and the overall quality of the detection scores in our system's posting lists. It also shows that system combination improves our systems' ATWV via a small increase in the number of true hits in the posting lists.
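For reference, a small sketch of the ATWV computation as defined for the Babel evaluations: per-keyword miss and false-alarm probabilities are combined with the standard weight beta = 999.9 and averaged over keywords with at least one reference occurrence. The per-keyword counts are assumed to come from aligning the system's posting lists to the reference.

# ATWV sketch: average of 1 - P_miss - beta * P_fa over keywords in the reference.
def atwv(per_keyword_counts, total_speech_seconds, beta=999.9):
    """per_keyword_counts: iterable of (n_true, n_correct_hits, n_false_alarms)."""
    values = []
    for n_true, n_hit, n_fa in per_keyword_counts:
        if n_true == 0:
            continue                                  # keywords without reference occurrences are excluded
        p_miss = 1.0 - n_hit / n_true
        p_fa = n_fa / (total_speech_seconds - n_true)
        values.append(1.0 - p_miss - beta * p_fa)
    return sum(values) / len(values) if values else 0.0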
This paper describes the acquisition, transcription and annotation of a multi-media corpus of academic spoken English, the LMELectures. It consists of two lecture series that were read in the summer term 2009 at the computer science department of the University of Erlangen-Nuremberg, covering topics in pattern analysis, machine learning and interventional medical image processing. In total, about 40 hours of high-definition audio and video of a single speaker were acquired in a constant recording environment. In addition to the recordings, the presentation slides are available in machine-readable (PDF) format. The manual annotations include a suggested segmentation into speech turns and a complete manual transcription that was done using BLITZSCRIBE2, a new tool for rapid transcription. For one lecture series, the lecturer assigned key words to each recording; one recording of that series was further annotated with a list of ranked key phrases by five human annotators each. The corpus is available for non-commercial purposes upon request.