Interpreting complex data sets makes the integration of effective interaction techniques crucial. Recent work in the field of human-computer interaction has shown strong evidence that multimodal user interaction, i.e. the integration of various input modalities and interaction techniques into one comprehensive user interface, can improve human performance when interacting with complex data sets. However, it is still unclear which factors make these user interfaces superior to unimodal user interfaces. The contribution of this work is an analytical comparison of a multimodal and a unimodal user interface for a scientific visualization application. We show that multimodal user interaction with simultaneously integrated speech and gesture input improves user performance regarding efficiency and ease of use.
Listening to music played by small MP3 players is almost as ubiquitous as talking over mobile phones. In this paper we analyse interaction elements and usability of current MP3 players. To determine the usability, an empirical study based on a website that presented three simulated MP3 players was performed with over 1000 participants.
The results show that, despite most marketing claims, some interaction devices are not as intuitive and suitable for the task as they should be. We also found some surprising results concerning erroneous usage of navigation keys.
A mobile phone the size of a candy bar offers dozens of complex functions, a masterpiece of engineering. Unfortunately, the more functions are available, the less accessible they are to the average user. The design of the user interface suffers from a lack of suitability for the tasks, poor conformity with user expectations, and suboptimal self-descriptiveness. The usability of modern mobile phones was tested in a broad survey with over 1300 participants. An internet-based simulation offered tasks and an online evaluation. The results show that mobile phones are not only hard to access for novices; even those who consider themselves experts have difficulties when confronted with unknown functions or another brand of phone. Approaches to increase usability are discussed.
Based on experience in teaching programming, we developed the integrated development environment (IDE) 5Code specifically to support beginners. As a first step, a simple, understandable formula was developed for how to advance from the problem to the program in 5 operative steps:
read it → get it → think it → note it → code it.
In order to reduce the cognitive load of the learners effectively, 5Code was designed such that all 5 steps are permanently presented, accessible and executable. Thus, learners are provided with the entire programming context, from the presentation of the task via their own notes and annotations to the code area. Learners can mark and annotate any part of the given task’s text; these annotations can be edited as notes with their own comments. Furthermore, the notes can be dragged into the code area, where they are shown as comments in the coding language. Any modifications to the comments are synchronized between notes and code. 5Code is implemented as a web application and is used in university introductory courses on object-oriented programming.
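The note-to-comment synchronization described above can be pictured with a minimal sketch. All names below are hypothetical and chosen only for illustration; they are not taken from the actual 5Code implementation. The idea is that a note dragged into the code area becomes a comment line, and edits to that comment line are written back to the note.

```typescript
// Hypothetical sketch of 5Code-style note/comment synchronization (not the actual 5Code source).

interface Note {
  id: string;
  text: string;        // the learner's own comment on a marked passage of the task text
}

// When a note is dragged into the code area, it appears as a comment in the coding language.
function noteToComment(note: Note): string {
  return `// [note:${note.id}] ${note.text}`;
}

// When the comment is edited in the code area, the change is propagated back to the note.
function commentToNote(line: string, notes: Map<string, Note>): void {
  const match = line.match(/^\/\/ \[note:(.+?)\] (.*)$/);
  if (match) {
    const note = notes.get(match[1]);
    if (note) note.text = match[2];   // keep note and code comment in sync
  }
}

// Example: a note becomes a comment, the comment is edited in the code, and the note follows.
const notes = new Map<string, Note>([["n1", { id: "n1", text: "read the input line by line" }]]);
const comment = noteToComment(notes.get("n1")!);          // "// [note:n1] read the input line by line"
commentToNote("// [note:n1] read the input with a Scanner", notes);
console.log(notes.get("n1")!.text);                       // "read the input with a Scanner"
```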
We present a security engineering process based on security problem frames and concretized security problem frames. Both kinds of frames constitute patterns for analyzing security problems and associated solution approaches. They are arranged in a pattern system that makes dependencies between them explicit. We describe step-by-step how the pattern system can be used to analyze a given security problem and how solution approaches can be found. Further, we introduce a new frame that focuses on the privacy requirement anonymity.
We give an enumeration of possible problem frames, based on domain characteristics, and comment on the usefulness of the obtained frames. In particular, we investigate problem domains and their characteristics in detail. This leads to fine-grained criteria for describing problem domains. As a result, we identify a new type of problem domain and come up with integrity conditions for developing useful problem frames. Taking a complete enumeration of possible problem frames (with at most three problem domains, of which only one is constrained) as a basis, we find 8 new problem frames, 7 of which we consider useful in practical software development.
Preserving Software Quality Characteristics from Requirements Analysis to Architectural Design
(2006)
We present a pattern system for security requirements engineering, consisting of security problem frames and concretized security problem frames. These are special kinds of problem frames that serve to structure, characterize, analyze, and finally solve software development problems in the area of software and system security. We equip each frame with formal preconditions and postconditions. The analysis of these conditions results in a pattern system that explicitly shows the dependencies between the different frames. Moreover, we indicate related frames, which are commonly used together with the considered frame. Hence, our approach helps security engineers to avoid omissions and to cover all security requirements that are relevant for a given problem.
We present a process to develop secure software with an extensive pattern-based security requirements engineering phase. It supports identifying and analyzing conflicts between different security requirements. In the design phase, we proceed by selecting security software components that achieve security requirements. The process enables software developers to systematically identify, analyze, and finally realize security requirements using security software components. We illustrate our approach by a lawyer agency software example.
We present a threat and risk-driven methodology to security requirements engineering. Our approach has a strong focus on gathering, modeling, and analyzing the environment in which a secure ICT-system to be built is located. The knowledge about the environment comprises threat and risk models. This security-relevant knowledge is used to assess the adequacy of security mechanisms, which are selected to establish security requirements.
This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.
Interdisciplinary communities involve people and knowledge from different disciplines in addressing a common challenge. Differing perspectives, processes, methods, tools, vocabularies, and standards are problems that arise in this context. We present an approach to support bringing together disciplines based on a common body of knowledge (CBK), in which knowledge from different disciplines is collected, integrated, and structured. The novelty of our approach is twofold: first, it introduces a CBK ontology, which allows one to semantically enrich contents in order to be able to query the CBK in a more elaborate way afterwards. Second, it heavily relies on user participation in building up a CBK, making use of the Semantic MediaWiki as a platform to support collaborative writing. The CBK ontology is backed by a conceptual framework, consisting of concepts to structure the knowledge, to provide access options to it, and to build up a common terminology. To ensure a high quality of the provided contents and to sustain the community’s commitment, we further present organizational means as part of our approach. We demonstrate our work using the example of a Network of Excellence EU project, which aims at bringing together researchers and practitioners from services computing, security and software engineering.
ISO 27000 is a well-established series of information security standards. The scope for applying these standards can be an organisation as a whole, single business processes, or even an IT application or IT infrastructure. The context establishment and the asset identification are among the first steps to be performed. The quality of the results produced when performing these steps has a crucial influence on subsequent steps such as identifying losses, vulnerabilities, and possible attacks and defining countermeasures. Thus, a context analysis to gather all necessary information in the initial steps is important, but it is not offered in the standard. In this paper, we focus on the scope of cloud computing systems and present a way to support the context establishment and the asset identification described in ISO 27005. A cloud system analysis pattern and different kinds of stakeholder templates serve to understand and describe a given cloud development problem, i.e. the envisaged IT systems and the relevant parts of the operational environment. We illustrate our support using an online banking cloud scenario.
The Security Twin Peaks
(2011)
In this paper, we present an approach to adapt UMLsec, which is defined for UML 1.5, to support the current UML version 2.3. The new profile UMLsec4UML2 is technically constructed as a UML profile diagram, which is equipped with a number of integrity conditions expressed using OCL. Consequently, the UMLsec4UML2 profile can be loaded in any Eclipse-based EMF- and MDT-compatible UML editing tool to develop and analyze different kinds of security models. The OCL constraints replace the static checks of the tool support for the old UMLsec defined for UML 1.5. Thus, the UMLsec4UML2 profile not only provides the full expressiveness of UML 2.3 for security modeling, it also brings considerably more freedom in selecting a basic UML editing tool, and it integrates modeling and analyzing security models. Since UML 2.3 comprises new diagram types, as well as new model elements and new semantics of diagram types already contained in UML 1.5, we consider a number of these changes in detail. More specifically, we consider composite structure and sequence diagrams with respect to modeling security properties according to the original version of UMLsec. The goal is to use UMLsec4UML2 to specify architectural security patterns.
In this paper, the authors aim to present a threat and risk-driven methodology for security requirements engineering. The chosen approach has a strong focus on gathering, modeling, and analyzing the environment in which a secure ICT-system to be built is located. The knowledge about the environment comprises threat and risk models. As presented in the paper, this security-relevant knowledge is used to assess the adequacy of security mechanisms, which are then selected to establish security requirements.
Developing security-critical systems is difficult, and there are many well-known examples of vulnerabilities exploited in practice. In fact, there has recently been a lot of work on methods, techniques, and tools to improve this situation already at the level of system specification and design. However, security-critical systems are increasingly long-living and undergo evolution throughout their lifetime. Therefore, a secure software development approach that supports maintaining the needed levels of security even through later software evolution is highly desirable. In this chapter, we recall the UMLsec approach to model-based security and discuss tools and techniques to model and verify the evolution of UMLsec models.
The authors present a security engineering process based on security problem frames and concretized security problem frames. Both kinds of frames constitute patterns for analyzing security problems and associated solution approaches. They are arranged in a pattern system that makes dependencies between them explicit. The authors describe step-by-step how the pattern system can be used to analyze a given security problem and how solution approaches can be found. Afterwards, the security problems and the solution approaches are formally modeled in detail. The formal models serve to prove that the solution approaches are correct solutions to the security problems. Furthermore, the formal models of the solution approaches constitute a formal specification of the software to be developed. Then, the specification is implemented by generic security components and generic security architectures, which constitute architectural patterns. Finally, the generic security components and the generic security architecture that composes them are refined and the result is a secure software product built from existing and/or tailor-made security components.
Considering legal aspects during software development is a challenging problem, due to the cross-disciplinary expertise required. The problem is even more complex for cloud computing systems, because of the international distribution, huge amounts of processed data, and the large number of stakeholders that own or process the data. Approaches exist to deal with parts of the problem, but they are isolated from each other. We present an integrated method for the elicitation of legal requirements. A cloud computing online banking scenario illustrates the application of our method. The running example deals with the problem of storing personal information in the cloud and is based upon the BDSG (German Federal Data Protection Act). We describe the structure of the online banking cloud system using an existing pattern-based approach. The elicited information is further refined and processed into functional requirements for software development. Moreover, our method covers the analysis of security-relevant concepts such as assets and attackers, particularly with regard to laws. The requirements artifacts then serve as inputs for existing patterns for the identification of laws relevant for the online banking cloud system. Finally, our method helps to systematically derive functional as well as security requirements that realize the previously identified laws.
The discipline of engineering secure software and services brings together researchers and practitioners from software, services, and security engineering. This interdisciplinary community is fairly new; it is still not well integrated and is therefore confronted with differing perspectives, processes, methods, tools, vocabularies, and standards. We present a Common Body of Knowledge (CBK) to overcome the aforementioned problems. We capture use cases from research and practice to derive requirements for the CBK. Our CBK collects, integrates, and structures knowledge from the different disciplines based on an ontology that allows one to semantically enrich content so that the CBK can be queried. The CBK heavily relies on user participation, making use of the Semantic MediaWiki as a platform to support collaborative writing. The ontology is complemented by a conceptual framework, consisting of concepts to structure the knowledge and to provide access to it, and a means to build a common terminology. We also present organizational factors covering dissemination and quality assurance.
An ISO 27001 compliant information security management system is difficult to create, due to the limited support for system development and documentation provided in the standard. We present a structured analysis of the documentation and development requirements in the ISO 27001 standard. Moreover, we investigate to what extent existing security requirements engineering approaches fulfill these requirements. We developed relations between these approaches and the ISO 27001 standard using a conceptual framework originally developed for comparing security requirements engineering methods. The relations include comparisons of important terms, techniques, and documentation artifacts. In addition, we show practical applications of our results.
The “Journal of Virtual Reality and Broadcasting” is an open access E-journal covering advanced media technology for the integration of human computer interaction and modern information systems. The main focus is on the creation of synergies between such basic technologies as computer graphics and state-of-the-art broadcasting techniques.
The main goals are to publish research results in the field of Virtual Reality and Broadcasting, to provoke discussions, and to promote the exchange of ideas and information. Developments in the area have a direct effect on society, therefore social aspects will also be considered. As an interdisciplinary field Virtual Reality requires multilateral collaboration in order to enable new applications.
The Journal publishes articles consecutively and in electronic form only. All articles are peer-reviewed in a strict review process by at least three independent experts from the appropriate field of research and appear in English. The articles are organized in one volume per year, comprising ten to twenty articles. Material that has previously been presented at conferences undergoes a major revision and is extended and modified by the authors with at least 20% new material, according to the Journal's policy for previously published articles. The Journal was established in 2004.
Currently, the submission and publication of articles is free of charge. No author fees are applied.
Marktdaten
(2018)
The research data used in the Trading series of publications comprise historical price data, supplied by Lenz+Partner AG (Germany), for a selection of stocks. The file "FEBRDUSA_1" shows this selection in the form of the German securities identification number (column "Titel"), the trading period (columns "Beginn" and "Ende"), and the stock name. Price data are not contained in this file; they are held in a comprehensive database.
The market data comprise 867 stocks and are the result of a selection process carried out in 2015 with the aim of improving the data quality and tradability of the stocks. The following conditions were required:
- The stocks are traded on the Frankfurt Stock Exchange.
- They are listed in one of the German stock indices DAX, MDAX, TECDAX, SDAX, HDAX, CDAX, Technology All Share, Prime All Share and GEX, or in the American stock indices S&P 500 or Nasdaq 100.
- The price data of the stocks traded in Frankfurt do not end before 2014.
- The (unadjusted) opening price at the end of 2013 is at least EUR 1. This condition is intended to help prevent the trading of penny stocks.
This market definition resulted in 867 stocks whose price data end no earlier than 11 December 2014, so that an evaluation of the market data with complete price values is possible up to that date. The market is large enough to investigate interesting buy conditions with a sufficiently large number of buy candidates.
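Expressed as code, the selection conditions above amount to a simple filter. The sketch below is illustrative only; the record type and field names are assumptions and do not reflect the layout of the actual database.

```typescript
// Hypothetical sketch of the 2015 stock selection filter; field names are illustrative only.

interface StockRecord {
  exchange: string;          // e.g. "Frankfurt"
  indices: string[];         // index memberships
  lastQuoteDate: Date;       // last available trading day in the price history
  openEnd2013: number;       // unadjusted opening price at the end of 2013, in EUR
}

const eligibleIndices = new Set([
  "DAX", "MDAX", "TECDAX", "SDAX", "HDAX", "CDAX",
  "Technology All Share", "Prime All Share", "GEX", "S&P 500", "Nasdaq 100",
]);

function isEligible(s: StockRecord): boolean {
  return s.exchange === "Frankfurt"
    && s.indices.some(i => eligibleIndices.has(i))       // listed in one of the required indices
    && s.lastQuoteDate >= new Date("2014-01-01")         // price data do not end before 2014
    && s.openEnd2013 >= 1;                               // no penny stocks
}

// universe.filter(isEligible) would then yield the 867 stocks of the market definition.
```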
The OECD Base Erosion and Profit Shifting (BEPS) initiative as well as the current fairness-oriented public discussion regarding the taxation of digital business models highlight the importance and complexity of the arm's length principle. In a theoretical model of an internationally fragmented digital good's production process, we show that fairness considerations of tax authorities (namely inequity aversion) can result in a divergence between a perceived "fair" distribution and the arm's length distribution of profits across tax jurisdictions. Our model predicts that a multinational firm follows the fundamental paradigm of international taxation, i.e. the arm's length principle, to properly incentivize internal agents involved in the production of a digital good. However, with inequity-averse tax authorities, we find that tax authorities "prefer" a more equal distribution of profits compared to the arm's length allocation. From a multinational firm's perspective, inequity aversion among tax authorities dampens the strategic incentive to shift profits, in accordance with the arm's length principle, to low-tax countries.
Progressive web applications (PWAs) seem to be the next big thing in the mobile landscape. These are web applications with “super-powers” which are meant to provide the kinds of user experiences previously only native mobile applications could. They are therefore able to match native applications in their capabilities while still thriving on the reach of the web. They achieve this by implementing certain characteristics which originate from innovative standards and sophisticated best practices. Conveniently, as it is the web, PWAs work seamlessly cross-platform. One of the closest things to a PWA so far may be an application created with the NativeScript framework. Such an application is also developed with web technologies yet able to use any native interface directly. This is made possible by a designated runtime that mediates between JavaScript and the mobile system. Eventually, NativeScript is able to offer a high degree of nativity yet also of convenient abstraction during development. This paper sets out to deliver further insight into both approaches by contrasting them based on the characteristics of PWAs. For this purpose, these characteristics are elaborated and adjusted to be applicable to native mobile applications. Hence a reasonable basis for what to expect from a native application is derived. Then NativeScript is assessed on this basis by means of transferable concepts and technologies as well as a prototypic mobile application. Finally, an informed discussion and conclusion is performed based on the results. The comprehensive characteristics of PWAs resolve previous shortcomings with well thought-out concepts and new technologies. These are transferable to native mobile applications to a large extent and may also be put into practice with NativeScript. The framework’s approach may very well be called ingenious; however, in the long run it might fall short of the innovative concept of PWAs. Having said this, it is still able to serve as a practical measure for creating appealing user experiences for multiple systems with arguably little effort. Due to the large overlap, a transformation between the two application types is a realistic option. Either way, web technologies are far from what they used to be, and it is exciting to see how the mobile and web landscape will evolve in the future.
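One concrete example of the standards behind these PWA characteristics is the service worker API, which gives a web application offline and caching capabilities. The following minimal registration sketch uses the standard browser API; the file name "/sw.js" is an assumed, project-specific choice.

```typescript
// Minimal service worker registration, one of the building blocks of a PWA.
// "/sw.js" is an assumed file name; the registration call itself is the standard browser API.
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker
      .register("/sw.js")
      .then(registration => console.log("Service worker registered, scope:", registration.scope))
      .catch(error => console.error("Service worker registration failed:", error));
  });
}
```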
While developing for multiple platforms at once seems like a convenient solution, several challenges arise when trying to abstract the entire mobile development. This paper is meant to evaluate current cross-platform development for mobile applications. The background for its necessity, its conceptual approach and the problems to face when developing cross-platform were determined and explained in detail. Afterwards, certain solutions were evaluated against the former insights. Based on the results, an informed discussion and conclusion was performed. The mobile environment consists of only two big players by now, Android and iOS. These operating systems differ in architecture, design and consequently in the way applications are developed for each of them. Therefore, high demands are made towards cross-platform solutions. Tools which allow for the creation of applications for multiple platforms at once have to match native applications in regard to user experience and performance. At the same time they need to be able to optimize development with the goal of being cost-efficient. Apache Cordova, Xamarin and NativeScript were selected for evaluation in regard to their ability to meet these requirements. Cordova acts as the comparison group of cross-platform tools; it is the big player in the field, and there are reasons for this. However, aspiring solutions with higher nativity and ambitious approaches are emerging. Xamarin and NativeScript deliver top-quality results while offering loosely coupled developments. Therefore it is possible to develop high-quality applications and still benefit from the advantages of platform-independent solutions. As a consequence, mobile development is about to change in the foreseeable future. More sophisticated approaches may lead to a higher number of developments done cross-platform, and rightfully so.
Investing as Random Trial
(2017)
We introduce an investment algorithm for a market of individual securities. The investment algorithm is derived from constraints depending on investment parameters in order to limit the risk and to take an individual investor into account. One constraint is devoted to trading costs. Purchased securities are selected randomly among the securities that meet the buy condition, making trading a random trial. Simulations with historical price data are demonstrated for a simple example: the buy condition is evaluated on the basis of the price relationship for two subsequent trading days, and the sell condition is defined by holding securities for only one day. A trading expert evaluates the expected return for the investment algorithm with respect to the random selection. Thus, the expert informs precisely on how many market players perform when using the same investment algorithm. Its findings hold for a parametrized set of buy conditions simultaneously, which makes a trading expert a valuable tool for theorists as well as for practitioners. In our example, the trading expert clearly demonstrated a significant mean-reversion effect for a horizon of one day.
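A minimal sketch of one such random-trial trading day follows, under assumptions made only for illustration: a mean-reversion style buy condition on two subsequent closing prices and a plain array of daily closes, neither of which is taken from the paper's actual parametrization.

```typescript
// Sketch of a single random-trial trading day (illustrative assumptions, not the paper's exact rules).

type PriceSeries = number[];                          // daily closing prices, oldest first

// Assumed buy condition on two subsequent trading days: yesterday closed below the day before.
function buySignal(prices: PriceSeries, t: number): boolean {
  return t >= 2 && prices[t - 1] < prices[t - 2];
}

// One trading day t: pick one candidate at random, buy at today's close, sell at tomorrow's close.
function randomTrialReturn(market: Map<string, PriceSeries>, t: number): number {
  const candidates = [...market.entries()].filter(([, p]) => buySignal(p, t) && p.length > t + 1);
  if (candidates.length === 0) return 0;              // no security meets the buy condition: no trade
  const [, prices] = candidates[Math.floor(Math.random() * candidates.length)];
  return prices[t + 1] / prices[t] - 1;               // return of the one-day holding period
}

// Averaging randomTrialReturn over many repetitions approximates the expected return that the
// "trading expert" of the paper evaluates analytically for the whole set of possible random selections.
```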
Reconciling work and family life is one of the main issues of welfare state policies in the fields of childcare and long-term care. On that account, policy and research are focused almost exclusively on women – often on the reconciliation of work and childbearing – and social policy at the state level. In our study, we concentrate on men who reconcile gainful employment with elderly care, and we include the company level – a level of analysis often neglected in traditional theoretical approaches and typologies of comparative welfare state research. In Germany, during the last decade, the share of sons who are responsible for taking care of their elderly relatives has remarkably increased. In our qualitative research, we carried out comparative case studies in eleven German companies. We conducted around 60 interviews with male employees caring for an elderly relative, as well as with members of the works councils and human resources departments in different kinds of companies. We analysed which familial, social, professional, legal as well as occupational resources are central for these men, how they cope with reconciling work and care, and which gaps in the welfare system they identify. Interestingly, the overwhelming majority of the caring sons claim not to have problems in reconciling work and care, although they spend significant time on caring. In this paper we try to explain this pattern by looking at their typical care arrangements. We found that while women tend to organise employment around care, men rather seem to organise care around their employment. Given the feminist critique of the “adult worker model” this is an interesting result and needs theoretical reflection. Do men have the solution to the care-blindness of the “adult worker model” without falling into the “cold modern model of care”? Which resources are mainly used in “adult worker care arrangements”? Where are the limits of the approach?
The construction of a basis of a certain lattice of interest is a basic tool in many fields of algorithmic number theory. All too often we cannot compute with the original lattices because of the irrational numbers involved, but have to work with approximations of them. While helpful bounds on the reduction of lattice bases were shown in \cite{buchmann94reducing}, here we introduce the notion of an $(\epsilon,\delta)$-constructable basis of a lattice and determine the precision of vectors that is necessary to extend a set to an $(\epsilon,\delta)$-constructable basis.
The German banking system and the global financial crisis: causes, developments and policy responses
(2009)
Germany’s banking sector has been severely hit by the global financial crisis. In a German context as of February, 2009, this paper reviews briefly the structure of the banking industry, quantifies effects of the crisis on banks and surveys responses of economic policy. It is argued that policy design needs to enhance transparency and enforce the liability principle. In addition, economic policy should not eclipse principles of competition policy.
CFX 11.0 is a Computational Fluid Dynamics (CFD) program for simulating the behavior of systems involving fluid flow, heat transfer, and other related physical processes. It works by solving the equations of fluid flow (in a special form) over a region of interest, with specified (known) conditions on the boundary of that region. This tool is used here to analyse the straw cutter which comminutes the straw left standing in the field after the stripper header has stripped the grains from the crops. The stripper header is a new technique, different from conventional harvester machines, and is still under development and testing. It works on the principle that grains are first stripped off the head of the standing crop, which results in a much reduced intake, freeing up the separation systems to handle a larger volume of grain. With this, however, the new problem of disposing of the long, uncut straw left after stripping of the grains arose. A straw cutter is therefore used after the stripper header stage in this combine harvester; its rotor is equipped with pressed steel blades and rotates towards the crop, i.e., it cuts and lifts the straw over the top. This technique is environmentally friendly because previously farmers would have had to burn the long, uncut straw standing in the fields, whereas now the mulched straw can be used for min-till systems. This rotor-based straw cutting machine is analysed here, which helps in increasing power efficiency, comminution rate, and the quality of the cut straw particles. Here, fluid flow simulation has been preferred over particle simulation:
- The CFX 10.0 tool is not very effective for particle simulation techniques, and computation time increases considerably.
- It is very hard to realize particle simulation in CFX 10.0 because the size and properties of each particle have to be defined, and wall conditions would have to be defined differently for every boundary.
- Besides this, it is hard to realize the cutting effect with straw particles in CFX 10.0, which would otherwise be comminuted under real-life conditions.
- The results obtained from the fluid simulation were validated using experimental results and found to be nearly the same, which clearly obviates any need for performing particle simulation.
This dissertation, written for the Master of European Social Policy Analysis course at the National University of Ireland Maynooth, shows the context, or so-called framework, within which the medical social work service operates in two different welfare states, Ireland and Germany. This means the societal context as a whole, the health system in general (macro-level) and the hospital setting (mezzo-level) in particular. The idea of this publication is to give a short overview of the historic development of medical social work (micro-level) in Ireland and Germany and some background information about what medical social work is. The time frame examined extends from the 1980s to the present. The main questions to be answered are: What are the interdependencies between the macro-, mezzo- and micro-level? Have changes in the health system in general and the hospitals in particular influenced medical social work, and if so, how? What are the similarities and differences in the two countries?
Broader use of virtual reality environments and sophisticated animations spawn a need for spatial sound. Until now, spatial sound design has been based very much on experience and trial and error. Most effects are hand-crafted, because good design tools for spatial sound do not exist. This paper discusses spatial sound authoring and its applications, including shared virtual reality environments based on VRML. New utilities introduced by this research are an inspector for sound sources, an interactive resource manager, and a visual soundscape manipulator. The tools are part of a sound spatialization framework and allow a designer/author of multimedia content to monitor and debug sound events. Resource constraints like limited sound spatialization channels can also be simulated.
The Sound Spatialization Framework is a C++ toolkit and development environment for providing advanced sound spatialization for virtual reality and multimedia applications. The Sound Spatialization Framework provides many powerful display and user-interface features not found in other sound spatialization software packages. It provides facilities that go beyond simple sound source spatialization: visualization and editing of the soundscape, multiple sinks, clustering of sound sources, monitoring and controlling resource management, support for various spatialization backends, and classes for MIDI animation and handling.
Education at the University of Aizu is focussed upon computer science. Besides being the subject matter of many courses, however, the computer also plays a vital role in the educational process itself, both in the distribution of instructional media, and in providing students with valuable practical experience. All students have unlimited access (24-hours-a-day) to individual networked workstations, most of which are multimedia-capable (even video capture is possible in two exercise rooms). Without software and content tailored for computer-aided instruction, the hardware becomes an expensive decoration. In any case, there is a need to better educate the instructors and students in the use of the equipment. In the interest of facilitating effective, collaborative use of network-based computers in teaching, this article explores the impact that a network environment can have on such activities. First, as a general overview, and to examine the motivation for the use of a network environment in teaching, this article reviews a range of different styles of collaboration. Then the article shows what kind of tools are available for use, within the context of what has come to be called Computer-Supported Cooperative Work (CSCW).
The task of the Center for Language Research is to provide content-based English language instruction for students of computer science and engineering. As such, we find ourselves at the confluence of many of the streams currently running through the English Language Teaching profession, including English for Science and Technology (EST), English for Academic Purposes (EAP), English for Specific Purposes (ESP), Computer-assisted language learning (CALL), content-based instruction, and multimedia applications in foreign language pedagogy. This paper describes our initial attempts to construct a number of World Wide Web pages where students will be able to study EST, EAP, and computer science topics on their own in a multimedia environment.