These post-proceedings of the workshop "Visibility in Information Spaces and in Geographic Environments" present a selection of research papers that address the topic of visibility in different contexts. Visibility governs information selection in geographic environments as well as in information spaces and in cognition. Users of social media navigate information spaces and, at the same time, as embodied agents, move through geographic environments. Both activities follow a similar type of information economy, in which decisions by individuals or groups require highly selective filtering to avoid information overload. In this context, visibility refers to the fact that in social processes some actors, topics, or places are more salient than others. Formal notions of visibility include the centrality measures from social network analysis and the plethora of web page ranking methods. Recently, comparable approaches have been proposed for analysing activities in geographic environments: Place Rank, for instance, describes the social visibility of urban places based on the temporal sequence of tourist visit patterns. The workshop aimed to bring together researchers from AI, Geographic Information Science, Cognitive Science, and other disciplines who are interested in understanding how the different forms of visibility in information spaces and geographic environments relate to one another, and how results from basic research can be used to improve spatial search engines, geo-recommender systems, and location-based social networks.
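The ranking idea can be made concrete with a small sketch: a PageRank-style power iteration over a place-to-place transition matrix derived from visit sequences, loosely in the spirit of Place Rank. The place names, transition probabilities, and damping factor below are invented for illustration and do not come from any of the workshop papers.

```java
// Sketch: PageRank-style visibility scores for places, computed by power
// iteration over a column-stochastic transition matrix derived from
// (hypothetical) tourist visit sequences. Values are illustrative only.
public class PlaceVisibility {
    public static void main(String[] args) {
        String[] places = {"Cathedral", "OldTownHall", "RoseGarden"};
        // t[i][j]: probability that a visit to place j is followed by place i
        double[][] t = {
            {0.0, 0.5, 0.7},
            {0.6, 0.0, 0.3},
            {0.4, 0.5, 0.0}
        };
        double damping = 0.85;                 // standard PageRank damping factor
        int n = places.length;
        double[] rank = new double[n];
        java.util.Arrays.fill(rank, 1.0 / n);  // uniform initial visibility
        for (int iter = 0; iter < 50; iter++) {    // power iteration
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double in = 0.0;
                for (int j = 0; j < n; j++) in += t[i][j] * rank[j];
                next[i] = (1 - damping) / n + damping * in;
            }
            rank = next;
        }
        for (int i = 0; i < n; i++)
            System.out.printf("%s: %.3f%n", places[i], rank[i]);
    }
}
```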
Current software model checkers quickly reach their limits when applied to verifying pointer safety properties in source code that includes function pointers and inlined assembly. This paper introduces an alternative technique for checking pointer safety violations, called Symbolic Object Code Analysis (SOCA), which is based on bounded symbolic execution, incorporates path-sensitive slicing, and employs the SMT solver Yices as its execution and verification engine. Extensive experimental results for a prototypical SOCA Verifier, using the Verisec suite and almost 10,000 Linux device driver functions as benchmarks, show that SOCA performs competitively with current source-code model checkers and that it scales well when applied to real operating systems code and pointer safety issues. SOCA effectively explores semantic niches of software that current software verifiers do not reach.
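To illustrate the underlying idea (a sketch of the general technique, not the SOCA implementation itself): a bounded symbolic executor tracks pointer arithmetic symbolically and asks an SMT solver whether an out-of-bounds access is reachable on the current path. The sketch below emits such a query in SMT-LIB 2 text form, which solvers such as Yices accept; the variable names, the path condition, and the 64-byte buffer are invented for illustration.

```java
// Sketch: emit an SMT-LIB 2 query asking whether a (hypothetical) byte
// access buf[i], guarded by the path condition i < n, can fall outside a
// 64-byte buffer. An answer of "sat" means a pointer safety violation is
// reachable on this path; "unsat" means the access is safe here.
public class PointerSafetyQuery {
    public static void main(String[] args) {
        StringBuilder q = new StringBuilder();
        q.append("(set-logic QF_BV)\n");
        q.append("(declare-fun i () (_ BitVec 32))\n");  // symbolic index
        q.append("(declare-fun n () (_ BitVec 32))\n");  // symbolic bound
        // Path condition collected along the executed path: i < n
        q.append("(assert (bvult i n))\n");
        // Negated safety property: index i lies outside [0, 64)
        q.append("(assert (bvuge i (_ bv64 32)))\n");
        q.append("(check-sat)\n");
        q.append("(get-model)\n");
        System.out.print(q);  // pipe into an SMT-LIB 2 solver to discharge
    }
}
```

Since n is unconstrained here, the query is satisfiable (e.g. i = 64, n = 65), exposing the violation; adding a constraint such as n <= 64 would make it unsatisfiable.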
Business-To-Business Integration (B2Bi) is a key mechanism for enterprises to gain competitive advantage. However, developing B2Bi applications is far from trivial. Among the major challenges are reaching agreement among integration partners about the business documents and the control flow of business document exchanges, as well as applying suitable communication technologies to overcome heterogeneous IT landscapes. At the same time, choreography languages such as ebXML BPSS (ebBP), orchestration languages such as WS-BPEL, and Web Services promise to provide the foundations for seamless interactions among business partners. Automatically translating the choreography agreements of integration partners into partner-specific orchestrations is an obvious idea for ensuring conformance of orchestration models to choreography models. Moreover, applying such model-driven development methods promotes productivity and cost-effectiveness, while applying a service-oriented architecture (SOA) based on WS-BPEL and Web Services leverages standardization and decoupling. So far, however, the realization of QoS attributes has not received the attention necessary to make such approaches suitable for B2Bi. In this report, we describe a proof-of-concept implementation of the translation of ebBP choreographies into WS-BPEL orchestrations that respects B2Bi-relevant QoS attributes.
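The translation idea can be sketched independently of concrete ebBP and WS-BPEL syntax: a choreography, viewed as an ordered list of document exchanges between partners, is projected onto each partner as a local sequence of send and receive activities (the counterparts of BPEL invoke and receive). The partner and document names below are invented; real ebBP structure and the QoS handling of the report are out of scope for this sketch.

```java
import java.util.*;

// Sketch: project a choreography (global view of document exchanges) onto
// partner-specific orchestrations (local send/receive sequences) -- the core
// idea behind translating ebBP agreements into WS-BPEL processes.
public class ChoreographyProjection {
    record Exchange(String sender, String receiver, String document) {}

    public static void main(String[] args) {
        List<Exchange> choreography = List.of(       // hypothetical B2Bi flow
            new Exchange("Buyer", "Seller", "PurchaseOrder"),
            new Exchange("Seller", "Buyer", "PurchaseOrderConfirmation"),
            new Exchange("Buyer", "Seller", "ShippingSchedule"));

        Map<String, List<String>> orchestrations = new LinkedHashMap<>();
        for (Exchange e : choreography) {
            // The sender's local view of the exchange is an invoke...
            orchestrations.computeIfAbsent(e.sender(), p -> new ArrayList<>())
                .add("invoke  " + e.document() + " -> " + e.receiver());
            // ...and the receiver's local view is a matching receive.
            orchestrations.computeIfAbsent(e.receiver(), p -> new ArrayList<>())
                .add("receive " + e.document() + " <- " + e.sender());
        }
        orchestrations.forEach((partner, steps) -> {
            System.out.println("Orchestration for " + partner + ":");
            steps.forEach(s -> System.out.println("  " + s));
        });
    }
}
```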
The KI'09 workshop on Complex Cognition was a joint venture of the Cognition group of the Special Interest Group Artificial Intelligence of the German Computer Science Society (Gesellschaft für Informatik) and the German Cognitive Science Association. Dealing with complexity has become one of the great challenges for modern information societies. Reasoning and deciding, planning and acting in complex domains is no longer limited to highly specialized professionals in restricted areas such as medical diagnosis, controlling technical processes, or serious game playing. Complexity has reached everyday life and affects people in such mundane activities as buying a train ticket, investing money, or connecting a home desktop to the internet. Research in cognitive AI can contribute to supporting people navigating the jungle of everyday reasoning, decision making, planning and acting by providing intelligent support technology. Lessons learned from the expert systems research of the nineteen-eighties show that the aim should not be fully automated systems which solve specialized tasks autonomously, but rather interactive assistant systems where user and system work together, taking advantage of the respective strengths of human and machine. To accomplish smooth collaboration between humans and intelligent systems, basic research in cognition is a necessary precondition. Insights into the cognitive structures and processes underlying successful human reasoning and planning can provide suggestions for algorithm design. Even more importantly, insights into the restrictions, typical errors, and misconceptions of cognitive systems provide information about those parts of a complex task from which the human should be relieved. For successful human-computer interaction in complex domains it must furthermore be decided which information should be presented to the user, when, and in what way. We strongly believe that symbolic approaches of AI and psychological research on higher cognition are at the core of success for the endeavor to create intelligent assistant systems for complex domains. While insights into the neurological processes of the brain and into the realization of basic processes of perception, attention, and sensorimotor coordination are important for a basic understanding of the principles of human intelligence, these processes have much too fine a granularity for the design and realization of interactive systems which must communicate with the user on the knowledge level. If human users are not to be incapacitated by a system, system decisions must be transparent to the user, and the system must be able to explain the reasons for its proposals and recommendations. Therefore, even when some of the underlying algorithms are based on statistical or neural approaches, the top level of such systems must be symbolic and rule-based. The papers presented at this workshop on complex cognition give an inspiring and promising overview of current work in the field, providing first building blocks for our endeavor to create knowledge-level intelligent assistant systems for complex domains.
The topics cover modelling basic cognitive processes, interfacing subsymbolic and symbolic representations, dealing with continuous time, Bayesian identification of problem solving strategies, linguistically inspired methods for assessing complex cognitive processes, and complex domains such as the recognition of sketches, the prediction of stock changes, spatial information processing, and coping with critical situations.
Inductive programming is concerned with the automated construction of declarative, often functional, recursive programs from incomplete specifications such as input/output examples. The inferred program must be correct with respect to the provided examples in a generalising sense: it should be neither equivalent to them nor inconsistent with them. Inductive programming algorithms are guided explicitly or implicitly by a language bias (the class of programs that can be induced) and a search bias (determining which generalised program is constructed first). Induction strategies are either generate-and-test or example-driven (analytical). In generate-and-test approaches, hypotheses about candidate programs are generated independently of the given specification; candidates are then tested against the specification, and one or more of the best-evaluated candidates are developed further. In analytical approaches, candidate programs are constructed in an example-driven way. While generate-and-test approaches can -- in principle -- construct any kind of program, analytical approaches have a more limited scope. On the other hand, the efficiency of induction is much higher in analytical approaches. Inductive programming is still mainly a topic of basic research, exploring how the intellectual ability of humans to infer generalised recursive procedures from incomplete evidence can be captured in the form of synthesis methods. The intended applications are mainly in the domain of programming assistance -- either to relieve professional programmers from routine tasks or to enable non-programmers some limited form of end-user programming. Furthermore, in the future, inductive programming techniques might be applied to further areas such as supporting the inference of lemmata in theorem proving or learning grammar rules. Inductive automated program construction was originally addressed by researchers in artificial intelligence and machine learning. In recent years, work on exploiting induction techniques has also started in the functional programming community. Therefore, the third workshop on "Approaches and Applications of Inductive Programming" took place for the first time in conjunction with the ACM SIGPLAN International Conference on Functional Programming (ICFP 2009); the first and second workshops were associated with the International Conference on Machine Learning (ICML 2005) and the European Conference on Machine Learning (ECML 2007). AAIP'09 aimed to bring together researchers from the functional programming and artificial intelligence communities working in the field of inductive functional programming, and to advance fruitful interaction between these communities with respect to programming techniques for inductive programming algorithms, the identification of challenge problems, and potential applications. For everybody interested in inductive programming, we recommend visiting the website: www.inductive-programming.org.
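As a toy illustration of the generate-and-test strategy (deliberately far simpler than the recursive program synthesis discussed at the workshop): enumerate candidate expressions from a small grammar, independently of the examples, and return the first candidate consistent with all input/output pairs. The grammar, constant range, and target function below are invented for illustration.

```java
import java.util.*;
import java.util.function.IntUnaryOperator;

// Sketch: generate-and-test induction over a tiny expression grammar.
// Candidates are generated independently of the examples and only then
// tested against them -- the hallmark of generate-and-test (as opposed
// to analytical, example-driven) inductive programming.
public class GenerateAndTest {
    record Candidate(String name, IntUnaryOperator f) {}

    public static void main(String[] args) {
        int[][] examples = {{0, 1}, {1, 3}, {2, 5}, {3, 7}}; // f(x) = 2x + 1

        // Generate phase: x, x+c, c*x, and c*x+d for small constants c, d.
        List<Candidate> candidates = new ArrayList<>();
        candidates.add(new Candidate("x", x -> x));
        for (int c = 0; c <= 3; c++) {
            final int cc = c;
            candidates.add(new Candidate("x + " + c, x -> x + cc));
            candidates.add(new Candidate(c + " * x", x -> cc * x));
            for (int d = 0; d <= 3; d++) {
                final int dd = d;
                candidates.add(new Candidate(c + " * x + " + d, x -> cc * x + dd));
            }
        }
        // Test phase: the first candidate consistent with every example wins.
        for (Candidate cand : candidates) {
            boolean ok = true;
            for (int[] ex : examples) {
                if (cand.f().applyAsInt(ex[0]) != ex[1]) { ok = false; break; }
            }
            if (ok) { System.out.println("Induced: f(x) = " + cand.name()); return; }
        }
        System.out.println("No consistent candidate in the hypothesis space.");
    }
}
```

The search bias here is simply enumeration order; the language bias is the grammar, which cannot express recursion, so real inductive programming systems work over far richer program spaces.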
In the service business, event correlation systems are used to predict imminent system states and thereby avoid plant failures. From a strategic perspective, this can contribute to higher plant availability and better plannability. Companies that want to profit from this competitive advantage, however, frequently face two fundamental problems: the complexity of the service grows with the number of correlations, and at the same time cost transparency for the overall process is lost. This report shows why these two problems arise and how they can largely be avoided.
In recent years peer-to-peer networks have developed from unstructured to structured overlay networks. Structured peer-to-peer overlay networks bound the number of messages required for querying and publishing content as a function of the number of peers, as opposed to unstructured networks, which usually flood the network with messages. In unstructured networks, messages are given a bounded lifetime to prevent them from flooding the network indefinitely, which may cause some published content not to be found. Structured peer-to-peer networks, in contrast, guarantee that all published content can be found. One well-known structured overlay network is the Chord distributed hash table. An implementation of Chord is available in C, but not in Java, which is widely and often used in computer science education. This manual describes an open source implementation of Chord in Java, called Open Chord, and how it can be used in Java-based applications. It also provides a console-based experimentation environment for exploring the functionality of Chord.
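A minimal sketch of the principle behind Chord (not Open Chord's actual API): node and key identifiers live on a ring modulo 2^m, and each key is stored at its successor, the first node whose identifier follows the key clockwise. Open Chord adds finger tables for O(log n) routing, failure handling, and a service interface on top of this idea; the node identifiers and keys below are invented, and the hash is a stand-in for SHA-1.

```java
import java.util.*;

// Sketch of the core Chord idea: identifiers on a ring mod 2^m, each key
// stored at its successor node. Real Chord (and Open Chord) route lookups
// via finger tables; here we simply scan a sorted set of node identifiers.
public class ChordRingSketch {
    static final int M = 8;                  // identifier space: 2^8 ids
    static final int RING = 1 << M;

    static int successor(TreeSet<Integer> nodes, int keyId) {
        Integer node = nodes.ceiling(keyId); // first node id >= key id
        return (node != null) ? node : nodes.first(); // wrap around the ring
    }

    public static void main(String[] args) {
        TreeSet<Integer> nodes = new TreeSet<>(List.of(10, 60, 120, 200));
        for (String key : List.of("song.mp3", "thesis.pdf")) {
            int keyId = Math.floorMod(key.hashCode(), RING); // stand-in for SHA-1
            System.out.println(key + " (id " + keyId + ") -> node "
                + successor(nodes, keyId));
        }
    }
}
```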
The way business processes are organised heavily influences the flexibility and the expenses of enterprises. The capability to address changing market needs in a timely manner and to offer appropriate pricing is indispensable in a world of internationalisation and growing competition. Optimising processes that cross enterprise boundaries is potentially a key success factor in achieving this goal, but it requires the information systems of the participating enterprises to be consistently integrated. This gives rise to some challenging tasks. The personnel involved in building up business collaborations come from different enterprises, with different business vocabularies and backgrounds, which calls for extensive communication support. The lack of a central technical infrastructure, typically prohibited by business politics, often calls for a distributed and computer-aided collaboration structure, whose resulting complexity must be managed. At the same time, robustness is an important factor in building business collaborations, as these may exchange goods of considerable value. This technical report proposes a two-step modelling approach that separates business logic, modelled in the so-called centralised perspective (CP), from its distributed implementation, modelled in the so-called distributed perspective (DP). The separation of these perspectives enables business people to concentrate on business issues and to solve communication problems in the CP, whereas technical staff can concentrate on distribution issues in the DP. The use of stringent modelling rules is advised in order to provide the basis for formal analysis techniques as one means of achieving robustness. Taking the choreography of RosettaNet Partner Interface Processes (PIPs) as the subject of our analysis, UML activity diagrams for modelling the CP and WS-BPEL for modelling the DP are described as enabling techniques for implementing the proposed two-step modelling approach. Further, model checking is applied to validate the CP and DP models in order to detect errors in early design phases. As the adequacy of model checking tools highly depends on the detailed modelling techniques as well as on the properties to be checked, a major part of our discussion covers the relevant properties and requirements for a model checker.
Despite the popularity of BPEL engines for orchestrating complex and executable processes, there are still only a few approaches available that help find the most appropriate engine for individual requirements.
One of the most crucial factors for such a middleware product in industry is its performance characteristics.
Multiple studies in industry and academia test the performance of BPEL engines, differing in focus and method.
We aim to compare the methods used in these approaches and to provide guidance for further research in this area.
Based on related work in the field of performance testing, we created a process-engine-specific comparison framework, which we used to evaluate and classify nine different approaches identified through a systematic literature survey.
From the results of this status quo analysis, we derive directions for further research in this area.
Process-oriented data warehouse (DWH) systems provide, in contrast to classical DWH systems, not only decision-support data on the results of business processes but also data on their execution. They target two main scenarios: the first scenario aims at providing multidimensional, process-related data that can support the design of processes. The second scenario aims at data provisioning and decision making with low latency; it targets control actions in running process instances. To support both scenarios, this contribution proposes a component-based architectural concept for process-oriented OLTP and OLAP application systems. Besides realizing the functions of a process-oriented DWH system, the architectural concept also covers their integration with the functions of operational subsystems as well as functions for automated decision making.
Further requirements addressed by the architectural concept are the timely and demand-driven provision of information to heterogeneous user groups as well as flexible adaptability to changes in business processes.