Workflow Management Systems (WfMSs) are a type of middleware that enables the execution of automated business processes. Users rely on WfMSs to construct flexible and easily maintainable software systems. Significant effort has been invested into standardising languages for business process execution, with standards such as the Web Services Business Process Execution Language 2.0 or the Business Process Model and Notation 2.0. Standardisation aims at avoiding vendor lock-in and enabling WfMS users to compare different systems. The reality is that, despite standardisation efforts, independent research initiatives show that objectively comparing WfMSs is still challenging. As a result, WfMS users are likely to discover unfulfilled expectations while evaluating and using these systems. In this work, we discuss the findings of two research initiatives dealing with WfMS benchmarking, presenting unfulfilled expectations and lessons learned concerning WfMSs' usability, reliability, and portability. Our goal is to provide advice for practitioners implementing or planning to use WfMSs.
The Business Process Model and Notation 2.0 is nowadays the de facto standard for process and workflow modeling. It is supported by many modeling tools and engines that are able to consume and execute processes modeled in the native BPMN 2.0 syntax. Despite its popularity, there are still issues and drawbacks regarding the standard compliance of BPMN 2.0 process models. This paper elaborates on reasons for such compliance problems and describes how these issues can be revealed by performing automated checks. Finally, an analysis of a set of process models shows that standard compliance is still an issue.
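As a first, purely syntactic layer of such automated checks, a process model can be validated against the XML schemas that the OMG publishes alongside the BPMN 2.0 specification. The following minimal sketch uses Python with lxml; the file names are placeholders, and it assumes the official BPMN20.xsd (plus the schemas it imports) is available locally:

    from lxml import etree

    def check_schema_compliance(model_path: str, xsd_path: str) -> list:
        """Validate a BPMN 2.0 model against the official XML schema.
        Returns violation messages; an empty list means the model
        passed this (purely syntactic) check."""
        schema = etree.XMLSchema(etree.parse(xsd_path))
        model = etree.parse(model_path)
        if schema.validate(model):
            return []
        return [f"line {e.line}: {e.message}" for e in schema.error_log]

    if __name__ == "__main__":
        # BPMN20.xsd must sit next to the Semantic/DI schemas it imports.
        for violation in check_schema_compliance("process.bpmn", "BPMN20.xsd"):
            print(violation)

Schema validation only catches a fraction of the compliance issues discussed here; constraints on element semantics still require dedicated checks.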
Today, process languages are frequently used for implementing service-oriented systems and a variety of specifications for this task exist. These specifications strive for the portability of processes among different runtime environments, i.e., process engines. However, direct portability, especially of executable processes, is seldom achieved. If processes cannot be ported directly among engines, an option is to adapt them. Such an adaptation is nontrivial and hence automated support is desirable. A first step in this direction is the quantification of the design-time adaptability of a process. This quantification is the goal of this paper. We formally define software metrics for measuring the design-time adaptability of processes and validate them theoretically with respect to measurement theory and construct validity using two validation frameworks. Moreover, we implement the metrics computation for Business Process Model and Notation (BPMN) processes and demonstrate their practical applicability with an evaluation of a large set of open source processes.
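The formal metric definitions are not reproduced in this abstract, but the general idea of quantifying design-time adaptability can be illustrated: score each BPMN element by how many alternative constructs could express the same behaviour and aggregate over the model. The substitution table in this sketch is entirely hypothetical and merely stands in for a properly validated metric:

    import xml.etree.ElementTree as ET

    BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

    # Hypothetical substitution counts: how many alternative BPMN
    # constructs could replace each element type. The actual metrics
    # in the paper are defined and validated differently.
    ALTERNATIVES = {
        "scriptTask": 3,        # e.g. serviceTask, sendTask, userTask
        "serviceTask": 2,
        "exclusiveGateway": 2,  # e.g. event-based gateway, conditions
        "parallelGateway": 1,
    }

    def adaptability_score(model_path: str) -> float:
        """Mean per-element adaptability of a BPMN process (0 = rigid)."""
        root = ET.parse(model_path).getroot()
        scores = [ALTERNATIVES[e.tag.removeprefix(BPMN_NS)]
                  for e in root.iter()
                  if e.tag.removeprefix(BPMN_NS) in ALTERNATIVES]
        return sum(scores) / len(scores) if scores else 0.0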
Business process management and automation have been the focus of intense research for a long time. Today, a plethora of process languages for specifying and implementing process models has evolved. Examples of such languages are established international standards, such as the Web Services Business Process Execution Language 2.0 or, more recently, the Business Process Model and Notation 2.0. Implementations of these standards that are able to execute models, so-called process engines, differ in the quality of service they provide, e.g., in performance or usability, but also in the degree to which they actually implement a given standard. Selecting the “best” engine for a particular usage scenario is hard, as none of the existing process standards features an objective certification process to assess the quality of its implementations. To fill this gap, we present our work on process engine benchmarking. We discuss what has been achieved so far and point out future directions that deserve further investigation.
Platform as a Service is the major productivity enabler in the cloud computing stack. By providing managed and highly automated application environments, it enhances developer productivity and reduces operations and maintenance effort. The market, however, is fast-changing, and offerings differ conceptually as well as in their supported technological ecosystems. Provider selection is therefore an important but currently poorly supported step for companies trying to benefit from the technology. Given the diversity of service offerings and the absence of applied standards, this is a tedious task, especially when application portability must be ensured. In this paper, we present a multi-criteria selection approach for cloud platforms based on a field-tested ontology and a comprehensive data set. The methodology is enhanced by semantic algorithms and mappings to reduce hidden query and data biases. This allows not only the exact matching of requirements but also the evaluation of possible alternatives that can be adapted to fit the defined requirements. We validate our approach by contrasting real user queries with the results of our semantically enhanced algorithms.
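The ontology and the semantic algorithms themselves are beyond a short example, but the core of multi-criteria matching with alternatives can be sketched as a capability comparison; the platform names and capability labels below are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Platform:
        name: str
        capabilities: set

    def match(platforms, required: set, tolerance: int = 0):
        """Exact and near matches for a set of required capabilities.
        tolerance=0 yields only exact matches; higher values also
        return platforms missing up to that many requirements,
        together with the gap that would have to be adapted."""
        results = []
        for p in platforms:
            missing = required - p.capabilities
            if len(missing) <= tolerance:
                results.append((p.name, sorted(missing)))
        return sorted(results, key=lambda r: len(r[1]))  # closest first

    platforms = [
        Platform("alpha-paas", {"java", "postgresql", "auto-scaling"}),
        Platform("beta-paas", {"java", "mysql", "auto-scaling", "eu-region"}),
    ]
    print(match(platforms, {"java", "postgresql", "eu-region"}, tolerance=1))

A semantic layer, as described in the paper, would additionally map synonymous capability labels onto shared ontology concepts before the comparison, so that equivalent requirements phrased differently still match.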
From its early stages, cloud computing has evolved from being a principal source of computing resources to a fully fledged alternative for rapid application deployment. The Platform as a Service model in particular facilitates the hosting of scalable applications in the cloud by providing managed and highly automated application environments. Although most offerings are conceptually comparable to each other, the interfaces for application deployment and management vary greatly between vendors. Despite providing similar functionalities, technically different workflows and commands provoke vendor lock-in and hinder portability as well as interoperability. To that end, we present a unified interface for application deployment and management among cloud platforms. We validate our proposal with a reference implementation targeting four leading cloud platforms. The results show the feasibility of our approach and promote the possibility of portable DevOps scenarios in PaaS environments.
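Conceptually, such a unified interface can be pictured as a small vendor-neutral facade with one adapter per platform; the method names and the stub adapter in this sketch are assumptions, not the interface proposed in the paper:

    from abc import ABC, abstractmethod

    class PaasClient(ABC):
        """Vendor-neutral deployment life cycle; each platform gets a
        thin adapter mapping these calls onto its own API or CLI."""

        @abstractmethod
        def deploy(self, app: str, artifact_dir: str) -> None: ...

        @abstractmethod
        def scale(self, app: str, instances: int) -> None: ...

        @abstractmethod
        def delete(self, app: str) -> None: ...

    class StubAdapter(PaasClient):
        """Placeholder; a real adapter would call the vendor's client
        library or shell out to its CLI."""
        def deploy(self, app, artifact_dir):
            print(f"deploying {artifact_dir} as {app}")
        def scale(self, app, instances):
            print(f"scaling {app} to {instances} instances")
        def delete(self, app):
            print(f"deleting {app}")

    def migrate(src: PaasClient, dst: PaasClient, app: str, artifact_dir: str):
        # Portability in action: one call sequence, any pair of adapters.
        dst.deploy(app, artifact_dir)
        src.delete(app)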
Over the last years, the utilization of cloud resources has been steadily rising and an increasing number of enterprises are moving applications to the cloud. A leading trend is the adoption of Platform as a Service to support rapid application deployment. By providing a managed environment, cloud platforms take away much of the complex configuration effort required to build scalable applications. However, application migrations to and between clouds require development effort and open up new risks of vendor lock-in. This is problematic because frequent migrations may be necessary in the dynamic and fast-changing cloud market. So far, the effort of application migration in PaaS environments and the typical issues experienced in this task are poorly understood. To improve this situation, we present a cloud-to-cloud migration of a real-world application to seven representative cloud platforms. In this case study, we analyze the feasibility of the migrations in terms of portability and the effort they require. We present a Docker-based deployment system that enables isolated and reproducible measurements of deployments to platform vendors, thus allowing the comparison of platforms for a particular application. Using this system, the study identifies key problems during migrations and quantifies the differences between platforms with distinctive metrics.
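The measurement idea behind such a Docker-based system can be reduced to a simple pattern: execute each vendor-specific deployment inside a fresh, throwaway container and record its wall-clock time, so that runs are isolated and repeatable. A minimal sketch with a hypothetical image and deployment script:

    import subprocess
    import time

    def timed_deployment(image: str, deploy_cmd: list) -> float:
        """Run one deployment in a throwaway container and return its
        duration in seconds; a fresh container per run keeps the
        measurements reproducible across vendors."""
        start = time.monotonic()
        subprocess.run(["docker", "run", "--rm", image, *deploy_cmd],
                       check=True)
        return time.monotonic() - start

    # Image and script are placeholders; a real harness would bake the
    # application sources and each vendor's CLI into the image.
    duration = timed_deployment("paas-bench:latest", ["./deploy-to-vendor.sh"])
    print(f"deployment took {duration:.1f}s")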
Today, process-aware systems are ubiquitous. They are built by leveraging process languages for both business and implementation perspectives. In the typical context of a Web Services-based Service-oriented Architecture, the obvious choice for implementing service orchestrations is still the Business Process Execution Language (BPEL). For BPEL, a variety of open source and commercial engines have emerged. Although the BPEL standard defines a set of static analysis rules that a standard-conformant engine should check prior to deployment, previous work has shown that most engines are not capable of detecting all violations of these constraints, resulting in costly runtime errors later on. In this paper, we aim to improve the static analysis conformance of BPEL engines. We implement the tool BPELlint, which validates 71 static analysis rules of the BPEL specification, show that the tool can be easily integrated into the deployment process of existing engines, and evaluate its performance to measure the effect on deployment time. The results demonstrate that BPELlint can improve the static analysis conformance of BPEL engines with an acceptable performance overhead.
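Integrating a validator like BPELlint into a deployment process amounts to a pre-deployment gate: run the static analysis first and hand the process to the engine only if no rule is violated. The sketch below assumes a validator that exits non-zero on violations; both command lines are placeholders, not the actual BPELlint or engine invocations:

    import subprocess
    import sys

    def deploy_with_validation(process_file, validator_cmd, deploy_cmd):
        """Gate deployment on static analysis: deploy only if the
        validator reports no rule violations."""
        result = subprocess.run([*validator_cmd, process_file],
                                capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"static analysis failed:\n{result.stdout}{result.stderr}")
        subprocess.run([*deploy_cmd, process_file], check=True)

    # Placeholder commands; substitute the real validator invocation
    # and the target engine's deployment command.
    deploy_with_validation("order.bpel",
                           ["java", "-jar", "bpellint.jar"],
                           ["./engine-deploy.sh"])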
Service-oriented systems are increasingly implemented in a process-based fashion. Multiple languages for building process-based systems are available today, but the Business Process Model and Notation (BPMN) is becoming ubiquitous. With BPMN 2.0, released in 2011, execution semantics were introduced, supporting the definition of executable processes. Nowadays, more and more process engines directly support the execution of BPMN processes. However, the BPMN specification is lengthy and complex. As there are no official tests and no certification authority, it is very likely that engines a) implement only a subset of the language features and b) implement language features differently. In other words, we suspect that engines do not conform to the standard, despite the fact that they claim support for it. This prevents the porting of processes between different BPMN vendors, which is a stated goal of the language. In this paper, we investigate the standard conformance of open source BPMN engines to provide a clear picture of the current state of BPMN implementations. We develop a testing approach that allows us to build fully BPMN-compliant tests and automatically execute these tests on different engines. The results demonstrate that state-of-the-art BPMN engines support only a subset of the language. Moreover, they indicate that porting BPMN processes is only feasible when using basic language constructs.
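The testing approach boils down to a conformance loop per engine: deploy a BPMN-compliant test process, start an instance, and compare the observed execution trace with the trace the specification prescribes. The engine adapter interface in this sketch (deploy/start/trace) is an assumption standing in for the actual tooling:

    from dataclasses import dataclass, field

    @dataclass
    class ConformanceTest:
        name: str
        model: str                      # path to a BPMN-compliant process
        expected_trace: list = field(default_factory=list)

    def run_suite(engine, tests):
        """Return the names of all tests whose observed execution trace
        matches the one the specification prescribes."""
        supported = []
        for t in tests:
            engine.deploy(t.model)
            instance = engine.start(t.name)
            if engine.trace(instance) == t.expected_trace:
                supported.append(t.name)
        return supported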
In 2007, OASIS finalized their Business Process Execution Language 2.0 (BPEL) specification, which defines an XML-based language for building orchestrations of Web Services. As the validation of BPEL processes against the official BPEL XML schema leaves room for a plethora of static errors, the specification contains 94 static analysis rules to cover all static errors. According to the specification, any violations of these rules are to be checked by a standard-conformant engine at deployment time. When a violation in a BPEL process is not detected during deployment, the error remains unnoticed until runtime, making it expensive to find and fix. In this work, we investigate whether mature BPEL engines that claim standard conformance implement these static analysis rules. To answer this question, we formalize the rules and derive test cases from these formalizations to evaluate the degree of support for static analysis in six open source BPEL engines using the BPEL Engine Test System (betsy). In addition, we propose a method to obtain more accurate static analysis conformance results by taking the feature conformance of engines into account, excluding false positives that the classic approach reports. The results reveal that support for static analysis in these engines varies greatly, ranging from nonexistent to full support. Furthermore, our proposed method outperforms the classic one in terms of accuracy.
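The feature-aware method can be illustrated with a small calculation: rules that depend on language features an engine does not implement are removed from the denominator before scoring, so the engine is no longer penalised for violations it could never be asked to detect. The rule IDs and the rule-to-feature mapping below are illustrative only:

    def conformance(detected: set, all_rules: set,
                    rule_features: dict, supported_features: set) -> float:
        """Share of detected violations among the rules an engine can
        actually trigger. The classic score divides by all rules and
        thereby counts unsupported-feature rules as false positives."""
        applicable = {r for r in all_rules
                      if rule_features.get(r, set()) <= supported_features}
        if not applicable:
            return 0.0
        return len(detected & applicable) / len(applicable)

    # Illustrative data: the second rule is assumed to concern a
    # feature (forEach) this engine does not support, so it drops out
    # of the denominator instead of counting as a false positive.
    score = conformance(detected={"SA00001"},
                        all_rules={"SA00001", "SA00070"},
                        rule_features={"SA00070": {"forEach"}},
                        supported_features={"invoke", "receive", "reply"})
    print(f"adjusted conformance: {score:.0%}")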