JBoss.org Community Documentation

jBPM Documentation

Version 6.4.0.CR1


I. Getting Started
1. Overview
1.1. What is jBPM?
1.2. Overview
1.3. Core Engine
1.4. Process Designer
1.5. Data Modeler
1.6. Form Modeler
1.7. Process Instance and Task Management
1.8. Business Activity Monitoring
1.9. Workbench
1.10. Eclipse Developer Tools
2. Getting Started
2.1. Downloads
2.2. Getting Started
2.3. Community
2.4. Sources
2.4.1. License
2.4.2. Source code
2.4.3. Building from source
2.5. Getting Involved
2.5.1. Sign up to jboss.org
2.5.2. Sign the Contributor Agreement
2.5.3. Submitting issues via JIRA
2.5.4. Fork GitHub
2.5.5. Writing Tests
2.5.6. Commit with Correct Conventions
2.5.7. Submit Pull Requests
2.6. What to do if I encounter problems or have questions?
3. jBPM Installer
3.1. Prerequisites
3.2. Downloading the Installer
3.3. Demo Setup
3.4. 10-Minute Tutorial using the Workbench
3.5. 10-Minute Tutorial using Eclipse
3.6. Configuration
3.6.1. Playgrounds
3.6.2. Workbench Authentication
3.6.3. Using your own database with the jBPM installer
3.6.4. jBPM database schema scripts (DDL scripts)
3.6.5. jBPM installer script
3.7. Frequently Asked Questions
4. Examples
4.1. Introduction
4.2. Human Resources Example
4.2.1. The KIE Project: human-resources
4.2.2. Building the Human Resources Example
4.2.3. Create a new Process Instance
4.3. Examples zip
II. jBPM Core
5. Core Engine API
5.1. Overview
5.2. KieBase
5.3. KieSession
5.3.1. ProcessRuntime
5.3.2. Event Listeners
5.3.3. Correlation Keys
5.3.4. Threads
5.4. RuntimeManager
5.4.1. Overview
5.4.2. Strategies
5.4.3. Usage
5.4.4. Configuration
5.5. Services
5.5.1. Deployment Service
5.5.2. Definition Service
5.5.3. Process Service
5.5.4. Runtime Data Service
5.5.5. User Task Service
5.5.6. Working with deployments
5.6. Configuration
6. Processes
6.1. What is BPMN 2.0
6.2. Process
6.2.1. Creating a process
6.3. Activities
6.3.1. Script task
6.3.2. Service task
6.3.3. User task
6.3.4. Reusable sub-process
6.3.5. Business rule task
6.3.6. Embedded sub-process
6.3.7. Multi-instance sub-process
6.4. Events
6.4.1. Start event
6.4.2. End events
6.4.3. Intermediate events
6.5. Gateways
6.5.1. Diverging gateway
6.5.2. Converging gateway
6.6. Others
6.6.1. Variables
6.6.2. Scripts
6.6.3. Constraints
6.6.4. Timers
6.7. Process Fluent API
6.7.1. Example
6.8. Testing
6.8.1. Unit testing
7. Human Tasks
7.1. Introduction
7.2. Using User Tasks in our Processes
7.2.1. Swimlanes
7.3. Data Mappings
7.4. Task Lifecycle
7.5. Task Permissions
7.5.1. Task Permissions Matrix
7.6. Task Service and The Process Engine
7.7. Task Service API
7.8. Interacting with the Task Service
8. Persistence and Transactions
8.1. Process Instance State
8.1.1. Runtime State
8.2. Audit Log
8.2.1. The jBPM Audit data model
8.2.2. Storing Process Events in a Database
8.2.3. Storing Process Events in a JMS queue for further processing
8.2.4. Variables auditing
8.3. Transactions
8.3.1. Container managed transactions
8.4. Configuration
8.4.1. Adding dependencies
8.4.2. Manually configuring the engine to use persistence
8.4.3. Configuring the engine to use persistence using JBPMHelper - for tests only
III. Workbench
9. Workbench
9.1. Installation
9.1.1. War installation
9.1.2. Workbench data
9.1.3. System properties
9.1.4. Trouble shooting
9.2. Quick Start
9.2.1. Add repository
9.2.2. Add project
9.2.3. Define Data Model
9.2.4. Define Rule
9.2.5. Build and Deploy
9.3. Administration
9.3.1. Administration overview
9.3.2. Organizational unit
9.3.3. Repositories
9.4. Configuration
9.4.1. User management
9.4.2. Roles
9.4.3. Restricting access to repositories
9.4.4. Command line config tool
9.5. Introduction
9.5.1. Log in and log out
9.5.2. Home screen
9.5.3. Workbench concepts
9.5.4. Initial layout
9.6. Changing the layout
9.6.1. Resizing
9.6.2. Repositioning
9.7. Authoring
9.7.1. Artifact Repository
9.7.2. Asset Editor
9.7.3. Tags Editor
9.7.4. Project Explorer
9.7.5. Project Editor
9.7.6. Validation
9.7.7. Data Modeller
9.7.8. Data Sets
9.8. Embedding Workbench In Your Application
9.9. Asset Management
9.9.1. Asset Management Overview
9.9.2. Managed vs Unmanaged Repositories
9.9.3. Asset Management Processes
9.9.4. Usage Flow
9.9.5. Repository Structure
9.9.6. Managed Repositories Operations
9.9.7. Remote APIs
10. Workbench Integration
10.1. REST
10.1.1. Job calls
10.1.2. Repository calls
10.1.3. Organizational unit calls
10.1.4. Maven calls
10.1.5. REST summary
11. Workbench High Availability
11.1.
11.1.1. VFS clustering
11.1.2. jBPM clustering
12. Designer
12.1. Designer UI Explained
12.2. Getting started with Modelling
12.3. Designer Toolbar
13. Forms
13.1. Configure process and human tasks
13.2. Generate forms from task definitions
13.3. Edit forms
13.3.1. Form generated description
13.3.2. Customizing form
13.3.3. Field types
13.4. Document attachments
13.4.1. Process and forms configuration
13.4.2. Marshalling strategy and deployment configuration
13.5. Using forms on client applications
13.5.1. What does the API provide?
13.5.2. Sample usage
14. Runtime Management
14.1. Deployments
14.1.1. Deployment descriptors
14.2. Process Deployments
14.3. Jobs
15. Process and Task Management
15.1. Process Management
15.1.1. Process Definitions
15.1.2. Process Instances
15.2. Tasks
15.2.1. Task List
15.2.2. New Task (Ad-Hoc Task)
16. Business Activity Monitoring
16.1. Overview
16.2. Business Dashboards
16.3. Process Dashboard
16.3.1. Task Dashboard
17. Remote API
17.1. Remote Java API
17.1.1. Remote REST Java API Client Configuration
17.1.2. Remote JMS Java API Client Configuration
17.1.3. Remote CommandWebService Java API Client Configuration
17.1.4. Supported methods
17.2. REST
17.2.1. REST permissions
17.2.2. Runtime calls
17.2.3. Task calls
17.2.4. Deployment Calls
17.2.5. Deployment call details
17.2.6. Execute calls
17.2.7. REST summary
17.3. REST Query API
17.3.1. Query URL layout
17.3.2. Query Parameters
17.3.3. Parameter Table
17.3.4. Parameter examples
17.3.5. Query Output Format
17.4. JMS
17.4.1. JMS Queue setup
17.4.2. Using the remote Java API
17.4.3. Example JMS usage
17.5. Additional Information
17.5.1. REST Serialization: JAXB or JSON
17.5.2. Sending and receiving user class instances
17.5.3. Including the deployment id
17.5.4. REST Pagination
17.5.5. REST Map query parameters
17.5.6. REST Number query parameters
17.5.7. Runtime strategies
IV. Eclipse
18. jBPM Eclipse Plugin
18.1. jBPM Eclipse Plugin
18.1.1. Installation
18.1.2. jBPM Project Wizard
18.1.3. New BPMN2 Process Wizard
18.1.4. jBPM Runtime
18.1.5. jBPM Maven Project Wizard
18.1.6. Drools Eclipse plugin
18.2. Debugging
18.2.1. The Process Instances View
18.2.2. The Audit View
18.3. Synchronizing with Workbench Repositories
18.3.1. Importing a workbench repository
18.3.2. Committing changes to the workbench
18.3.3. Updating from the workbench
18.3.4. Working on individual projects
19. Eclipse BPMN 2.0 Modeler
19.1. Overview
19.2. Installation
19.3. Documentation
V. Integration
20. Integration
20.1. Maven
20.1.1. Maven artifacts as deployment units
20.1.2. Use Maven for dependency management
20.2. CDI
20.2.1. Overview
20.2.2. Configuring CDI integration
20.2.3. RuntimeManager as CDI bean
20.3. Spring
20.3.1. Direct use of Runtime Manager API
20.3.2. jBPM services with Spring
20.4. Ejb
20.4.1. Ejb services implementation
20.4.2. Local interface
20.4.3. Remote interface
20.5. OSGi
VI. Advanced Topics
21. Domain-specific Processes
21.1. Introduction
21.2. Overview
21.2.1. Work Item Definitions
21.2.2. Work Item Handlers
21.3. Example: Notifications
21.3.1. The Notification Work Item Definition
21.3.2. The NotificationWorkItemHandler
21.4. Service Repository
21.4.1. Public jBPM service repository
21.4.2. Setting up your own service repository
22. Exception Management
22.1. Overview
22.2. Introduction
22.3.
22.3.1. Technical Exceptions
22.3.2. Technical Exception Examples
22.4.
22.4.1. Business Exceptions
23. Flexible Processes
24. Concurrency and asynchronous execution
24.1. Concurrency
24.1.1. Engine execution
24.1.2. Multiple knowledge sessions and persistence
24.2. Asynchronous execution
24.2.1. Asynchronous handlers
24.2.2. jbpm executor
25. Release Notes
25.1. jBPM 6.4
25.1.1. New and Noteworthy in jBPM 6.4.0
25.1.2. New and Noteworthy in KIE Workbench 6.4.0
25.2. jBPM 6.3
25.2.1. New and Noteworthy in jBPM 6.3.0
25.2.2. New and Noteworthy in KIE Workbench 6.3.0
25.3. jBPM 6.2
25.3.1. New and Noteworthy in jBPM 6.2.0
25.3.2. New and Noteworthy in KIE Workbench 6.2.0
25.4. jBPM 6.1
25.4.1. New and Noteworthy in jBPM 6.1.0
25.4.2. New and Noteworthy in KIE Workbench 6.1.0
25.5. jBPM 6.0
25.5.1. New and Noteworthy in KIE API 6.0.0
25.5.2. New and Noteworthy in jBPM 6.0.0
25.5.3. New and Noteworthy in KIE Workbench 6.0.0
25.5.4. New and Noteworthy in Integration 6.0.0

Introduction and getting started with jBPM

jBPM is a flexible Business Process Management (BPM) Suite. It is light-weight, fully open-source (distributed under Apache license) and written in Java. It allows you to model, execute, and monitor business processes throughout their life cycle.

A business process allows you to model your business goals by describing the steps that need to be executed to achieve them; the order of those steps is depicted using a flow chart. This greatly improves the visibility and agility of your business logic. jBPM focuses on executable business processes, which are business processes that contain enough detail so they can actually be executed on a BPM engine. Executable business processes bridge the gap between business users and developers: they are higher-level and use domain-specific concepts that are understood by business users, but they can also be executed directly.

Business processes need to be supported throughout their entire life cycle: authoring, deployment, process management and task lists, and dashboards and reporting.

The core of jBPM is a light-weight, extensible workflow engine written in pure Java that allows you to execute business processes using the latest BPMN 2.0 specification. It can run in any Java environment, embedded in your application or as a service.

On top of the core engine, a lot of features and tools are offered to support business processes throughout their entire life cycle:

BPM creates the bridge between business analysts, developers and end users by offering process management features and tools in a way that both business users and developers like. Domain-specific nodes can be plugged into the palette, making the processes more easily understood by business users.

jBPM supports adaptive and dynamic processes that require flexibility to model complex, real-life situations that cannot easily be described using a rigid process. We bring control back to the end users by allowing them to control which parts of the process should be executed; this allows dynamic deviation from the process.

jBPM is not just an isolated process engine. Complex business logic can be modeled as a combination of business processes with business rules and complex event processing. jBPM can be combined with the Drools project to support one unified environment that integrates these paradigms where you model your business logic as a combination of processes, rules and events.


This figure gives an overview of the different components of the jBPM project.

  • The core engine is the heart of the project and allows you to execute business processes in a flexible manner. It is a pure Java component that you can choose to embed as part of your application or deploy as a service, connecting to it through the web-based UI or remote APIs.
    • An optional core service is the human task service that will take care of the human task life cycle if human actors participate in the process.
    • Another optional core service is runtime persistence; this will persist the state of all your process instances and log audit information about everything that is happening at runtime.
    • Applications can connect to the core engine through its Java API or as a set of CDI services, but also remotely through a REST and JMS API.
  • Web-based tools allow you to model, simulate and deploy your processes and other related artifacts (like data models, forms, rules, etc.):
    • The process designer allows business users to design and simulate business processes in a web-based environment.
    • The data modeler allows non-technical users to view, modify and create data models for use in your processes.
    • A web-based form modeler also allows you to create, generate or edit forms related to your processes (to start the process or to complete one of the user tasks).
    • Rule authoring allows you to specify different types of business rules (decision tables, guided rules, etc.) for combination with your processes.
    • All assets are stored and managed on the Guvnor repository (exposed through Git) and can be managed (versioning), built and deployed.
  • The web-based management console allows business users to manage their runtime (start new processes, inspect running instances, etc.), manage their task list, and perform Business Activity Monitoring (BAM) and view reports.
  • The Eclipse-based developer tools are an extension to the Eclipse IDE, targeted towards developers, and allow you to create business processes using drag and drop, test and debug your processes, etc.

Each of these components is described in more detail below.

The core jBPM engine is the heart of the project. It's a light-weight workflow engine that executes your business processes. It can be embedded as part of your application or deployed as a service (possibly on the cloud). Its most important features are the following:

The core engine can also be integrated with a few other (independent) core services:

All releases can be downloaded from SourceForge. Select the version you want to download and then select which artifact you want:

  • bin: all the jBPM binaries (JARs) and their dependencies

  • src: the sources of the core components

  • docs: the documentation

  • examples: some jBPM examples, can be imported into Eclipse

  • installer: the jbpm-installer, downloads and installs a demo setup of jBPM

  • installer-full: the jbpm-installer, downloads and installs a demo setup of jBPM; already contains a number of prepackaged dependencies (so they don't need to be downloaded separately)

Here are a number of useful links that are part of the jBPM community:

Please feel free to join us in our IRC channel at chat.freenode.net #jbpm. This is where most of the real-time discussion about the project takes place and where you can find most of the developers much of the time as well. Don't have an IRC client installed? Simply go to http://webchat.freenode.net/, input your desired nickname, and specify #jbpm. Then click login to join the fun.

We are often asked, "How do I get involved?" Luckily, the answer is simple: just write some code and submit it :) There are no hoops you have to jump through or secret handshakes. We request only a very minimal "overhead" to allow for scalable project development. Below we provide a general overview of the tools and "workflow" we request, along with some general advice.

If you contribute some good work, don't forget to blog about it :)

This script assumes you have Java JDK 1.6+ (set as JAVA_HOME), and Ant 1.7+ installed. If you don't, use the following links to download and install them:

Java: http://java.sun.com/javase/downloads/index.jsp

Ant: http://ant.apache.org/bindownload.cgi

Tip

To check whether Java and Ant are installed correctly, type the following commands inside a command prompt:

java -version

ant -version

This should return information about which version of Java and Ant you are currently using.

First of all, you need to download the installer and unzip it to your local file system. There are two versions:

  • full installer - which already contains a lot of the dependencies that are necessary during the installation

  • minimal installer - which only contains the installer and will download all dependencies

In general, it is probably best to download the full installer: jBPM-{version}-installer-full.zip

You can also find the latest snapshot release (minimal installer only) here:

https://hudson.jboss.org/jenkins/job/jBPM/lastSuccessfulBuild/artifact/jbpm-distribution/target/

The easiest way to get started is to simply run the installation script to install the demo setup. The demo install will set up all the web tooling (on top of WildFly) and the Eclipse tooling in a pre-configured setup. Go into the jbpm-installer folder where you unzipped the installer and (from a command prompt) run:

ant install.demo

This will:

Running this command could take a while (REALLY, not kidding: we are, for example, downloading an Eclipse installation specific to your operating system, even if you downloaded the full installer).

Once the demo setup has finished, you can start playing with the various components by starting the demo setup:

ant start.demo

This will:

Now wait until the process management console comes up:

http://localhost:8080/jbpm-console

Note

It could take a minute to start up the application server and web application. If the web page doesn't show up after a while, make sure you don't have a firewall blocking that port, or another application already using port 8080. You can always take a look at the server log jbpm-installer/wildfly-8.1.0.Final/standalone/log/server.log

Finally, if you also want to use the DashBuilder for reporting (which is implemented as a separate war), you can install it now as well:

ant install.dashboard.into.jboss

Once everything is started, you can start playing with the Eclipse and web tooling, as explained in the following sections.

If you only want to try out the web tooling and do not wish to download and install the Eclipse tooling, you can use these alternative commands:

ant install.demo.noeclipse
ant start.demo.noeclipse

Similarly, if you only want to try out the Eclipse tooling and do not wish to download and install the web tooling, you can use these alternative commands:

ant install.demo.eclipse
ant start.demo.eclipse

Now continue with the 10-minute tutorials. Once you're done playing and you want to shut down the demo setup, you can use:

ant stop.demo

If at any point in time you would like to start over with a clean demo setup (meaning all changes you made inside the web tooling and/or saved in the database will be lost), you can run the following command, after which you can run the installer again from scratch (note that this cannot be undone):

ant clean.demo

Open up the process management console:

http://localhost:8080/jbpm-console

Note

It could take a minute to start up the AS and web application. If the web page doesn't show up after a while, make sure you don't have a firewall blocking that port, or another application already using port 8080. You can always take a look at the server log jbpm-installer/jboss-as-7.1.1.Final/standalone/log/server.log

Log in, using krisv / krisv as username / password.

Using a prebuilt Evaluation example, the following screencast gives an overview of how to manage your process instances. It shows you:

  • How to build and deploy a process
  • How to start a new process instance
  • How to look up the current status of a running process instance
  • How to look up your tasks
  • How to complete a task
  • How to generate reports to monitor your process execution

Figure 3.1. 


The workbench supports the entire life cycle of your business processes: authoring, deployment, process management, tasks and dashboards.

  • The project authoring perspective allows you to look at existing repositories, where each project can contain business processes (but also business rules, data models, forms, etc.). By default, the workbench will download two sample playground repositories, containing examples to look at.
    • In this screencast, the Evaluation project inside the jbpm-playground repository is used.
  • The project explorer shows all available artifacts:
    • evaluation: business process describing the evaluation process as a sequence of tasks
    • evaluation-taskform: process form to start the evaluation process
    • PerformanceEvaluation-taskform: task form to perform the evaluation tasks
  • To make a process available for execution, you need to successfully build and deploy it first. To do so, open up the Project Editor (from the Tools menu) and click Build & Deploy.
  • To manage your process definitions and instances, click on the "Process Management" menu option at the top menu bar and select one of the available options depending on your interest:
    • Process Definitions - lists all available process definitions
    • Process Instances - lists all active process instances (completed and aborted instances can be shown as well by changing the filter criteria)
  • The Process Definitions panel allows you to start a new process instance by clicking on the "Play" button. The process form (as defined in the project) will be shown, where you need to fill in the necessary information to start the process. In this case, you need to fill in the user you want to start an evaluation for (in this case use "krisv") and a reason for the request, after which you can complete the form. Some details about the process instance that was just started will be shown in the process instance details panel. From there you can access additional details:
    • Process model - to visualize current state of the process
    • Process variables - to see current values of process variables
    The process instance that you just started first requires a self-evaluation by the user and waits until the user has completed this task.
  • To see the tasks that have been assigned to you, choose the "Tasks" menu option on the top bar and select "Task List" (you may need to click refresh to update your task view). The personal tasks table should show a "Performance Evaluation" task reserved for you. After starting the task, you can complete it, which will open up the task form related to this task. You can fill in the necessary data, then complete the form and close the window. After completing the task, you could check the "Process Instances" once more to check the progress of your process instance. You should be able to see that the process is now waiting for your HR manager and project manager to also perform an evaluation. You could log in as "john" / "john" and "mary" / "mary" to complete these tasks.
  • After starting and/or completing a few process instances and human tasks, you can generate a report of what has happened so far. Under "Dashboards", select "Process & Task Dashboard". This is a set of predefined charts that allow users to spot what is going on in the system. Charts can be fully customized as well, as explained in the Business Activity Monitoring chapter.

The following screencast gives an overview of how to use the Eclipse tooling. It shows you:

  • How to import and execute the evaluation sample project
    • Import the evaluation project (included in the jbpm-installer)
    • Open the Evaluation.bpmn process
    • Open the com.sample.ProcessTest Java class
    • Execute the ProcessTest class to run the process
  • How to create a new jBPM project (including sample process and JUnit test)

Figure 3.2. 


You can import the evaluation project - a sample included in the jbpm-installer - by selecting "File -> Import ...", select "Existing Projects into Workspace" and browse to the jbpm-installer/sample/evaluation folder and click "Finish". You can open up the evaluation process and the ProcessTest class. To execute the class, right-click on it and select "Run as ... - Java Application". The console should show how the process was started and how the different actors in the process completed the tasks assigned to them, to complete the process instance.

You could also create a new project using the jBPM project wizard. The sample projects contain a process and an associated Java file to start the process. Select "File - New ... - Project ..." and under the "jBPM" category, select "jBPM project" and click "Next". Give the project a name and click "Next". You can choose from a simple HelloWorld example or a slightly more advanced example using persistence and human tasks. If you select the latter and click Finish, you should see a new project containing a "sample.bpmn" process and a "com.sample.ProcessTest" JUnit test class. You can open the BPMN2 process by double-clicking it. To execute the process, right-click on ProcessTest.java and select "Run As - Java Application".

The following files define the persistence settings for the jbpm-installer demo:

Do the following:

If you decide to use a different database with this demo, you need to remember the following when going through the steps above:

The jBPM installer Ant script performs most of the work automatically and usually does not require additional attention. In case it does, here is a list of available targets that can be used to perform some of the steps manually.

Table 3.3. jBPM installer available targets

Target                                      Description
clean.db                                    cleans up the database used by the jBPM demo (applies only to the H2 database)
clean.demo                                  cleans up the entire installation so a new installation can be performed
clean.demo.noeclipse                        same as clean.demo but does not remove Eclipse
clean.eclipse                               removes Eclipse and its workspace
clean.generated.ddl                         removes generated DDL scripts, if any
clean.jboss                                 removes the application server with all its deployments
clean.jboss.repository                      removes repository content for the demo setup (Guvnor Maven repo, niogit, etc.)
download.dashboard                          downloads the jBPM dashboard component (BAM)
download.db.driver                          downloads the DB driver configured in build.properties
download.ddl.dependencies                   downloads all dependencies required to run the DDL script generation tool
download.droolsjbpm.eclipse                 downloads the Drools and jBPM Eclipse plugin
download.eclipse                            downloads the Eclipse distribution
download.jboss                              downloads the JBoss Application Server
download.jBPM.bin                           downloads the jBPM binary distribution (jBPM libs and their dependencies)
download.jBPM.console                       downloads the jBPM console for JBoss AS
install.dashboard.into.jboss                installs the jBPM dashboard into JBoss AS
install.db.files                            installs the DB driver as a JBoss module
install.demo                                installs the complete demo environment
install.demo.eclipse                        installs Eclipse with all jBPM plugins, no server installation
install.demo.noeclipse                      similar to install.demo but skips the Eclipse installation
install.dependencies                        installs custom libraries (such as work item handlers) into the jBPM console
install.droolsjbpm-eclipse.into.eclipse     installs the droolsjbpm Eclipse plugin into Eclipse
install.eclipse                             installs the Eclipse IDE
install.jboss                               installs JBoss AS
install.jBPM-console.into.jboss             installs the jBPM console application into JBoss AS

Some common issues are explained below.

Q: What if the installer complains it cannot download component X?

A: Are you connected to the Internet? Do you have a firewall turned on? Do you require a proxy? It is possible that one of the locations we're downloading the components from is temporarily offline. Try downloading the components manually (possibly from alternate locations) and put them in the jbpm-installer/lib folder.

Q: What if the installer complains it cannot extract / unzip a certain JAR/WAR/zip?

A: If your download failed while downloading a component, it is possible that the installer is trying to use an incomplete file. Try deleting the component in question from the jbpm-installer/lib folder and reinstall, so it will be downloaded again.

Q: What if I have been changing my installation (and it no longer works) and I want to start over again with a clean installation?

A: You can use ant clean.demo to remove all the installed components, so you end up with a fresh installation again.

Q: I sometimes see exceptions when trying to stop or restart certain services, what should I do?

A: If you see errors during shutdown, are you sure the services were still running? If you see exceptions during restart, are you sure the service you started earlier was successfully shutdown? Maybe try killing the services manually if necessary.

Q: Something seems to be going wrong when running Eclipse but I have no idea what. What can I do?

A: Always check the consoles for output like error messages or stack traces. You can also check the Eclipse Error Log for exceptions. Try adding an audit logger to your session to figure out what's happening at runtime, or try debugging your application.

Q: Something seems to be going wrong when running a web-based application like the jbpm-console. What can I do?

A: You can check the server log for possible exceptions: jbpm-installer/jboss-as-{version}/standalone/log/server.log (for JBoss AS7).

For all other questions, try contacting the jBPM community as described in the Getting Started chapter.

This chapter provides a walkthrough of a pre-made Red Hat JBoss BPM Suite example project. In this chapter, you will:

The project, together with many other example projects, can be imported from GitHub. To learn how to import projects from GitHub, see the JBoss BPM Suite Getting Started Guide.

By default, the web-based workbench installs two sample repositories that contain various sample projects to help you get started. This section shows different examples that can be found in the jbpm-playground repository (also available here: https://github.com/droolsjbpm/jbpm-playground). All these examples are high level and business oriented.

If you want to contribute to these examples, please get in touch with any member of the jBPM/Drools team.

The Human Resources example's use case can be described as follows: a company wants to hire new developers. In this process, three departments (Human Resources, IT, and Accounting) are involved. These departments are represented by three users: Katy, Jack, and John respectively.


Note that only four out of the six defined activities within the business process are User Tasks. User Tasks require human interaction. The other two tasks are Service Tasks, which are automated and connected to other systems.

Each instance of the process goes through the following steps:

  • The human resources team performs the initial interview with the candidate

  • The IT department team performs the technical interview

  • Based on the output from the previous two steps, the accounting team creates a job proposal

  • When the proposal has been drafted, it is automatically sent to the candidate via email

  • If the candidate accepts the proposal, a new meeting to sign the contract is scheduled

  • Finally, if the candidate accepts the proposal, the system posts a message about the new hire using the Twitter service connector

Note that Jack, John, and Katy may represent any employee within the company with the appropriate role assigned.

Using the jBPM Core Engine


This chapter introduces the API you need to load processes and execute them. For more detail on how to define the processes themselves, check out the chapter on BPMN 2.0.

To interact with the process engine (for example, to start a process), you need to set up a session. This session will be used to communicate with the process engine. A session needs to have a reference to a knowledge base, which contains a reference to all the relevant process definitions. This knowledge base is used to look up the process definitions whenever necessary. To create a session, you first need to create a knowledge base, load all the necessary process definitions (this can be from various sources, like from classpath, file system or process repository) and then instantiate a session.

Once you have set up a session, you can use it to start executing processes. Whenever a process is started, a new process instance is created (for that process definition) that maintains the state of that specific instance of the process.

For example, imagine you are writing an application to process sales orders. You could then define one or more process definitions that define how the order should be processed. When starting up your application, you first need to create a knowledge base that contains those process definitions. You can then create a session based on this knowledge base so that, whenever a new sales order comes in, a new process instance is started for that sales order. That process instance contains the state of the process for that specific sales request.

A knowledge base can be shared across sessions and usually is only created once, at the start of the application (creating a knowledge base can be rather heavy-weight, as it involves parsing and compiling the process definitions). Knowledge bases can be dynamically changed (so you can add or remove processes at runtime).

Sessions can be created based on a knowledge base and are used to execute processes and interact with the engine. You can create as many independent sessions as you need, and creating a session is considered relatively lightweight. How many sessions you create is up to you. In general, most simple cases start out with creating one session that is then called from various places in your application. You could decide to create multiple sessions if, for example, you want to have multiple independent processing units (for example, if you want all processes from one customer to be completely independent from processes for another customer, you could create an independent session for each customer), or if you need multiple sessions for scalability reasons. If you don't know what to do, simply start by having one knowledge base that contains all your process definitions and create one session that you then use to execute all your processes.

The jBPM project has a clear separation between the API the users should be interacting with and the actual implementation classes. The public API exposes most of the features we believe "normal" users can safely use and should remain rather stable across releases. Expert users can still access internal classes but should be aware that they should know what they are doing and that the internal API might still change in the future.

As explained above, the jBPM API should thus be used to (1) create a knowledge base that contains your process definitions, and to (2) create a session to start new process instances, signal existing ones, register listeners, etc.
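
Creating the knowledge base itself is not shown below; as a minimal sketch, one common way is to load it from the classpath using the KIE container (the knowledge base name "kbase" here is a hypothetical example that would have to match a name defined in your kmodule.xml):

  import org.kie.api.KieBase;
  import org.kie.api.KieServices;
  import org.kie.api.runtime.KieContainer;
  ...
  // load the knowledge base from a kjar available on the classpath
  KieServices ks = KieServices.Factory.get();
  KieContainer kContainer = ks.getKieClasspathContainer();
  // "kbase" must match a kbase name defined in kmodule.xml;
  // kContainer.getKieBase() without arguments returns the default knowledge base
  KieBase kbase = kContainer.getKieBase("kbase");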

Once you've loaded your knowledge base, you should create a session to interact with the engine. This session can then be used to start new processes, signal events, etc. The following code snippet shows how easy it is to create a session based on the previously created knowledge base, and to start a process (by id).


KieSession ksession = kbase.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");

The ProcessRuntime interface defines all the session methods for interacting with processes, as shown below.


  /**
	 * Start a new process instance.  The process (definition) that should
	 * be used is referenced by the given process id.
	 * 
	 * @param processId  The id of the process that should be started
	 * @return the ProcessInstance that represents the instance of the process that was started
	 */
    ProcessInstance startProcess(String processId);

    /**
	 * Start a new process instance.  The process (definition) that should
	 * be used is referenced by the given process id.  Parameters can be passed
	 * to the process instance (as name-value pairs), and these will be set
	 * as variables of the process instance. 
     * 
	 * @param processId  the id of the process that should be started
     * @param parameters  the process variables that should be set when starting the process instance 
	 * @return the ProcessInstance that represents the instance of the process that was started
     */
    ProcessInstance startProcess(String processId,
                                 Map<String, Object> parameters);

    /**
     * Signals the engine that an event has occurred. The type parameter defines
     * which type of event and the event parameter can contain additional information
     * related to the event.  All process instances that are listening to this type
     * of (external) event will be notified.  For performance reasons, this type of event
     * signaling should only be used if one process instance should be able to notify
     * other process instances. For internal events within one process instance, use the
     * signalEvent method that also includes the processInstanceId of the process instance
     * in question. 
     * 
     * @param type the type of event
     * @param event the data associated with this event
     */
    void signalEvent(String type,
                     Object event);

    /**
     * Signals the process instance that an event has occurred. The type parameter defines
     * which type of event and the event parameter can contain additional information
     * related to the event.  All node instances inside the given process instance that
     * are listening to this type of (internal) event will be notified.  Note that the event
     * will only be processed inside the given process instance.  All other process instances
     * waiting for this type of event will not be notified.
     * 
     * @param type the type of event
     * @param event the data associated with this event
     * @param processInstanceId the id of the process instance that should be signaled
     */
    void signalEvent(String type,
                     Object event,
                     long processInstanceId);

    /**
     * Returns a collection of currently active process instances.  Note that only process
     * instances that are currently loaded and active inside the engine will be returned.
     * When using persistence, it is likely not all running process instances will be loaded
     * as their state will be stored persistently.  It is recommended not to use this
     * method to collect information about the state of your process instances but to use
     * a history log for that purpose.
     * 
     * @return a collection of process instances currently active in the session
     */
    Collection<ProcessInstance> getProcessInstances();

    /**
     * Returns the process instance with the given id.  Note that only active process instances
     * will be returned.  If a process instance has been completed already, this method will return
     * null.
     * 
     * @param processInstanceId the id of the process instance
     * @return the process instance with the given id or null if it cannot be found
     */
    ProcessInstance getProcessInstance(long processInstanceId);

    /**
     * Aborts the process instance with the given id.  If the process instance has been completed
     * (or aborted), or the process instance cannot be found, this method will throw an
     * IllegalArgumentException.
     * 
     * @param processInstanceId the id of the process instance
     */
    void abortProcessInstance(long processInstanceId);

    /**
     * Returns the WorkItemManager related to this session.  This can be used to
     * register new WorkItemHandlers or to complete (or abort) WorkItems.
     * 
     * @return the WorkItemManager related to this session
     */
    WorkItemManager getWorkItemManager();

The session provides methods for registering and removing listeners. A ProcessEventListener can be used to listen to process-related events, like starting or completing a process, entering and leaving a node, etc. Below, the different methods of the ProcessEventListener interface are shown. An event object provides access to related information, like the process instance and node instance linked to the event. You can use this API to register your own event listeners.

public interface ProcessEventListener {

  void beforeProcessStarted( ProcessStartedEvent event );
  void afterProcessStarted( ProcessStartedEvent event );
  void beforeProcessCompleted( ProcessCompletedEvent event );
  void afterProcessCompleted( ProcessCompletedEvent event );
  void beforeNodeTriggered( ProcessNodeTriggeredEvent event );
  void afterNodeTriggered( ProcessNodeTriggeredEvent event );
  void beforeNodeLeft( ProcessNodeLeftEvent event );
  void afterNodeLeft( ProcessNodeLeftEvent event );
  void beforeVariableChanged(ProcessVariableChangedEvent event);
  void afterVariableChanged(ProcessVariableChangedEvent event);

}
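
For example, here is a minimal sketch of registering your own listener on a session; it uses DefaultProcessEventListener, which provides empty implementations of all the methods above, so only the callbacks of interest need to be overridden (the ksession is assumed to have been created as shown earlier):

  import org.kie.api.event.process.DefaultProcessEventListener;
  import org.kie.api.event.process.ProcessStartedEvent;
  ...
  // log every process start
  ksession.addEventListener(new DefaultProcessEventListener() {
      public void beforeProcessStarted(ProcessStartedEvent event) {
          System.out.println("Process started: " + event.getProcessInstance().getProcessId());
      }
  });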

A note about before and after events: these events typically act like a stack, which means that any events that occur as a direct result of the previous event will occur between the before and the after of that event. For example, if a subsequent node is triggered as a result of leaving a node, the node triggered events will occur in between the beforeNodeLeft and afterNodeLeft events of the node that is left (as the triggering of the second node is a direct result of leaving the first node). This makes it easier to derive cause-and-effect relationships between events. Similarly, all node triggered and node left events that are the direct result of starting a process will occur between the beforeProcessStarted and afterProcessStarted events. In general, if you just want to be notified when a particular event occurs, you should look at the before events only, as they occur immediately before the event actually occurs. If you only look at the after events, you might get the impression that the events are fired in the wrong order; because the after events are triggered as a stack, an after event will only fire once all events that were triggered as a result of the original event have already fired. After events should only be used if you want to make sure that all processing related to the event has ended (for example, when you want to be notified when the starting of a particular process instance has completed).

Also note that not all nodes always generate node triggered and/or node left events. Depending on the type of node, some nodes might only generate node left events, while others might only generate node triggered events. Catch intermediate events, for example, do not generate node triggered events (they only generate node left events, as they are not really triggered by another node but rather activated from outside). Similarly, throw intermediate events do not generate node left events (they only generate node triggered events, as they are not really left, since they have no outgoing connection).

jBPM out-of-the-box provides a listener that can be used to create an audit log (either to the console or a file on the file system). This audit log contains all the different events that occurred at runtime, so it's easy to figure out what happened. Note that these loggers should only be used for debugging purposes. The following logger implementations are supported by default:

KieServices lets you add a KieRuntimeLogger to your session, as shown below. When creating a console logger, the knowledge session for which the logger needs to be created must be passed as an argument. The file logger also requires the name of the log file to be created, and the threaded file logger requires the interval (in milliseconds) after which the events should be saved. You should always close the logger at the end of your application.


  import org.kie.api.KieServices;
  import org.kie.api.logger.KieRuntimeLogger;
  ...
  KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, "test");
  // add invocations to the process engine here,
  // e.g. ksession.startProcess(processId);
  ...
  logger.close();

The log file that is created by the file-based loggers contains an XML-based overview of all the events that occurred at runtime. It can be opened in Eclipse, using the Audit View in the Drools Eclipse plugin, where the events are visualized as a tree. Events that occur between the before and after event are shown as children of that event. The following screenshot shows a simple example, where a process is started, resulting in the activation of the Start node, an Action node and an End node, after which the process was completed.

A common requirement when working with processes is the ability to assign a given process instance some sort of business identifier that can later be referenced without knowing the actual (generated) id of the process instance. To provide such a capability, jBPM allows you to use a CorrelationKey that is composed of CorrelationProperties. A CorrelationKey can have a single property describing it (which is the most common case), or it can be represented as a multi-valued set of properties.

Correlation capabilities are provided as part of the CorrelationAwareProcessRuntime interface, which exposes the following methods:


      /**
      * Start a new process instance.  The process (definition) that should
      * be used is referenced by the given process id.  Parameters can be passed
      * to the process instance (as name-value pairs), and these will be set
      * as variables of the process instance.
      *
      * @param processId  the id of the process that should be started
      * @param correlationKey custom correlation key that can be used to identify process instance
      * @param parameters  the process variables that should be set when starting the process instance
      * @return the ProcessInstance that represents the instance of the process that was started
      */
      ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);

      /**
      * Creates a new process instance (but does not yet start it).  The process
      * (definition) that should be used is referenced by the given process id.
      * Parameters can be passed to the process instance (as name-value pairs),
      * and these will be set as variables of the process instance.  You should only
      * use this method if you need a reference to the process instance before actually
      * starting it.  Otherwise, use startProcess.
      *
      * @param processId  the id of the process that should be started
      * @param correlationKey custom correlation key that can be used to identify process instance
      * @param parameters  the process variables that should be set when creating the process instance
      * @return the ProcessInstance that represents the instance of the process that was created (but not yet started)
      */
      ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);

      /**
      * Returns the process instance with the given correlationKey.  Note that only active process instances
      * will be returned.  If a process instance has been completed already, this method will return
      * null.
      *
      * @param correlationKey the custom correlation key assigned when process instance was created
      * @return the process instance with the given id or null if it cannot be found
      */
      ProcessInstance getProcessInstance(CorrelationKey correlationKey);
    

Correlation is usually used with long-running processes and thus requires persistence to be enabled in order to permanently store correlation information.
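
As an illustration, here is a minimal sketch of starting a process with a business key and later retrieving the instance by that key. It assumes the ksession was created as shown earlier; casting the session to CorrelationAwareProcessRuntime is the usual way to access these methods, and "order-12345" is a hypothetical business identifier:

  import org.kie.internal.KieInternalServices;
  import org.kie.internal.process.CorrelationAwareProcessRuntime;
  import org.kie.internal.process.CorrelationKey;
  import org.kie.internal.process.CorrelationKeyFactory;
  ...
  // build a single-property correlation key from a business identifier
  CorrelationKeyFactory keyFactory = KieInternalServices.Factory.get().newCorrelationKeyFactory();
  CorrelationKey key = keyFactory.newCorrelationKey("order-12345");

  // start the process with the business key attached (no parameters passed here)
  ProcessInstance processInstance =
          ((CorrelationAwareProcessRuntime) ksession).startProcess("com.sample.MyProcess", key, null);

  // later, look up the same instance by its business key instead of its generated id
  ProcessInstance loaded =
          ((CorrelationAwareProcessRuntime) ksession).getProcessInstance(keyFactory.newCorrelationKey("order-12345"));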

In the following text, we will refer to two types of "multi-threading": logical and technical. Technical multi-threading is what happens when multiple threads or processes are started on a computer, for example by a Java or C program. Logical multi-threading is what we see in a BPM process after the process reaches a parallel gateway, for example. From a functional standpoint, the original process will then split into two processes that are executed in a parallel fashion.

Of course, the jBPM engine supports logical multi-threading: for example, processes that include a parallel gateway. We've chosen to implement logical multi-threading using one thread: a jBPM process that includes logical multi-threading will only be executed in one technical thread. The main reason for doing this is that multiple (technical) threads need to be able to communicate state information with each other if they are working on the same process. This requirement brings with it a number of complications. While it might seem that multi-threading would bring performance benefits, the extra logic needed to make sure the different threads work together well means that this is not guaranteed. There is also the extra overhead incurred because we need to avoid race conditions and deadlocks.

In general, the jBPM engine executes actions in serial. For example, when the engine encounters a script task in a process, it will synchronously execute that script and wait for it to complete before continuing execution. Similarly, if a process encounters a parallel gateway, it will sequentially trigger each of the outgoing branches, one after the other. This is possible since execution is almost always instantaneous, meaning that it is extremely fast and produces almost no overhead. As a result, the user will usually not even notice this. Action scripts in a process are likewise executed synchronously, and the engine will wait for them to finish before continuing the process: doing a Thread.sleep(...) as part of a script will not make the engine continue execution elsewhere but will block the engine thread during that period.

The same principle applies to service tasks. When a service task is reached in a process, the engine will also invoke the handler of this service synchronously. The engine will wait for the completeWorkItem(...) method to return before continuing execution. It is important that your service handler executes your service asynchronously if its execution is not instantaneous.

An example of this would be a service task that invokes an external service. Since the delay in invoking this service remotely and waiting for the results might be too long, it might be a good idea to invoke this service asynchronously. This means that the handler will only invoke the service and will notify the engine later when the results are available. In the meantime, the process engine can continue execution of the process.

Human tasks are a typical example of a service that needs to be invoked asynchronously, as we don't want the engine to wait until a human actor has responded to the request. The human task handler will only create a new task (on the task list of the assigned actor) when the human task node is triggered. The engine will then be able to continue execution on the rest of the process (if necessary) and the handler will notify the engine asynchronously when the user has completed the task.
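
To illustrate the recommended approach, here is a minimal sketch of an asynchronous handler: it returns control to the engine immediately and completes the work item from a separate thread once the (hypothetical) external service call has finished. Note that in a persistent setup, completing a work item from another thread may require additional care with respect to transactions:

  import org.kie.api.runtime.process.WorkItem;
  import org.kie.api.runtime.process.WorkItemHandler;
  import org.kie.api.runtime.process.WorkItemManager;

  public class AsyncServiceTaskHandler implements WorkItemHandler {

      public void executeWorkItem(final WorkItem workItem, final WorkItemManager manager) {
          // return to the engine immediately; do the (potentially slow) work on a separate thread
          new Thread(new Runnable() {
              public void run() {
                  // invoke the external service here (hypothetical placeholder),
                  // then notify the engine that the work item has completed
                  manager.completeWorkItem(workItem.getId(), null);
              }
          }).start();
      }

      public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
          // invoked when the work item is aborted before it was completed
      }
  }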

The RuntimeManager has been introduced to simplify and empower usage of the knowledge API, especially in the context of processes. It provides configurable strategies that control actual runtime execution (how KieSessions are provided); by default the following strategies are available: singleton, per request, and per process instance (each is described in detail below).

The RuntimeManager is primarily responsible for managing and delivering instances of RuntimeEngine to the caller. In turn, a RuntimeEngine encapsulates the two most important elements of the jBPM engine: the KieSession and the TaskService.

Both of these components are already configured to work with each other smoothly, without additional configuration from the end user. There is no longer any need to register a human task handler or to keep track of whether it's connected to the service or not.

public interface RuntimeManager {

	/**
	 * Returns <code>RuntimeEngine</code> instance that is fully initialized:
	 * <ul>
	 * 	<li>KieSession is created or loaded depending on the strategy</li>
	 * 	<li>TaskService is initialized and attached to ksession (via listener)</li>
	 * 	<li>WorkItemHandlers are initialized and registered on ksession</li>
	 * 	<li>EventListeners (process, agenda, working memory) are initialized and added to ksession</li>
	 * </ul>
	 * @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code>
	 * @return instance of the <code>RuntimeEngine</code>
	 */
    RuntimeEngine getRuntimeEngine(Context<?> context);
    
    /**
     * Unique identifier of the <code>RuntimeManager</code>
     * @return
     */
    String getIdentifier();
   
    /**
     * Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact.
     * This method should always be used to dispose <code>RuntimeEngine</code> that is not needed
     * anymore. <br/>
     * ksession.dispose() shall never be used with RuntimeManager as it will break the internal
     * mechanisms of the manager responsible for clear and efficient disposal.<br/>
     * Dispose is not needed if <code>RuntimeEngine</code> was obtained within active JTA transaction, 
     * this means that when getRuntimeEngine method was invoked during active JTA transaction then dispose of
     * the runtime engine will happen automatically on transaction completion.
     * @param runtime
     */
    void disposeRuntimeEngine(RuntimeEngine runtime);
    
    /**
     * Closes <code>RuntimeManager</code> and releases its resources. Shall always be called when
     * runtime manager is not needed any more. Otherwise it will still be active and operational.
     */
    void close();
    
}

The RuntimeEngine interface provides the most important methods to get access to the engine components:

public interface RuntimeEngine {

	/**
	 * Returns <code>KieSession</code> configured for this <code>RuntimeEngine</code>
	 * @return
	 */
    KieSession getKieSession();
    
    /**
	 * Returns <code>TaskService</code> configured for this <code>RuntimeEngine</code>
	 * @return
	 */
    TaskService getTaskService();   
}

The RuntimeManager will ensure that, regardless of the strategy, it provides the same capabilities when it comes to initialization and configuration of the RuntimeEngine. That means the KieSession and TaskService are always configured and ready to use, with work item handlers and event listeners already registered (as described in the getRuntimeEngine javadoc above).

On the other hand, the RuntimeManager also manages engine disposal, providing dedicated methods to dispose of a RuntimeEngine when it's no longer needed, releasing any resources it might have acquired.

Singleton strategy - instructs the RuntimeManager to maintain a single instance of RuntimeEngine (and in turn a single instance of KieSession and TaskService). Access to the RuntimeEngine is synchronized and therefore thread safe, although this comes with a performance penalty due to synchronization. This strategy is similar to what was available by default in jBPM version 5.x; it's considered the easiest strategy and is recommended to start with.

It has the following characteristics that are important to evaluate while considering it for a given scenario:

Per request strategy - instructs the RuntimeManager to provide a new instance of RuntimeEngine for every request. The RuntimeManager considers one or more invocations within a single transaction to be a single request. It must return the same instance of RuntimeEngine within a single transaction to ensure correct state, as otherwise an operation done in one call would not be visible in the other. This is a sort of "stateless" strategy that provides only request-scoped state; once the request is completed, the RuntimeEngine is permanently destroyed (KieSession information is removed from the database if persistence was used).

It has the following characteristics:

Per process instance strategy - instructs RuntimeManager to maintain a strict relationship between KieSession and ProcessInstance. That means the KieSession will be available as long as the ProcessInstance it belongs to is active. This strategy provides the most flexible approach for using advanced capabilities of the engine, such as rule evaluation in isolation (for the given process instance only), maximum performance and reduction of potential bottlenecks introduced by synchronization; at the same time it reduces the number of KieSessions to the actual number of process instances rather than the number of requests (in contrast to the per request strategy).

It has the following characteristics:
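To make the strategy selection concrete, here is a minimal sketch (assuming a RuntimeEnvironment named environment, built as shown in the next example, and, for the last call, an existing processInstanceId) of how each strategy is obtained from RuntimeManagerFactory and which context type it expects:

RuntimeManagerFactory factory = RuntimeManagerFactory.Factory.get();

// singleton - one shared KieSession/TaskService, synchronized access
RuntimeManager singleton = factory.newSingletonRuntimeManager(environment);
RuntimeEngine sEngine = singleton.getRuntimeEngine(EmptyContext.get());

// per request - a fresh RuntimeEngine for every request/transaction
RuntimeManager perRequest = factory.newPerRequestRuntimeManager(environment);
RuntimeEngine rEngine = perRequest.getRuntimeEngine(EmptyContext.get());

// per process instance - KieSession bound to the process instance lifecycle
RuntimeManager perProcessInstance = factory.newPerProcessInstanceRuntimeManager(environment);
// starting a brand new instance - no process instance id exists yet
RuntimeEngine pEngine = perProcessInstance.getRuntimeEngine(ProcessInstanceIdContext.get());
// continuing an existing instance - pass its id so the matching KieSession is restored
RuntimeEngine pEngine2 = perProcessInstance.getRuntimeEngine(ProcessInstanceIdContext.get(processInstanceId));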

Regular usage scenario for RuntimeManager is:

Here is how you can build RuntimeManager and get RuntimeEngine (that encapsulates KieSession and TaskService) from it:


    // first configure the environment that will be used by RuntimeManager
    RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
            .newDefaultInMemoryBuilder()
            .addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
            .get();

    // next create a RuntimeManager - in this case the singleton strategy is chosen
    RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);

    // then get a RuntimeEngine out of the manager - using an empty context as the singleton
    // does not keep track of runtime engines, since there is only one
    RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get());

    // get the KieSession from the runtime engine - already initialized with all the handlers,
    // listeners, etc. that were configured on the environment
    KieSession ksession = runtimeEngine.getKieSession();

    // add invocations to the process engine here,
    // e.g. ksession.startProcess(processId);

    // and lastly dispose the runtime engine
    manager.disposeRuntimeEngine(runtimeEngine);
  

This example shows the simplest (minimal) way of using RuntimeManager and RuntimeEngine, although it already illustrates a few quite valuable points:

The complexity of knowing when to create, dispose, register handlers, etc. is taken away from the end user and moved to the runtime manager, which knows when and how to perform such operations, but still allows fine grained control over this process via the comprehensive configuration options of the RuntimeEnvironment.


  public interface RuntimeEnvironment {

	/**
	 * Returns <code>KieBase</code> that shall be used by the manager
	 * @return
	 */
    KieBase getKieBase();
    
    /**
     * KieSession environment that shall be used to create instances of <code>KieSession</code>
     * @return
     */
    Environment getEnvironment();
    
    /**
     * KieSession configuration that shall be used to create instances of <code>KieSession</code>
     * @return
     */
    KieSessionConfiguration getConfiguration();
    
    /**
     * Indicates if persistence shall be used for the KieSession instances
     * @return
     */
    boolean usePersistence();
    
    /**
     * Delivers concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners
     * that shall be registered on instances of <code>KieSession</code>
     * @return
     */
    RegisterableItemsFactory getRegisterableItemsFactory();
    
    /**
     * Delivers concrete implementation of <code>UserGroupCallback</code> that shall be registered on instances 
     * of <code>TaskService</code> for managing users and groups.
     * @return
     */
    UserGroupCallback getUserGroupCallback();
    
    /**
     * Delivers custom class loader that shall be used by the process engine and task service instances
     * @return
     */
    ClassLoader getClassLoader();
    
    /**
     * Closes the environment allowing to close all depending components such as ksession factories, etc 
     */
    void close();
}
  

While the RuntimeEnvironment interface mostly provides access to the data kept as part of the environment that will be used by the RuntimeManager, users should take advantage of the builder style class that provides a fluent API to configure a RuntimeEnvironment with predefined settings.

public interface RuntimeEnvironmentBuilder {

	public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled);

	public RuntimeEnvironmentBuilder entityManagerFactory(Object emf);

	public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type);

	public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value);

	public RuntimeEnvironmentBuilder addConfiguration(String name, String value);

	public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase);

	public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback);

	public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory);

	public RuntimeEnvironment get();

	public RuntimeEnvironmentBuilder classLoader(ClassLoader cl);
	
	public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler); 
}
    
  

Instances of the RuntimeEnvironmentBuilder can be obtained via RuntimeEnvironmentBuilderFactory, which provides preconfigured builders to simplify building the environment for the RuntimeManager.

public interface RuntimeEnvironmentBuilderFactory {

	/**
     * Provides completely empty <code>RuntimeEnvironmentBuilder</code> instance that allows to manually
     * set all required components instead of relying on any defaults.
     * @return new instance of <code>RuntimeEnvironmentBuilder</code>
     */
    public RuntimeEnvironmentBuilder newEmptyBuilder();
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder();
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * but it does not have persistence for process engine configured so it will only store process instances in memory
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder();
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to work smoothly with kjars and the notion of kbases and ksessions
     * @param groupId group id of kjar
     * @param artifactId artifact id of kjar
     * @param version version number of kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version);
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to work smoothly with kjars and the notion of kbases and ksessions
     * @param groupId group id of kjar
     * @param artifactId artifact id of kjar
     * @param version version number of kjar
     * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
     * @param ksessionName name of the ksession defined in kmodule.xml stored in kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version, String kbaseName, String ksessionName);
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to work smoothly with kjars and the notion of kbases and ksessions
     * @param releaseId <code>ReleaseId</code> that describes the kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId);
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to work smoothly with kjars and the notion of kbases and ksessions
     * @param releaseId <code>ReleaseId</code> that describes the kjar
     * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
     * @param ksessionName name of the ksession defined in kmodule.xml stored in kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId, String kbaseName, String ksessionName);
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * It relies on KieClasspathContainer, which requires kmodule.xml to be present in the META-INF folder to
     * define the kjar itself.
     * Expects to use the default kbase and ksession from the kmodule.
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder();
    
    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * It relies on KieClasspathContainer, which requires kmodule.xml to be present in the META-INF folder to
     * define the kjar itself.
     * @param kbaseName name of the kbase defined in kmodule.xml
     * @param ksessionName name of the ksession defined in kmodule.xml
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     * 
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(String kbaseName, String ksessionName);
}
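As an illustration, here is a sketch of building a kjar based environment with persistence; the GAV coordinates, the emf variable (a pre-created javax.persistence.EntityManagerFactory) and the userGroupCallback instance are assumptions:

RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
        .newDefaultBuilder("org.jbpm", "HR", "1.0")   // hypothetical kjar coordinates
        .entityManagerFactory(emf)                    // assumed EntityManagerFactory
        .userGroupCallback(userGroupCallback)         // assumed UserGroupCallback implementation
        .get();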

Besides the KieSession, the Runtime Manager also provides access to a TaskService as an integrated component of the RuntimeEngine, which will always be configured and ready for communication between the process engine and the task service.

Since the default builder was used, it will already come with a predefined set of elements that consists of:

To extend it with your own handlers or listeners, a dedicated mechanism is provided that comes as an implementation of RegisterableItemsFactory:

public interface RegisterableItemsFactory {

	/**
	 * Returns new instances of <code>WorkItemHandler</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case handler need to make use of it internally
	 * @return map of handlers to be registered - in case of no handlers empty map shall be returned.
	 */
    Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime);
    
    /**
	 * Returns new instances of <code>ProcessEventListener</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
	 * @return list of listeners to be registered - in case of no listeners empty list shall be returned.
	 */
    List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime);
    
    /**
	 * Returns new instances of <code>AgendaEventListener</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
	 * @return list of listeners to be registered - in case of no listeners empty list shall be returned.
	 */
    List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime);
    
    /**
	 * Returns new instances of <code>WorkingMemoryEventListener</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
	 * @return list of listeners to be registered - in case of no listeners empty list shall be returned.
	 */
    List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime);
}

A best practice is to extend those that come out of the box and add your own. Extensions are not always needed, as the default implementations of RegisterableItemsFactory provide the possibility to define custom handlers and listeners. Following is a list of available implementations that might be useful (ordered in their hierarchy of inheritance):

Alternatively, simple (stateless or requiring only KieSession) work item handlers might be registered in the well known way - defined as part of the CustomWorkItem.conf file placed on the classpath. To use this approach do the following:

And that's it: now all these work item handlers will be registered for any KieSession created by that application, regardless of whether it uses RuntimeManager or not.

When using RuntimeManager in CDI environment there are dedicated interfaces that can be used to provide custom WorkItemHandlers and EventListeners to the RuntimeEngine.

public interface WorkItemHandlerProducer {

    /**
     * Returns map of (key = work item name, value work item handler instance) of work items 
     * to be registered on KieSession
     * <br/>
     * Parameters that might be given are as follows:
     * <ul>
     *  <li>ksession</li>
     *  <li>taskService</li>
     *  <li>runtimeManager</li>
     * </ul>
     * 
     * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
     * and provide valid instances for given owner
     * @param params - owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
     * @return map of work item handler instances (recommendation is to always return new instances when this method is invoked)
     */
    Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}
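A minimal sketch of such a producer (the work item name "Rest" and the MyRestWorkItemHandler class are hypothetical):

import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItemHandler;

public class CustomWorkItemHandlerProducer implements WorkItemHandlerProducer {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params) {
        Map<String, WorkItemHandler> handlers = new HashMap<String, WorkItemHandler>();
        // register a handler for the (hypothetical) "Rest" work item
        handlers.put("Rest", new MyRestWorkItemHandler());
        return handlers;
    }
}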

Event listener producers shall be annotated with the proper qualifier to indicate what type of listeners they provide, so pick one of the following to indicate the type: @Process for ProcessEventListener, @Agenda for AgendaEventListener, @WorkingMemory for WorkingMemoryEventListener.

public interface EventListenerProducer<T> {

    /**
     * Returns list of instances for given (T) type of listeners
     * <br/>
     * Parameters that might be given are as follows:
     * <ul>
     *  <li>ksession</li>
     *  <li>taskService</li>
     *  <li>runtimeManager</li>
     * </ul>
     * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
     * and provide valid instances for given owner
     * @param params - owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
     * @return list of listener instances (recommendation is to always return new instances when this method is invoked)
     */
    List<T> getEventListeners(String identifier, Map<String, Object>  params);
}
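A sketch of a producer for process event listeners (MyLoggingProcessEventListener is hypothetical), which also illustrates the identifier based filtering described below:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.kie.api.event.process.ProcessEventListener;

// annotate with the proper qualifier for the listener type, as described above
public class AuditListenerProducer implements EventListenerProducer<ProcessEventListener> {

    @Override
    public List<ProcessEventListener> getEventListeners(String identifier, Map<String, Object> params) {
        List<ProcessEventListener> listeners = new ArrayList<ProcessEventListener>();
        // attach only to runtime managers whose identifier matches (optional filtering)
        if (identifier != null && identifier.startsWith("org.jbpm")) {
            listeners.add(new MyLoggingProcessEventListener());
        }
        return listeners;
    }
}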

Implementations of these interfaces shall be packaged as a bean archive (including beans.xml inside META-INF) and placed on the application classpath (e.g. WEB-INF/lib for a web application). That is enough for the CDI based RuntimeManager to discover them and register them on every KieSession that is created or loaded from the data store.

Some parameters are provided to the producers to allow handlers/listeners to be more stateful and able to do more advanced things with the engine - like signaling the engine or a process instance in case of an error. For that purpose, the components listed in the javadocs above (ksession, taskService, runtimeManager) are provided in the params map.

In addition, filtering can be applied based on the identifier (given as argument to these methods) to decide whether a given RuntimeManager shall receive the handlers/listeners or not.

On top of the RuntimeManager API, a set of high level services has been provided starting with jBPM version 6.2. These services are meant to be the easiest way to embed (j)BPM capabilities into a custom application. A complete set of modules is delivered as part of these services. They are partitioned into several modules to ease their adoption in various environments.

Service modules are grouped with their framework dependencies, so developers are free to choose which one is suitable for them and use only that.

Upon deployment, every process definition is scanned using the definition service, which parses the process and extracts valuable information out of it. This information can provide valuable input to the system to inform users about what is expected. The definition service provides information about:

So the definition service can be seen as a sort of supporting service that provides quite a lot of information about a process definition, extracted directly from the BPMN2.


String processId = "org.jbpm.writedocument";

Collection<UserTaskDefinition> processTasks =
        bpmn2Service.getTasksDefinitions(deploymentUnit.getIdentifier(), processId);

Map<String, String> processData =
        bpmn2Service.getProcessVariables(deploymentUnit.getIdentifier(), processId);

Map<String, String> taskInputMappings =
        bpmn2Service.getTaskInputMappings(deploymentUnit.getIdentifier(), processId, "Write a Document");

While it usually is used in combination with other services (like the deployment service), it can be used standalone as well to get details about a process definition that does not come from a kjar. This can be achieved by using the buildProcessDefinition method of the definition service.


public interface DefinitionService {
	
    ProcessDefinition buildProcessDefinition(String deploymentId, String bpmn2Content,
			ClassLoader classLoader, boolean cache) throws IllegalArgumentException;

    ProcessDefinition getProcessDefinition(String deploymentId, String processId);
    
    Collection<String> getReusableSubProcesses(String deploymentId, String processId);
    
    Map<String, String> getProcessVariables(String deploymentId, String processId);
    
    Map<String, String> getServiceTasks(String deploymentId, String processId);
    
    Map<String, Collection<String>> getAssociatedEntities(String deploymentId, String processId);
    
    Collection<UserTaskDefinition> getTasksDefinitions(String deploymentId, String processId);
    
    Map<String, String> getTaskInputMappings(String deploymentId, String processId, String taskName);
    
    Map<String, String> getTaskOutputMappings(String deploymentId, String processId, String taskName);
	
}
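For example (a sketch; the file path and the "standalone" deployment id are illustrative, and bpmn2Service is assumed to be an available DefinitionService instance):

String bpmn2Content = new String(
        java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("processes/hiring.bpmn2")), "UTF-8");

// parse raw BPMN2 content directly - no kjar, default class loader, no caching
ProcessDefinition definition = bpmn2Service.buildProcessDefinition("standalone", bpmn2Content, null, false);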

The process service is usually the one of most interest: once the deployment and definition services have been used to feed the system with something that can be executed, the process service provides access to the execution environment that allows:

At the same time, the process service is a command executor, so it allows commands to be executed (essentially on the ksession) to extend its capabilities.

Important to note is that the process service is focused on runtime operations, so use it whenever there is a need to alter (signal, change variables, etc.) a process instance, and not for read operations like showing available process instances by looping through a given list and invoking getProcessInstance. For that, there is a dedicated runtime data service, described below.

An example of how to deploy and run a process:


KModuleDeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION);
         
deploymentService.deploy(deploymentUnit);
 
long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "customtask");
      
ProcessInstance pi = processService.getProcessInstance(processInstanceId);   

As you can see, startProcess expects a deploymentId as its first argument. This is extremely powerful, as it enables the service to easily work with various deployments, even with the same processes coming from different kjar versions.
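Process variables can be passed on start in the same way, and the instance can be altered later on; a sketch (the variable name "approver" and the signal name "approved" are hypothetical):

Map<String, Object> params = new HashMap<String, Object>();
params.put("approver", "john");

long piId = processService.startProcess(deploymentUnit.getIdentifier(), "customtask", params);

// later, alter the running instance, e.g. deliver a signal to it
processService.signalProcessInstance(piId, "approved", null);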


public interface ProcessService {
	
    Long startProcess(String deploymentId, String processId);

    Long startProcess(String deploymentId, String processId, Map<String, Object> params);

    void abortProcessInstance(Long processInstanceId);
    
    void abortProcessInstances(List<Long> processInstanceIds);

    void signalProcessInstance(Long processInstanceId, String signalName, Object event);
    
    void signalProcessInstances(List<Long> processInstanceIds, String signalName, Object event);
    
    ProcessInstance getProcessInstance(Long processInstanceId);

    void setProcessVariable(Long processInstanceId, String variableId, Object value);
    
    void setProcessVariables(Long processInstanceId, Map<String, Object> variables);
    
    Object getProcessInstanceVariable(Long processInstanceId, String variableName);

    Map<String, Object> getProcessInstanceVariables(Long processInstanceId);
    
    Collection<String> getAvailableSignals(Long processInstanceId);
    
    void completeWorkItem(Long id, Map<String, Object> results);

    void abortWorkItem(Long id);
    
    WorkItem getWorkItem(Long id);

    List<WorkItem> getWorkItemByProcessInstance(Long processInstanceId);
    
    public <T> T execute(String deploymentId, Command<T> command);
    
    public <T> T execute(String deploymentId, Context<?> context, Command<T> command);

}

The runtime data service, as the name suggests, deals with everything that refers to runtime information:

Use this service as the main source of information whenever building list based UIs - to show process definitions, process instances, tasks for a given user, etc. This service was designed to be as efficient as possible while still providing all required information.

Some examples:
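For instance (a sketch; runtimeDataService is assumed to be available, the user id "john" and the page size of 10 are illustrative, and QueryContext/QueryFilter are described right below):

// first page (offset 0) of 10 process instances
Collection<ProcessInstanceDesc> instances =
        runtimeDataService.getProcessInstances(new QueryContext(0, 10));

// tasks where 'john' is a potential owner, with the same paging applied
List<TaskSummary> tasks =
        runtimeDataService.getTasksAssignedAsPotentialOwner("john", new QueryFilter(0, 10));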

There are two important arguments that most of the runtime data service operations support: QueryContext and QueryFilter (the latter for task queries).

These provide capabilities for efficient management of result sets, like pagination, sorting and ordering (QueryContext). Moreover, additional filtering can be applied to the task queries to provide more advanced capabilities when searching for user tasks (QueryFilter).


public interface RuntimeDataService {
  
    // Process instance information
    
    Collection<ProcessInstanceDesc> getProcessInstances(QueryContext queryContext);
   
    Collection<ProcessInstanceDesc> getProcessInstances(List<Integer> states, String initiator, QueryContext queryContext);
   
    Collection<ProcessInstanceDesc> getProcessInstancesByProcessId(List<Integer> states, String processId, String initiator, QueryContext queryContext);
   
    Collection<ProcessInstanceDesc> getProcessInstancesByProcessName(List<Integer> states, String processName, String initiator, QueryContext queryContext);
    
    Collection<ProcessInstanceDesc> getProcessInstancesByDeploymentId(String deploymentId, List<Integer> states, QueryContext queryContext);
    
    ProcessInstanceDesc getProcessInstanceById(long processInstanceId);
    
    Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, QueryContext queryContext);
    
    Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, List<Integer> states, QueryContext queryContext);

    
    // Node and Variable instance information
   
    NodeInstanceDesc getNodeInstanceForWorkItem(Long workItemId);

    Collection<NodeInstanceDesc> getProcessInstanceHistoryActive(long processInstanceId, QueryContext queryContext);

    Collection<NodeInstanceDesc> getProcessInstanceHistoryCompleted(long processInstanceId, QueryContext queryContext);

    Collection<NodeInstanceDesc> getProcessInstanceFullHistory(long processInstanceId, QueryContext queryContext);
    
    Collection<NodeInstanceDesc> getProcessInstanceFullHistoryByType(long processInstanceId, EntryType type, QueryContext queryContext);

    Collection<VariableDesc> getVariablesCurrentState(long processInstanceId);

    Collection<VariableDesc> getVariableHistory(long processInstanceId, String variableId, QueryContext queryContext);

    
    // Process information
  
    Collection<ProcessDefinition> getProcessesByDeploymentId(String deploymentId, QueryContext queryContext);   
    
    Collection<ProcessDefinition> getProcessesByFilter(String filter, QueryContext queryContext);

    Collection<ProcessDefinition> getProcesses(QueryContext queryContext);
   
    Collection<String> getProcessIds(String deploymentId, QueryContext queryContext);
   
    ProcessDefinition getProcessById(String processId);
  
    ProcessDefinition getProcessesByDeploymentIdProcessId(String deploymentId, String processId);
    
	// user task query operations

    UserTaskInstanceDesc getTaskByWorkItemId(Long workItemId);

    UserTaskInstanceDesc getTaskById(Long taskId);

    List<TaskSummary> getTasksAssignedAsBusinessAdministrator(String userId, QueryFilter filter);
	
    List<TaskSummary> getTasksAssignedAsBusinessAdministratorByStatus(String userId, List<Status> statuses, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, QueryFilter filter);
	
    List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, QueryFilter filter);
	
    List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, List<Status> status, QueryFilter filter);
	
    List<TaskSummary> getTasksAssignedAsPotentialOwnerByExpirationDateOptional(String userId, List<Status> status, Date from, QueryFilter filter);
	
    List<TaskSummary> getTasksOwnedByExpirationDateOptional(String userId, List<Status> strStatuses, Date from, QueryFilter filter);

    List<TaskSummary> getTasksOwned(String userId, QueryFilter filter);

    List<TaskSummary> getTasksOwnedByStatus(String userId, List<Status> status, QueryFilter filter);

    List<Long> getTasksByProcessInstanceId(Long processInstanceId);

    List<TaskSummary> getTasksByStatusByProcessInstanceId(Long processInstanceId, List<Status> status, QueryFilter filter);
        
    List<AuditTask> getAllAuditTask(String userId, QueryFilter filter);
	    
}

The Deployment Service provides a convenient way to put business assets into an execution environment, but there are cases that require additional management to make them available in the right context.

Activation and Deactivation of deployments

Imagine a situation where a number of process instances of a given deployment are already running and a new version of these processes comes into the runtime environment. The administrator can then decide that new instances of the process definition should use the new version only, while already active instances should continue with the previous version.

To help with that, the deployment service has been equipped with activate and deactivate methods:
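A sketch of their use (the deployment id is illustrative):

// stop accepting new process instances on this deployment; active instances continue
deploymentService.deactivate("org.jbpm:HR:1.0");

// make the deployment accept new process instances again
deploymentService.activate("org.jbpm:HR:1.0");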

This feature allows a smooth transition between project versions without the need for process instance migration.

Deployment synchronization

Prior to jBPM 6.2, jbpm services did not have a deployment store by default. When embedded in jbpm-console/kie-wb they utilized the system.git VFS repository to preserve deployed units across server restarts. While that works fine, it comes with some drawbacks:

With version 6.2, jbpm services come with a deployment synchronizer that stores available deployments in a database, including their deployment descriptors. At the same time, it constantly monitors that table to keep it in sync with other installations that might be using the same data source. This is especially important when running in a cluster, or when the jbpm console runs next to a custom application and both should be able to operate on the same artifacts.

By default, synchronization must be configured (when running as core services), while it is automatically enabled for the ejb and cdi extensions. To configure synchronization, the following needs to be set up:


TransactionalCommandService commandService = new TransactionalCommandService(emf);

DeploymentStore store = new DeploymentStore();
store.setCommandService(commandService);

DeploymentSynchronizer sync = new DeploymentSynchronizer();
sync.setDeploymentService(deploymentService);
sync.setDeploymentStore(store);

DeploymentSyncInvoker invoker = new DeploymentSyncInvoker(sync, 2L, 3L, TimeUnit.SECONDS);
invoker.start();
....
invoker.stop();

With this, deployments will be synchronized every 3 seconds with an initial delay of 2 seconds.

Invoking the latest version of a project's processes

In case there is a need to always work with the latest version of a project's process, the services allow various operations to be invoked using a deployment id with the latest keyword. Let's go over an example to better understand the feature.

The initially deployed unit is org.jbpm:HR:1.0, which has the first version of a hiring process. After several weeks, a new version is developed and deployed to the execution server - org.jbpm:HR:2.0, with version 2 of the hiring process.

To allow callers of the services to interact without worrying whether they are working with the latest version, they can use the following deployment id:

org.jbpm:HR:LATEST

This will always find the latest available version of the project, which is identified by its group id and artifact id. The version comparison is based on Maven version numbers and relies on the Maven based algorithm to find the latest one.

Here is a complete example that deploys multiple versions and always interacts with the latest:


KModuleDeploymentUnit deploymentUnitV1 = new KModuleDeploymentUnit("org.jbpm", "HR", "1.0");
deploymentService.deploy(deploymentUnitV1);

long processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask");
ProcessInstanceDesc piDesc = runtimeDataService.getProcessInstanceById(processInstanceId); 

// we have started process with project's version 1
assertEquals(deploymentUnitV1.getIdentifier(), piDesc.getDeploymentId());

// next we deploy version 2
KModuleDeploymentUnit deploymentUnitV2 = new KModuleDeploymentUnit("org.jbpm", "HR", "2.0");
deploymentService.deploy(deploymentUnitV2);

processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask");
piDesc = runtimeDataService.getProcessInstanceById(processInstanceId); 

// this time we have started process with project's version 2
assertEquals(deploymentUnitV2.getIdentifier(), piDesc.getDeploymentId());

As illustrated, this provides a very powerful feature when interacting with a frequently changing environment, as it allows you to always be up to date with the process definitions in use.

There are several control parameters available to alter the engine's default behavior. This allows the execution to be fine tuned for the needs of the environment and the actual requirements. All of these parameters are set as JVM system properties, usually with -D when starting a program, e.g. the application server.

Table 5.1. Control parameters

Name | Possible values | Default value | Description
jbpm.ut.jndi.lookup | String | (none) | Alternative JNDI name to be used when there is no access to the default one (java:comp/UserTransaction)
jbpm.enable.multi.con | true|false | false | Enables support for multiple incoming/outgoing sequence flows on activities
jbpm.business.calendar.properties | String | /jbpm.business.calendar.properties | Allows to provide an alternative classpath location of the business calendar configuration file
jbpm.overdue.timer.delay | Long | 2000 | Specifies the delay for overdue timers to allow proper initialization, in milliseconds
jbpm.process.name.comparator | String | (none) | Allows to provide an alternative comparator class to empower the start-process-by-name feature; if not set, NumberVersionComparator is used
jbpm.loop.level.disabled | true|false | true | Allows to enable or disable loop iteration tracking, for advanced loop support when using XOR gateways
org.kie.mail.session | String | mail/jbpmMailSession | Allows to provide an alternative JNDI name for the mail session used by Task Deadlines
jbpm.usergroup.callback.properties | String | /jbpm.usergroup.callback.properties | Allows to provide an alternative classpath location for the user group callback implementation (LDAP, DB)
jbpm.user.group.mapping | String | ${jboss.server.config.dir}/roles.properties | Allows to provide an alternative location of roles.properties for JBossUserGroupCallbackImpl
jbpm.user.info.properties | String | /jbpm.user.info.properties | Allows to provide an alternative classpath location of the user info configuration (used by LDAPUserInfoImpl)
org.jbpm.ht.user.separator | String | , | Allows to provide an alternative separator of actors and groups for user tasks; default is comma (,)
org.quartz.properties | String | (none) | Allows to provide the location of the quartz config file to activate the quartz based timer service
jbpm.data.dir | String | ${jboss.server.data.dir} if available, otherwise ${java.io.tmpdir} | Allows to provide the location where data files produced by jbpm should be stored
org.kie.executor.pool.size | Integer | 1 | Allows to provide the thread pool size for the jbpm executor
org.kie.executor.retry.count | Integer | 3 | Allows to provide the number of retries attempted in case of error by the jbpm executor
org.kie.executor.interval | Integer | 3 | Allows to provide the frequency used to check for pending jobs by the jbpm executor, in seconds
org.kie.executor.disabled | true|false | true | Enables or disables the jbpm executor
org.kie.store.services.class | String | org.drools.persistence.jpa.KnowledgeStoreServiceImpl | Fully qualified name of the class implementing KieStoreServices that will be responsible for bootstrapping KieSession instances
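For example (illustrative values only), a parameter can be passed on the JVM command line, e.g. -Djbpm.overdue.timer.delay=5000, or set programmatically before the engine is initialized:

// must be set before any RuntimeManager/KieSession is created
System.setProperty("jbpm.overdue.timer.delay", "5000");
System.setProperty("jbpm.enable.multi.con", "true");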

The Business Process Model and Notation (BPMN) 2.0 specification is an OMG specification that not only defines a standard on how to graphically represent a business process (like BPMN 1.x), but now also includes execution semantics for the elements defined, and an XML format on how to store (and share) process definitions.

jBPM6 allows you to execute processes defined using the BPMN 2.0 XML format. That means that you can use all the different jBPM6 tooling to model, execute, manage and monitor your business processes using the BPMN 2.0 format for specifying your executable business processes. Actually, the full BPMN 2.0 specification also includes details on how to represent things like choreographies and collaboration. The jBPM project however focuses on that part of the specification that can be used to specify executable processes.

Executable processes in BPMN consist of different types of nodes connected to each other using sequence flows. The BPMN 2.0 specification defines three main types of nodes:

jBPM6 does not implement all elements and attributes as defined in the BPMN 2.0 specification. We do however support a significant subset, including the most common node types that can be used inside executable processes. This includes (almost) all elements and attributes as defined in the "Common Executable" subclass of the BPMN 2.0 specification, extended with some additional elements and attributes we believe are valuable in that context as well. The full set of elements and attributes that are supported can be found below, but it includes elements like:

For example, consider the following "Hello World" BPMN 2.0 process, which does nothing more than write out a "Hello World" statement when the process is started.

An executable version of this process expressed using BPMN 2.0 XML would look something like this:

<?xml version="1.0" encoding="UTF-8"?> 
<definitions id="Definition"
             targetNamespace="http://www.example.org/MinimalExample"
             typeLanguage="http://www.java.com/javaTypes"
             expressionLanguage="http://www.mvel.org/2.0"
             xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
             xs:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
             xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
             xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
             xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
             xmlns:tns="http://www.jboss.org/drools">

  <process processType="Private" isExecutable="true" id="com.sample.HelloWorld" name="Hello World" >

    <!-- nodes -->
    <startEvent id="_1" name="StartProcess" />
    <scriptTask id="_2" name="Hello" >
      <script>System.out.println("Hello World");</script>
    </scriptTask>
    <endEvent id="_3" name="EndProcess" >
        <terminateEventDefinition/>
    </endEvent>

    <!-- connections -->
    <sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
    <sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />

  </process>

  <bpmndi:BPMNDiagram>
    <bpmndi:BPMNPlane bpmnElement="com.sample.HelloWorld" >
      <bpmndi:BPMNShape bpmnElement="_1" >
        <dc:Bounds x="15" y="91" width="48" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape bpmnElement="_2" >
        <dc:Bounds x="95" y="88" width="83" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape bpmnElement="_3" >
        <dc:Bounds x="258" y="86" width="48" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge bpmnElement="_1-_2" >
        <di:waypoint x="39" y="115" />
        <di:waypoint x="75" y="46" />
        <di:waypoint x="136" y="112" />
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge bpmnElement="_2-_3" >
        <di:waypoint x="136" y="112" />
        <di:waypoint x="240" y="240" />
        <di:waypoint x="282" y="110" />
      </bpmndi:BPMNEdge>
    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>

</definitions>

To create your own process using BPMN 2.0 format, you can

The following code fragment shows you how to load a BPMN2 process into your knowledge base ...


private static KieBase createKnowledgeBase() throws Exception {
    KieHelper kieHelper = new KieHelper();
    KieBase kieBase = kieHelper
            .addResource(ResourceFactory.newClassPathResource("sample.bpmn2"))
            .build();

    return kieBase;
}

... and how to execute this process ...

KieBase kbase = createKnowledgeBase();
KieSession ksession = kbase.newKieSession();
ksession.startProcess("com.sample.HelloWorld");

For more detail, check out the chapter on the API and the basics.


A business process is a graph that describes the order in which a series of steps need to be executed, using a flow chart. A process consists of a collection of nodes that are linked to each other using connections. Each of the nodes represents one step in the overall process while the connections specify how to transition from one node to the other. A large selection of predefined node types have been defined. This chapter describes how to define such processes and use them in your application.

Processes can be created by using one of the following three methods:

The graphical BPMN2 editor is an editor that allows you to create a process by dragging and dropping different nodes on a canvas and editing the properties of these nodes. The graphical BPMN2 modeler is an Eclipse plugin hosted on eclipse.org, to which the jBPM project is one of the contributors. Once you have set up a jBPM project (see the installer for creating a working Eclipse environment where you can start), you can start adding processes. When in a project, launch the "New" wizard (use Ctrl+N) or right-click the directory you would like to put your process in and select "New", then "File". Give the file a name and the extension bpmn (e.g. MyProcess.bpmn). This will open up the process editor (you can safely ignore the warning that the file could not be read; this is just because the file is still empty).

First, ensure that you can see the Properties View at the bottom of the Eclipse window, as it will be necessary to fill in the different properties of the elements in your process. If you cannot see the properties view, open it using the menu "Window", then "Show View" and "Other...", and under the "General" folder select the Properties View.


The process editor consists of a palette, a canvas and an outline view. To add new elements to the canvas, select the element you would like to create in the palette and then add them to the canvas by clicking on the preferred location. For example, click on the "End Event" icon in the palette of the GUI. Clicking on an element in your process allows you to set the properties of that element. You can connect the nodes (as long as it is permitted by the different types of nodes) by using "Sequence Flow" from the palette.

You can keep adding nodes and connections to your process until it represents the business logic that you want to specify.

It is also possible to specify processes using the underlying BPMN 2.0 XML directly. The syntax of these XML processes is defined using the BPMN 2.0 XML Schema Definition. For example, the following XML fragment shows a simple process that contains a sequence of a Start Event, a Script Task that prints "Hello World" to the console, and an End Event.

<?xml version="1.0" encoding="UTF-8"?> 
<definitions id="Definition"
             targetNamespace="http://www.jboss.org/drools"
             typeLanguage="http://www.java.com/javaTypes"
             expressionLanguage="http://www.mvel.org/2.0"
             xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
             xmlns:g="http://www.jboss.org/drools/flow/gpd"
             xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
             xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
             xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
             xmlns:tns="http://www.jboss.org/drools">

  <process processType="Private" isExecutable="true" id="com.sample.hello" name="Hello Process" >

    <!-- nodes -->
    <startEvent id="_1" name="Start" />
    <scriptTask id="_2" name="Hello" >
      <script>System.out.println("Hello World");</script>
    </scriptTask>
    <endEvent id="_3" name="End" >
        <terminateEventDefinition/>
    </endEvent>

    <!-- connections -->
    <sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
    <sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />

  </process>

  <bpmndi:BPMNDiagram>
    <bpmndi:BPMNPlane bpmnElement="com.sample.hello" >
      <bpmndi:BPMNShape bpmnElement="_1" >
        <dc:Bounds x="16" y="16" width="48" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape bpmnElement="_2" >
        <dc:Bounds x="96" y="16" width="80" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape bpmnElement="_3" >
        <dc:Bounds x="208" y="16" width="48" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge bpmnElement="_1-_2" >
        <di:waypoint x="40" y="40" />
        <di:waypoint x="136" y="40" />
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge bpmnElement="_2-_3" >
        <di:waypoint x="136" y="40" />
        <di:waypoint x="232" y="40" />
      </bpmndi:BPMNEdge>
    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>

</definitions>

The process XML file consists of two parts: the top part (the "process" element) contains the definition of the different nodes and their properties, while the lower part (the "BPMNDiagram" element) contains all graphical information, like the location of the nodes. The process XML consists of exactly one <process> element. This element contains parameters related to the process (its type, name, id and package name), and consists of three subsections: a header section (where process-level information like variables, globals, imports and lanes can be defined), a nodes section that defines each of the nodes in the process, and a connections section that contains the connections between all the nodes in the process. In the nodes section, there is a specific element for each node, defining the various parameters and, possibly, sub-elements for that node type.



Represents a script that should be executed in this process. A Script Task should have one incoming connection and one outgoing connection. The associated action specifies what should be executed, the dialect used for coding the action (i.e., Java, JavaScript or MVEL), and the actual action code. This code can access any variables and globals. There is also a predefined variable kcontext that references the ProcessContext object (which can, for example, be used to access the current ProcessInstance or NodeInstance, and to get and set variables, or get access to the ksession using kcontext.getKieRuntime()); a small example of using kcontext follows the caveats listed below. When a Script Task is reached in the process, it will execute the action and then continue with the next node. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Action: The action script associated with this action node.

Note that you can write any valid Java code inside a script node. This basically allows you to do anything inside such a script node. There are some caveats however:

  • When trying to create a higher-level business process, that should also be understood by business users, it is probably wise to avoid low-level implementation details inside the process, including inside these script tasks. A Script Task could still be used to quickly manipulate variables etc. but other concepts like a Service Task could be used to model more complex behaviour in a higher-level manner.
  • Scripts should be immediate. They use the engine thread to execute the script, so scripts that could take some time to execute should probably be modeled as an asynchronous Service Task.
  • You should try to avoid contacting external services through a script node. Not only does this usually violate the first two caveats, it is also interacting with external services without the knowledge of the engine, which can be problematic, especially when using persistence and transactions. In general, it is probably wiser to model communication with an external service using a service task.
  • Scripts should not throw exceptions. Runtime exceptions should be caught and for example managed inside the script or transformed into signals or errors that can then be handled inside the process.
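To make the kcontext usage mentioned above concrete, here is a minimal sketch of a Java-dialect script body (the variable names "approved" and "reviewed" are hypothetical):

// kcontext is predefined in every script (org.kie.api.runtime.process.ProcessContext)
Object approved = kcontext.getVariable("approved");   // read a process variable
kcontext.setVariable("reviewed", Boolean.TRUE);       // write a process variable
System.out.println("running in process instance " + kcontext.getProcessInstance().getId());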


Represents an (abstract) unit of work that should be executed in this process. All work that is executed outside the process engine should be represented (in a declarative way) using a Service Task. Different types of services are predefined, e.g., sending an email, logging a message, etc. Users can define domain-specific services or work items, using a unique name and by defining the parameters (input) and results (output) that are associated with this type of work. Check the chapter on domain-specific processes for a detailed explanation and illustrative examples of how to define and use work items in your processes. When a Service Task is reached in the process, the associated work is executed. A Service Task should have one incoming connection and one outgoing connection; a short sketch of registering a custom handler follows the list of properties below.

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Parameter mapping: Allows copying the value of process variables to parameters of the work item. Upon creation of the work item, the values will be copied.

  • Result mapping: Allows copying the value of result parameters of the work item to a process variable. Each type of work can define result parameters that will (potentially) be returned after the work item has been completed. A result mapping can be used to copy the value of the given result parameter to the given variable in this process. For example, the "FileFinder" work item returns a list of files that match the given search criteria within the result parameter Files. This list of files can then be bound to a process variable for use within the process. Upon completion of the work item, the values will be copied.

  • On-entry and on-exit actions: Actions that are executed upon entry or exit of this node, respectively.

  • Additional parameters: Each type of work item can define additional parameters that are relevant for that type of work. For example, the "Email" work item defines additional parameters such as From, To, Subject and Body. The user can either provide values for these parameters directly, or define a parameter mapping that will copy the value of the given variable in this process to the given parameter; if both are specified, the mapping will have precedence. Parameters of type String can use #{expression} to embed a value in the string. The value will be retrieved when creating the work item, and the substitution expression will be replaced by the result of calling toString() on the variable. The expression could simply be the name of a variable (in which case it resolves to the value of the variable), but more advanced MVEL expressions are possible as well, e.g., #{person.name.firstname}.
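As referenced above, here is a sketch of how a custom handler for such a work item is registered directly on a session when a RuntimeManager is not used (the "Email" name must match the work item name; MyEmailWorkItemHandler is hypothetical):

// register the handler before starting processes that contain the "Email" service task
ksession.getWorkItemManager().registerWorkItemHandler("Email", new MyEmailWorkItemHandler());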


Processes can also involve tasks that need to be executed by human actors. A User Task represents an atomic task to be executed by a human actor. It should have one incoming connection and one outgoing connection. User Tasks can be used in combination with Swimlanes to assign multiple human tasks to similar actors. Refer to the chapter on human tasks for more details. A User Task is actually nothing more than a specific type of service node (of type "Human Task"). A User Task contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • TaskName: The name of the human task.

  • Priority: An integer indicating the priority of the human task.

  • Comment: A comment associated with the human task.

  • ActorId: The actor id that is responsible for executing the human task. A list of actor id's can be specified using a comma (',') as separator.

  • GroupId: The group id that is responsible for executing the human task. A list of group id's can be specified using a comma (',') as separator.

  • Skippable: Specifies whether the human task can be skipped, i.e., whether the actor may decide not to execute the task.

  • Content: The data associated with this task.

  • Swimlane: The swimlane this human task node is part of. Swimlanes make it easy to assign multiple human tasks to the same actor. See the human tasks chapter for more detail on how to use swimlanes.

  • On entry and on exit actions: Action scripts that are executed upon entry and exit of this node, respectively.

  • Parameter mapping: Allows copying the value of process variables to parameters of the human task. Upon creation of the human tasks, the values will be copied.

  • Result mapping: Allows copying the value of result parameters of the human task to a process variable. Upon completion of the human task, the values will be copied. A human task has a result variable "Result" that contains the data returned by the human actor. The variable "ActorId" contains the id of the actor that actually executed the task.

A user task should define the type of task that needs to be executed (using properties like TaskName, Comment, etc.) and who needs to perform it (using either actorId or groupId). Note that if there is data related to this specific process instance that the end user needs when performing the task, this data should be passed as the content of the task. The task for example does not have access to process variables. Check out the chapter on human tasks to get more detail on how to pass data between human tasks and the process instance.


Represents the invocation of another process from within this process. A sub-process node should have one incoming connection and one outgoing connection. When a Reusable Sub-Process node is reached in the process, the engine will start the process with the given id. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • ProcessId: The id of the process that should be executed.

  • Wait for completion (by default true): If this property is true, this sub-process node will only continue if the child process that was started has terminated its execution (completed or aborted); otherwise it will continue immediately after starting the subprocess (so it will not wait for its completion).

  • Independent (by default true): If this property is true, the child process is started as an independent process, which means that the child process will not be terminated if this parent process is completed (or this sub-process node is canceled for some other reason); otherwise the active sub-process will be canceled on termination of the parent process (or cancellation of the sub-process node). Note that you can set independent to "false" only when "Wait for completion" is set to true.

  • On-entry and on-exit actions: Actions that are executed upon entry or exit of this node, respectively.

  • Parameter in/out mapping: A sub-process node can also define in- and out-mappings for variables. The variables given in the "in" mapping will be used as parameters (with the associated parameter name) when starting the process. The variables of the child process that are defined for the "out" mappings will be copied to the variables of this process when the child process has been completed. Note that you can use "out" mappings only when "Wait for completion" is set to true.


A Multiple Instance sub-process is a special kind of sub-process that allows you to execute the contained process segment multiple times, once for each element in a collection. A multiple instance sub-process should have one incoming connection and one outgoing connection. It waits until the embedded process fragment is completed for each of the elements in the given collection before continuing. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • CollectionExpression: The name of a variable that represents the collection of elements that should be iterated over. The collection variable should be an array or of type java.util.Collection. If the collection expression evaluates to null or an empty collection, the multiple instances sub-process will be completed immediately and follow its outgoing connection.

  • VariableName: The name of the variable to contain the current element from the collection. This gives nodes within the composite node access to the selected element.

  • CollectionOutput: The name of a variable that represents the collection of elements that will gather all the output of the multi instance sub-process.

  • OutputVariableName: The name of the variable to contain the current output from the multi instance activity.

  • CompletionCondition: An MVEL expression that will be evaluated on each instance completion to check whether the given multi instance activity can already be completed. In case it evaluates to true, all other remaining instances within the multi instance activity will be canceled.


Represents a timer that can trigger one or multiple times after a given period of time. A Timer Event should have one incoming connection and one outgoing connection. The timer delay specifies how long the timer should wait before triggering the first time. When a Timer Event is reached in the process, it will start the associated timer. The timer is canceled if the timer node is canceled (e.g., by completing or aborting the enclosing process instance). Consult the section on Timers for more information. The Timer Event contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Timer delay: The delay that the node should wait before triggering the first time. The expression should be of the form [#d][#h][#m][#s][#[ms]]. This allows you to specify the number of days, hours, minutes, seconds and milliseconds (which is the default if you don't specify anything). For example, the expression "1h" will wait one hour before triggering the timer. The expression could also use #{expr} to dynamically derive the delay based on some process variable. Expr in this case could be a process variable, or a more complex expression based on a process variable (e.g. myVariable.getValue()). CRON-like expressions are supported as well.

  • Timer period: The period between two subsequent triggers. If the period is 0, the timer should only be triggered once. The expression should be of the form [#d][#h][#m][#s][#[ms]]. You can specify the number of days, hours, minutes, seconds and milliseconds (which is the default if you don't specify anything). For example, the expression "1h" will wait one hour before triggering the timer again. The expression could also use #{expr} to dynamically derive the period based on some process variable. Expr in this case could be a process variable, or a more complex expression based on a process variable (e.g. myVariable.getValue()).

Timer events can also be specified as boundary events on sub-processes and tasks. Note that this does not apply to automatic tasks, like the script task, which have no wait state: the timer would not have a chance to fire before the task completes.


A Signal Event can be used to respond to internal or external events during the execution of the process. A Signal Event should have one incoming connection and one outgoing connection. It specifies the type of event that is expected. Whenever that type of event is detected, the node connected to this event node will be triggered. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • EventType: The type of event that is expected.

  • VariableName: The name of the variable that will contain the data associated with this event (if any) when this event occurs.

A process instance can be signaled that a specific event occurred using

ksession.signalEvent(eventType, data, processInstanceId)

This will trigger all (active) signal event nodes in the given process instance that are waiting for that event type. Data related to the event can be passed using the data parameter. If the event node specifies a variable name, this data will be copied to that variable when the event occurs.
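
For example, a minimal sketch of signaling from application code (the event type "MySignal" and the payload are illustrative assumptions):

// assuming a process instance is waiting on a signal event of type "MySignal";
// if the event node defines a VariableName, the data is copied into that variable
ksession.signalEvent("MySignal", "some event data", processInstance.getId());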

It is also possible to use event nodes inside sub-processes. These event nodes will however only be active when the sub-process is active.

You can also generate a signal from inside a process instance. A script (in a script task or using on entry or on exit actions) can use


kcontext.getKieRuntime().signalEvent(eventType, data, kcontext.getProcessInstance().getId());
        

A throwing signal event could also be used to model the signaling of an event.


A Diverging Gateway allows you to create branches in your process. It should have one incoming connection and two or more outgoing connections. There are three types of gateway nodes currently supported:

  • AND or parallel means that the control flow will continue in all outgoing connections simultaneously.

  • XOR or exclusive means that exactly one of the outgoing connections will be chosen. The decision is made by evaluating the constraints that are linked to each of the outgoing connections. The constraint with the lowest priority number that evaluates to true is selected. Constraints can be specified using different dialects. Note that you should always make sure that at least one of the outgoing connections will evaluate to true at runtime (the engine will throw an exception at runtime if it cannot find at least one outgoing connection).

  • OR or inclusive means that all outgoing connections whose condition evaluates to true are selected. Conditions are similar to the exclusive gateway, except that no priorities are taken into account. Note that you should make sure that at least one of the outgoing connections will evaluate to true at runtime because the engine will throw an exception at runtime if it cannot determine an outgoing connection.

It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Type: The type of the split node, i.e., AND, XOR or OR (see above).

  • Constraints: The constraints linked to each of the outgoing connections (in case of an exclusive or inclusive gateway); an example is shown below.
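
For illustration, a minimal sketch of a code constraint in the Java dialect, attached to one outgoing connection of an exclusive gateway (the process variable person is an assumption):

// the constraint must evaluate to a boolean;
// this connection is selected when it returns true
return person.getAge() > 18;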

While the flow chart focuses on specifying the control flow of the process, it is usually also necessary to look at the process from a data perspective. Throughout the execution of a process, data can be retrieved, stored, passed on and used.

For storing runtime data, during the execution of the process, process variables can be used. A variable is defined by a name and a data type. This could be a basic data type, such as boolean, int, or String, or any kind of Object subclass (it must implement the Serializable interface). Variables can be defined inside a variable scope. The top-level scope is the variable scope of the process itself. Subscopes can be defined using a Sub-Process. Variables that are defined in a subscope are only accessible to nodes within that scope.

Whenever a variable is accessed, the process will search for the appropriate variable scope that defines the variable. Nesting of variable scopes is allowed. A node will always search for a variable in its parent container. If the variable cannot be found, it will look in that one's parent container, and so on, until the process instance itself is reached. If the variable cannot be found, a read access yields null, and a write access produces an error message, with the process continuing its execution.

Variables can be used in various ways throughout the process, for example in scripts, constraints, and data mappings.

Finally, processes (and rules) all have access to globals, i.e. globally defined variables and data in the Knowledge Session. Globals are directly accessible in actions just like variables. Globals need to be defined as part of the process before they can be used. You can for example define globals by clicking the globals button when specifying an action script in the Eclipse action property editor. You can also set the value of a global from the outside using ksession.setGlobal(name, value) or from inside process scripts using kcontext.getKieRuntime().setGlobal(name,value);.

Action scripts can be used in different ways, for example inside a script task or as on-entry or on-exit actions of a node.

Actions have access to globals and the variables that are defined for the process and the predefined variable kcontext. This variable is of type ProcessContext and can be used for several tasks:

  • Getting the current node instance (if applicable). The node instance could be queried for data, such as its name and type. You can also cancel the current node instance.

    NodeInstance node = kcontext.getNodeInstance();
    String name = node.getNodeName();
  • Getting the current process instance. A process instance can be queried for data (name, id, processId, etc.), aborted, or signaled with an internal event.

    ProcessInstance proc = kcontext.getProcessInstance();
    proc.signalEvent( type, eventObject );
  • Getting or setting the value of variables (see the sketch after this list).

  • Accessing the Knowledge Runtime allows you to do things like starting a process, signaling (external) events, inserting data, etc.
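
For example, a minimal sketch of a Java action script combining these capabilities (the variable names "requester" and "greeting" and the event type are illustrative assumptions):

// read a process variable, derive a new value and store it back
String requester = (String) kcontext.getVariable("requester");
kcontext.setVariable("greeting", "Hello " + requester);

// use the knowledge runtime, e.g. to signal an event to this process instance
kcontext.getKieRuntime().signalEvent("MyEvent", null, kcontext.getProcessInstance().getId());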

jBPM supports multiple dialects, like Java, JavaScript and MVEL. Java actions should be valid Java code, same for JavaScript. MVEL actions can use the business scripting language MVEL to express the action. MVEL accepts any valid Java code but additionally provides support for nested accesses of parameters (e.g., person.name instead of person.getName()), and many other scripting improvements. Thus, MVEL expressions are more convenient for the business user. For example, an action that prints out the name of the person in the "requester" variable of the process would look like this:

// Java dialect
System.out.println( person.getName() );

// JavaScript dialect
print(person.name + '\n');

//  MVEL dialect
System.out.println( person.name );
    

Constraints can be used in various locations in your processes, for example in a diverging gateway. jBPM supports two types of constraints: code constraints, which are boolean expressions evaluated directly in the selected dialect, and rule constraints, which are expressed as rule conditions.

Rule constraints do not have direct access to variables defined inside the process. It is, however, possible to refer to the current process instance inside a rule constraint, by adding the process instance to the Working Memory and matching on it in your rule constraint. Special logic has been added to make sure that a variable processInstance of type WorkflowProcessInstance will only match the current process instance, and not other process instances in the Working Memory. Note that you are responsible for inserting the process instance into the session yourself and, possibly, updating it, for example using Java code or an on-entry, on-exit, or explicit action in your process. The following example of a rule constraint will search for a person with the same name as the value stored in the variable "name" of the process:

processInstance : WorkflowProcessInstance()
Person( name == ( processInstance.getVariable("name") ) )
# add more constraints here ...
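
Since you are responsible for inserting the process instance into the session yourself, a minimal sketch of doing so from an action script (for example an on-entry action) could look like this:

// make the current process instance visible to rule constraints
kcontext.getKieRuntime().insert(kcontext.getProcessInstance());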

Timers wait for a predefined amount of time, before triggering, once or repeatedly. They can be used to trigger certain logic after a certain period, or to repeat some action at regular intervals.

Since version 6, timers can be configured with the ISO 8601 date format, which supports both one-shot and repeatable timers. Timers can be defined as a date and time representation, a time duration, or repeating intervals:

  • Date - 2013-12-24T20:00:00.000+02:00 - fires exactly at Christmas Eve at 8PM
  • Duration - PT1S - fires once after 1 second
  • Repeatable intervals - R/PT1S - fires every second, with no limit; alternatively, R5/PT1S will fire 5 times, at one-second intervals

The timer service is responsible for making sure that timers get triggered at the appropriate times. Timers can also be canceled, meaning that the timer will no longer be triggered.

Timers can be used in two ways inside a process: as a timer event added to the process flow, where the node waits until the timer fires, or as a boundary event attached to a sub-process or task (as described earlier).

While it is recommended to define processes using the graphical editor or the underlying XML (to shield yourself from internal APIs), it is also possible to define a process using the Process API directly. The most important process model elements are defined in the packages org.jbpm.workflow.core and org.jbpm.workflow.core.node. A "fluent API" is provided that allows you to easily construct processes in a readable manner using factories. At the end, you can validate the process that you were constructing manually.

This is a simple example of a basic process with a script task only:


RuleFlowProcessFactory factory =
    RuleFlowProcessFactory.createProcess("org.jbpm.HelloWorld");
factory
    // Header
    .name("HelloWorldProcess")
    .version("1.0")
    .packageName("org.jbpm")
    // Nodes
    .startNode(1).name("Start").done()
    .actionNode(2).name("Action")
        .action("java", "System.out.println(\"Hello World\");").done()
    .endNode(3).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 3);
RuleFlowProcess process = factory.validate().getProcess();

KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
Resource resource = ks.getResources().newByteArrayResource(
    XmlBPMNProcessDumper.INSTANCE.dump(process).getBytes());
resource.setSourcePath("helloworld.bpmn2");
kfs.write(resource);
ReleaseId releaseId = ks.newReleaseId("org.jbpm", "helloworld", "1.0");
kfs.generateAndWritePomXML(releaseId);
ks.newKieBuilder(kfs).buildAll();
ks.newKieContainer(releaseId).newKieSession().startProcess("org.jbpm.HelloWorld");

You can see that we start by calling the static createProcess() method from the RuleFlowProcessFactory class. This method creates a new process with the given id and returns the RuleFlowProcessFactory that can be used to create the process. A typical process consists of three parts. The header part comprises global elements like the name of the process, imports, variables, etc. The nodes section contains all the different nodes that are part of the process. The connections section finally links these nodes to each other to create a flow chart.

In this example, the header contains the name and the version of the process and the package name. After that, you can start adding nodes to the current process. If you have auto-completion you can see that you have different methods to create each of the supported node types at your disposal.

When you start adding nodes to the process, in this example by calling the startNode(), actionNode() and endNode() methods, you can see that these methods return a specific NodeFactory, which allows you to set the properties of that node. Once you have finished configuring that specific node, the done() method returns you to the current RuleFlowProcessFactory so you can add more nodes, if necessary.

When you are finished adding nodes, you must connect them by creating connections between them. This can be done by calling the connection() method, which links previously created nodes.

Finally, you can validate the generated process by calling the validate() method and retrieve the created RuleFlowProcess object.

Even though business processes aren't code (we even recommend that you keep them as high-level as possible and avoid adding implementation details), they have a life cycle like other development artefacts. And since business processes can be updated dynamically, testing them (so that you don't break any use cases when making a modification) is really important as well.

When unit testing your process, you test whether the process behaves as expected in specific use cases, for example testing the output based on the existing input. To simplify unit testing, jBPM includes a helper class called JbpmJUnitBaseTestCase (in the jbpm-test module) that you can use to greatly simplify your JUnit testing, by offering JUnit life cycle methods, knowledge base and session management methods, assertions and other helpers (detailed in the list further below).

For example, consider the following "hello world" process containing a start event, a script task and an end event. The following JUnit test will create a new session, start the process and then verify whether the process instance completed successfully and whether these three nodes have been executed.


public class ProcessPersistenceTest extends JbpmJUnitBaseTestCase {

    public ProcessPersistenceTest() {
        // setup data source, enable persistence
        super(true, true);
    }

    @Test
    public void testProcess() {
        // create runtime manager with single process - hello.bpmn
        createRuntimeManager("hello.bpmn");
 
        // obtain RuntimeEngine from the RuntimeManager to work with the process engine
        RuntimeEngine runtimeEngine = getRuntimeEngine();

        // get access to KieSession instance
        KieSession ksession = runtimeEngine.getKieSession();

        // start process
        ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");

        // check whether the process instance has completed successfully
        assertProcessInstanceCompleted(processInstance.getId(), ksession);

        // check what nodes have been triggered
        assertNodeTriggered(processInstance.getId(), "StartProcess", "Hello", "EndProcess");
    }
}

JbpmJUnitBaseTestCase acts as a base test case class to be used for jBPM-related tests. It provides four usage areas:

  • JUnit life cycle methods

    • setUp: executed @Before and configures data source and EntityManagerFactory, cleans up Singleton's session id

    • tearDown: executed @After and clears out history, closes EntityManagerFactory and data source, disposes RuntimeEngines and RuntimeManager

  • Knowledge Base and KnowledgeSession management methods

    • createRuntimeManager creates RuntimeManager for given set of assets and selected strategy

    • disposeRuntimeManager disposes RuntimeManager currently active in the scope of test

    • getRuntimeEngine creates new RuntimeEngine for given context

  • Assertions

    • assertProcessInstanceCompleted

    • assertProcessInstanceAborted

    • assertProcessInstanceActive

    • assertNodeActive

    • assertNodeTriggered

    • assertProcessVarExists

    • assertNodeExists

    • assertVersionEquals

    • assertProcessNameEquals

  • Helper methods

    • getDs - returns currently configured data source

    • getEmf - returns currently configured EntityManagerFactory

    • getTestWorkItemHandler - returns test work item handler that might be registered in addition to what is registered by default

    • clearHistory - clears history log

    • setupPoolingDataSource - sets up data source

JbpmJUnitBaseTestCase supports all three predefined RuntimeManager strategies as part of unit testing. It is enough to specify which strategy shall be used when creating the runtime manager as part of a single test:

public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {
    
    private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);

    public ProcessHumanTaskTest() {
        super(true, false);
    }
    
    @Test
    public void testProcessProcessInstanceStrategy() {
        RuntimeManager manager = createRuntimeManager(Strategy.PROCESS_INSTANCE, "manager", "humantask.bpmn");
        RuntimeEngine runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get());
        KieSession ksession = runtimeEngine.getKieSession();
        TaskService taskService = runtimeEngine.getTaskService();
        
        int ksessionID = ksession.getId();
        ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");

        assertProcessInstanceActive(processInstance.getId(), ksession);
        assertNodeTriggered(processInstance.getId(), "Start", "Task 1");
        
        manager.disposeRuntimeEngine(runtimeEngine);
        runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get(processInstance.getId()));
        
        ksession = runtimeEngine.getKieSession();
        taskService = runtimeEngine.getTaskService();
        
        assertEquals(ksessionID, ksession.getId());
        
        // let john execute Task 1
        List<TaskSummary> list = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
        TaskSummary task = list.get(0);
        logger.info("John is executing task {}", task.getName());
        taskService.start(task.getId(), "john");
        taskService.complete(task.getId(), "john", null);

        assertNodeTriggered(processInstance.getId(), "Task 2");
        
        // let mary execute Task 2
        list = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
        task = list.get(0);
        logger.info("Mary is executing task {}", task.getName());
        taskService.start(task.getId(), "mary");
        taskService.complete(task.getId(), "mary", null);

        assertNodeTriggered(processInstance.getId(), "End");
        assertProcessInstanceCompleted(processInstance.getId(), ksession);
    }
}

Above is a more complete example that uses the PerProcessInstance runtime manager strategy and the task service to deal with user tasks.

Real-life business processes typically include the invocation of external services (like for example a human task service, an email server or your own domain-specific services). One of the advantages of our domain-specific process approach is that you can specify yourself how to actually execute your own domain-specific nodes, by registering a handler. This handler can be different depending on your context, allowing you to use testing handlers for unit testing your process. When you are unit testing your business process, you can register test handlers that verify whether specific services are requested correctly, and provide test responses for those services. For example, imagine you have an email node or a human task as part of your process. When unit testing, you don't want to send out an actual email but rather test whether the email that is requested contains the correct information (for example the right "to" address, a personalized body, etc.).

A TestWorkItemHandler is provided by default that can be registered to collect all work items (a work item represents one unit of work, like for example sending one specific email or invoking one specific service and contains all the data related to that task) for a given type. This test handler can then be queried during unit testing to check whether specific work was actually requested during the execution of the process and that the data associated with the work was correct.

The following example describes how a process that sends out an email could be tested. This test case in particular will test whether an exception is raised when the email could not be sent (which is simulated by notifying the engine that the sending of the email could not be completed). The test case uses a test handler that simply registers when an email was requested (and allows you to test the data related to the email like from, to, etc.). Once the engine has been notified the email could not be sent (using abortWorkItem(..)), the unit test verifies that the process handles this case successfully by logging this and generating an error, which aborts the process instance in this case.


public void testProcess2() {

    // create runtime manager with single process - sample-process.bpmn
    createRuntimeManager("sample-process.bpmn");
    // obtain RuntimeEngine from the RuntimeManager to work with the process engine
    RuntimeEngine runtimeEngine = getRuntimeEngine();

    // get access to KieSession instance
    KieSession ksession = runtimeEngine.getKieSession();

    // register a test handler for "Email"
    TestWorkItemHandler testHandler = getTestWorkItemHandler();

    ksession.getWorkItemManager().registerWorkItemHandler("Email", testHandler);

    // start the process
    ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello2");

    assertProcessInstanceActive(processInstance.getId(), ksession);
    assertNodeTriggered(processInstance.getId(), "StartProcess", "Email");

    // check whether the email has been requested
    WorkItem workItem = testHandler.getWorkItem();
    assertNotNull(workItem);
    assertEquals("Email", workItem.getName());
    assertEquals("me@mail.com", workItem.getParameter("From"));
    assertEquals("you@mail.com", workItem.getParameter("To"));

    // notify the engine the email could not be sent
    ksession.getWorkItemManager().abortWorkItem(workItem.getId());
    assertProcessInstanceAborted(processInstance.getId(), ksession);
    assertNodeTriggered(processInstance.getId(), "Gateway", "Failed", "Error");

}

You can configure whether you want to execute the JUnit tests using persistence or not. By default, the JUnit tests will use persistence, meaning that the state of all process instances will be stored in an in-memory (H2) database (which is started by the JUnit test during setup), and a history log will be used to check assertions related to execution history. When persistence is not used, process instances will only live in memory and an in-memory logger is used for history assertions.

Persistence (and the setup of the data source) is controlled by the super constructor: the first argument controls whether a data source is set up, and the second whether persistence is used for the process engine, as the following example shows.

public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {
    
    private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);

    public ProcessHumanTaskTest() {
        // configure this test to not use persistence for the process engine but still use it for human tasks
        super(true, false);
    }
}

jBPM supports the use of human tasks inside processes using a special User Task node defined by the BPMN2 specification (as shown in the figure above). A User Task node represents an atomic task that needs to be executed by a human actor.

[Although jBPM has a special user task node for including human tasks inside a process, human tasks are considered the same as any other kind of external service that needs to be invoked and are therefore simply implemented as a domain-specific service. See the chapter on domain-specific processes to learn more about this.]

A User Task node contains the following core properties:

You can edit these variables in the properties view (see below) when selecting the User Task node.

A User Task node also contains the following extra properties:

Human tasks typically present some data related to the task that needs to be performed to the actor that is executing the task and usually also request the actor to provide some result data related to the execution of the task. Task forms are typically used to present this data to the actor and request results.

The data that will be used by the Task needs to be specified when we define the User Task in our Process. In order to do that we need to define which data will be copied from the process context to the task context. Notice that the data is copied, so it can be modified inside the Task context but it will not affect the process variables unless we decide to copy back the value from the task to the process context.

Most of the time, forms are used to display data to the end user, allowing them to generate or create new data that will be propagated to the process context to be used by future activities. In order to decide how the information flows from the process to a particular task and from the task back to the process, we need to define which pieces of information will be automatically copied by the process engine. The following sections show how to do these mappings by configuring the DataInputSet, DataOutputSet and the Assignments properties of a User Task.

Let's start defining the Task DataInputSet:

Both GroupId and Comment are automatically generated, so you don't need to worry about them. In this case the only user-defined Data Input is called in_name. This means that the task will receive information from the process context, and internally this variable will be called in_name. The type is also specified here.

The Data Outputs represent the data that will be generated by the task. In this case two variables of type String called out_name and out_mail, and two Integer variables called out_age and out_score, are defined. This means that inside the task context we will need to set the values of these variables.

Finally, all the connections with the process context need to be made in the Data Assignments. The main idea here is to define how Data Inputs and Data Outputs will be associated with process variables.

As shown in the previous screenshot, the assignments between the process variables (in this case name, age, mail and hr_score) and the Data Inputs and Outputs are done in the Data Assignments screen. Notice that the example uses a convention that makes it easy to recognize an internal Task variable (Data Input/Output): the "in_" and "out_" prefixes in the variable names. Using this convention you can quickly understand the Assignments screen. The first row maps the process variable called name to the data input called in_name. The second row maps the data output called out_mail to the process variable called mail, and so on.

At runtime, these mappings will automatically copy the variable contents from one context (process or task) to the other for us.

From the perspective of a process, when a user task node is encountered during the execution, a human task is created. The process will then only leave the user task node when the associated human task has been completed or aborted.

The human task itself usually has a complete life cycle as well. For details beyond what is described below, please check out the WS-HumanTask specification. The following diagram is from the WS-HumanTask specification and describes the human task life cycle.

A newly created task starts in the "Created" stage. Usually, it will then automatically become "Ready", after which the task will show up on the task list of all the actors that are allowed to execute the task. The task will stay "Ready" until one of these actors claims the task, indicating that he or she will be executing it.

When a user then eventually claims the task, the status will change to "Reserved". Note that a task that only has one potential (specific) actor will automatically be assigned to that actor upon creation of the task. When the user who has claimed the task starts executing it, the task status will change from "Reserved" to "InProgress".

Lastly, once the user has performed and completed the task, the task status will change to "Completed". In this step, the user can optionally specify the result data related to the task. If the task could not be completed, the user could also indicate this by using a fault response, possibly including fault data, in which case the status would change to "Failed".

While the life cycle explained above is the normal life cycle, the specification also describes a number of other life cycle methods, including delegating a task, releasing a claimed task, suspending and resuming a task, stopping a task in progress, and skipping a task.

Only users associated with a specific task are allowed to modify or retrieve information about the task. This allows users to create a jBPM workflow with multiple tasks and yet still be assured of both the confidentiality and integrity of the task status and information associated with a task.

Some task operations will end up throwing a org.jbpm.services.task.exception.PermissionDeniedException when used with information about an unauthorized user. For example, when a user is trying to directly modify the task (for example, by trying to claim or complete the task), the PermissionDeniedException will be thrown if that user does not have the correct role for that operation. Furthermore, a user will not be able to view or retrieve tasks that the user is not involved with, especially if this is via the jBPM Console or KIE Workbench applications.
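
For example, a minimal sketch of handling this exception in client code (the task id and user are illustrative assumptions):

try {
    taskService.start(taskId, "john");
    taskService.complete(taskId, "john", null);
} catch (PermissionDeniedException e) {
    // "john" lacks the required role for this operation;
    // report it to the user instead of failing the whole request
    logger.warn("User not authorized for task {}", taskId);
}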

User 'Administrator' and group 'Administrators' are automatically added to each Human Task.

The permissions matrix below summarizes the actions that specific user roles are allowed to perform. Possible operations are listed on the left side of the matrix, while user roles are listed across the top.

The cells of the permissions matrix contain one of three possible characters, each of which indicates the user role permissions for that operation:

Furthermore, the following words or abbreviations in the table header refer to the following roles:


User roles are assigned to users by the definition of the task in the jBPM (BPMN2) process definition.

Permissions Matrices.  The following matrix describes the authorizations for all operations which modify a task:


The matrix below describes the authorizations used when retrieving task information. In short, it says that all users which have any role with regards to the specific task, are allowed to see the task. This applies to all operations that are used to retrieve any type of information about the task.


As far as the jBPM engine is concerned, human tasks are similar to any other external service that needs to be invoked and are implemented as a domain-specific service. (For more on domain-specific services, see the chapter on domain-specific processes.) Because a human task is an example of such a domain-specific service, the process itself only contains a high-level, abstract description of the human task to be executed and a work item handler that is responsible for binding this (abstract) task to a specific implementation.

Users can plug in any human task service implementation, such as the one that's provided by jBPM, or they may register their own implementation. In the next paragraphs, we will describe the human task service implementation provided by jBPM.

The jBPM project provides a default implementation of a human task service based on the WS-HumanTask specification. If you do not need to integrate jBPM with another existing implementation of a human task service, you can use this service. The jBPM implementation manages the life cycle of the tasks (creation, claiming, completion, etc.) and stores the state of all the tasks, task lists, and other associated information. It also supports features like internationalization, calendar integration, different types of assignments, delegation, escalation and deadlines. The code for the implementation itself can be found in the jbpm-human-task module.

The jBPM task service implementation is based on the WS-HumanTask (WS-HT) specification. This specification defines (in detail) the model of the tasks, the life cycle, and many other features. It is very comprehensive, and the first version of the specification can be found online.

The human task service exposes a Java API for managing the life cycle of tasks. This allows clients to integrate (at a low level) with the human task service. Note that end users should probably not interact with this low-level API directly, but use one of the more user-friendly task clients (see below) instead. These clients offer a graphical user interface to request task lists, claim and complete tasks, and manage tasks in general. The task clients listed below use the Java API to internally interact with the human task service. Of course, the low-level API is also available so that developers can use it in their code to interact with the human task service directly.

A task service (interface org.kie.api.task.TaskService) offers the following methods (among others) for managing the life cycle of human tasks:


              ...
              
              void start( long taskId, String userId );

              void stop( long taskId, String userId );

              void release( long taskId, String userId );

              void suspend( long taskId, String userId );

              void resume( long taskId, String userId );

              void skip( long taskId, String userId );

              void delegate(long taskId, String userId, String targetUserId);

              void complete( long taskId, String userId, Map<String, Object> results );
              
              ...
              
       

If you take a look at the method signatures you will notice that almost all of these methods take the following arguments: taskId, the id of the task we are working with (usually taken from the currently selected task in the user interface's task list), and userId, the id of the user who is executing the action (usually the id of the user logged in to the application).

There is also an internal interface that you should check for more methods to interact with the Task Service; this interface remains internal until it is fully tested. Future versions of the external (public) interface may include some of the methods proposed in the InternalTaskService interface. If you want to make use of the methods provided by this interface, you need to manually cast to InternalTaskService. One method from this interface that can be useful is getTaskContent():


               Map<String, Object> getTaskContent( long taskId );
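
For example, a short sketch of using it, assuming taskId identifies an existing task (remember that InternalTaskService is internal API and may change between releases):

// cast the public TaskService to the internal interface
InternalTaskService internalTaskService = (InternalTaskService) taskService;
Map<String, Object> content = internalTaskService.getTaskContent(taskId);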
       

This method saves you from doing all the boilerplate of getting the ContentMarshallerContext to unmarshall the serialized version of the task content. If you only want to use the stable/public APIs, you can just copy what this method does:


              Task taskById = taskQueryService.getTaskInstanceById(taskId);
              Content contentById = taskContentService.getContentById(taskById.getTaskData().getDocumentContentId());
              ContentMarshallerContext context = getMarshallerContext(taskById);
              Object unmarshalledObject = ContentMarshallerHelper.unmarshall(contentById.getContent(), context.getEnvironment(), context.getClassloader());
              if (!(unmarshalledObject instanceof Map)) {
                  throw new IllegalStateException("The Task Content needs to be a Map in order to use this method and it was: " + unmarshalledObject.getClass());
              }
              Map<String, Object> content = (Map<String, Object>) unmarshalledObject;
              return content;
       

Because the content of the Task can be any Object, the previous method assumes that you are storing a Map of objects. If you are storing something other than a Map, you should perform the corresponding checks.

In order to get access to the Task Service API, it is recommended to let the Runtime Manager make sure that everything is set up correctly. Look at the Runtime Manager section for more information. From the API perspective you should be doing something like this:


              ...
              RuntimeEngine engine = runtimeManager.getRuntimeEngine(EmptyContext.get());
              KieSession kieSession = engine.getKieSession();
              // Start a process
              kieSession.startProcess("CustomersRelationship.customers", params);
              // Do Task Operations
              TaskService taskService = engine.getTaskService();
              List<TaskSummary> tasksAssignedAsPotentialOwner = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
              TaskSummary taskSummary = tasksAssignedAsPotentialOwner.get(0);
              
              // Claim Task
              taskService.claim(taskSummary.getId(), "mary");
              // Start Task
              taskService.start(taskSummary.getId(), "mary");

              ...
       

If you use this approach, there is no need to register the Task Service with the Process Engine. The Runtime Manager will do that for you automatically. If you don't use the Runtime Manager, you will be responsible for setting the LocalHTWorkItemHandler in the session in order to get the Task Service notifying the Process Engine when a task is completed, or the Process Engine notifying that a task has been created.
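
For reference, a minimal sketch of registering such a handler manually when not using the Runtime Manager; note that the no-argument construction shown here is an assumption, so consult the LocalHTWorkItemHandler class for the exact wiring to the task service:

// register the local human task handler for "Human Task" nodes
// (handler construction/wiring is an assumption - verify against your version)
ksession.getWorkItemManager().registerWorkItemHandler("Human Task", new LocalHTWorkItemHandler());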

In jBPM 6.x the Task Service runs locally, alongside the Process and Rule Engine, and for that reason multiple light clients can be created for different Process and Rule Engine instances. All the clients will share the same database (the backend storage for the tasks).

jBPM allows the persistent storage of certain information. This chapter describes these different types of persistence and how to configure them. An example of the information stored is the process runtime state. Storing the process runtime state is necessary in order to be able to continue execution of a process instance at any point, if something goes wrong. The process definitions themselves, and history information (logs of current and previous process states), can also be persisted.

Whenever a process is started, a process instance is created, which represents the execution of the process in that specific context. For example, when executing a process that specifies how to process a sales order, one process instance is created for each sales request. The process instance represents the current execution state in that specific context, and contains all the information related to that process instance. Note that it only contains the (minimal) runtime state that is needed to continue the execution of that process instance at some later time, but it does not include information about the history of that process instance if that information is no longer needed in the process instance.

The runtime state of an executing process can be made persistent, for example, in a database. This makes it possible to restore the state of execution of all running processes in case of unexpected failure, or to temporarily remove running instances from memory and restore them at some later time. jBPM allows you to plug in different persistence strategies. By default, if you do not configure the process engine otherwise, process instances are not made persistent.

If you configure the engine to use persistence, it will automatically store the runtime state in the database. You do not have to trigger persistence yourself; the engine will take care of this when persistence is enabled. Whenever you invoke the engine, it will make sure that any changes are stored at the end of that invocation, at so-called safe points. Whenever something goes wrong and you restore the engine from the database, you should not manually reload process instances and trigger them to resume execution: process instances will automatically resume execution when they are triggered, for example by a timer expiring, the completion of a task that was requested by that process instance, or a signal being sent to the process instance. The engine will automatically reload process instances on demand.

The runtime persistence data should in general be considered internal, meaning that you probably should not access these database tables directly and especially should not modify them directly (changing the runtime state of process instances without the engine knowing might have unexpected side-effects). In most cases where information about the current execution state of process instances is required, the use of a history log is recommended (see below). In some cases it might still be useful to query the internal database tables directly, but you should only do this if you know what you are doing.

jBPM uses a binary persistence mechanism, otherwise known as marshalling, which converts the state of the process instance into a binary dataset. When you use persistence with jBPM, this mechanism is used to save or retrieve the process instance state from the database. The same mechanism is also applied to the session state and any work item states.

When the process instance state is persisted, two things happen: the process instance information is transformed into a binary blob through marshalling, and that blob is stored in the database together with other metadata about the process instance.

Apart from the process instance state, the session itself can also store some state, such as the state of timer jobs, or the session data that any business rules would be evaluated over. This session state is stored separately as a binary blob, along with the id of the session and some metadata. You can always restore session state by reloading the session with the given id. The session id can be retrieved using ksession.getId().

Note that the process instance binary datasets are usually relatively small, as they only contain the minimal execution state of the process instance. For a simple process instance, this usually contains one or a few node instances, i.e., any node that is currently executing, and any existing variable values.

As a result of jBPM using marshalling, the data model is both simple and small:

Figure 8.1. jBPM data model


The sessioninfo entity contains the state of the (knowledge) session in which the jBPM process instance is running.


The processinstanceinfo entity contains the state of the jBPM process instance.


The eventtypes entity contains information about events that a process instance will undergo or has undergone.


The workiteminfo entity contains the state of a work item.


The CorrelationKeyInfo entity contains information about correlation keys assigned to a given process instance. This is a loose relationship: the table is considered optional, used only when correlation capabilities are required.


The CorrelationPropertyInfo entity contains information about the correlation properties for a given correlation key that is assigned to a given process instance.


The ContextMappingInfo entity contains information about contextual information mapped to a ksession. This is an internal part of RuntimeManager and can be considered optional when RuntimeManager is not used.


In many cases it will be useful (if not necessary) to store information about the execution of process instances, so that this information can be used afterwards. For example, sometimes we want to verify which actions have been executed for a particular process instance, or in general, we want to be able to monitor and analyze the efficiency of a particular process.

However, storing history information in the runtime database can result in the database rapidly increasing in size, not to mention the fact that monitoring and analysis queries might influence the performance of your runtime engine. This is why process execution history information can be stored separately.

This history log of execution information is created based on events that the process engine generates during execution. This is possible because the jBPM runtime engine provides a generic mechanism to listen to events. The necessary information can easily be extracted from these events and then persisted to a database. Filters can also be used to limit the scope of the logged information.

The jbpm-audit module contains an event listener that stores process-related information in a database using JPA. The data model itself contains three entities, one for process instance information, one for node instance information, and one for (process) variable instance information.


The ProcessInstanceLog table contains the basic log information about a process instance.


The NodeInstanceLog table contains more information about which nodes were actually executed inside each process instance. Whenever a node instance is entered from one of its incoming connections or is exited through one of its outgoing connections, that information is stored in this table.


The VariableInstanceLog table contains information about changes in variable instances. The default is to only generate log entries when (after) a variable changes. It's also possible to log entries before the variable (value) changes.


To log process history information in a database like this, you need to register the logger on your session like this:


EntityManagerFactory emf = ...;
StatefulKnowledgeSession ksession = ...;
AbstractAuditLogger auditLogger = AuditLoggerFactory.newJPAInstance(emf);
ksession.addProcessEventListener(auditLogger);

// invoke methods on your session here

      

To specify the database where the information should be stored, modify the persistence.xml file to include the audit log classes as well (ProcessInstanceLog, NodeInstanceLog and VariableInstanceLog), as shown below.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<persistence
  version="2.0"
  xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
  http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
  xmlns="http://java.sun.com/xml/ns/persistence"
  xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/jbpm-ds</jta-data-source>
    <mapping-file>META-INF/JBPMorm.xml</mapping-file>
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
    <class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>

    <class>org.jbpm.process.audit.ProcessInstanceLog</class>
    <class>org.jbpm.process.audit.NodeInstanceLog</class>
    <class>org.jbpm.process.audit.VariableInstanceLog</class>

    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.max_fetch_depth" value="3"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>
      <property name="hibernate.show_sql" value="true"/>
      <property name="hibernate.transaction.jta.platform"
      value="org.hibernate.service.jta.platform.internal.BitronixJtaPlatform"/>
    </properties>
  </persistence-unit>
</persistence>
    

All this information can easily be queried and used in a lot of different use cases, ranging from creating a history log for one specific process instance to analyzing the performance of all instances of a specific process.
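
For example, a minimal sketch of querying the default audit log through the JPA-based AuditLogService (the process id is an illustrative assumption):

// query the audit log for all instances of a given process definition
AuditLogService auditLogService = new JPAAuditLogService(emf);
List<ProcessInstanceLog> logs = auditLogService.findProcessInstances("com.sample.bpmn.hello");
for (ProcessInstanceLog log : logs) {
    System.out.println("instance " + log.getProcessInstanceId() + " started at " + log.getStart());
}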

This audit log should only be considered a default implementation. We don't know what information you need to store for analysis afterwards, and for performance reasons it is recommended to only store the relevant data. Depending on your use cases, you might define your own data model for storing the information you need, and use the process event listeners to extract that information.

Process and task variables are stored in audit tables by default, although they are stored in the simplest possible way - by creating a string representation of the variable - variable.toString(). In many cases this is enough, as even for custom classes used as variables, users can implement a custom toString() method that produces the expected "view" of the variable.

However, this might not cover all needs, especially when there is a need for efficient queries by variables (both task and process). Let's take as an example a Person object that has the following structure:

public class Person implements Serializable{

    private static final long serialVersionUID = -5172443495317321032L;
    private String name;
    private int age;   
    
    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }
    
    public void setName(String name) {
        this.name = name;
    }
    
    public int getAge() {
        return age;
    }
    
    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public String toString() {
        return "Person [name=" + name + ", age=" + age + "]";
    }        
}

While at first glance this seems to be sufficient, as the toString() method provides a human-readable format, it does not make the data easy to search. Searching through strings like "Person [name=john, age=34]" to find people with age 34 would make the database query very inefficient.

To solve this problem, variable audit has been based on VariableIndexers, which are responsible for extracting the relevant parts of the variable that will be stored in the audit log.


/**
 * Variable indexer that allows to transform variable instance into other representation (usually string)
 * to be able to use it for queries.
 *
 * @param <V> type of the object that will represent indexed variable
 */
public interface VariableIndexer<V> {

    /**
     * Tests if given variable shall be indexed by this indexer
     * 
     * NOTE: only one indexer can be used for given variable
     * 
     * @param variable variable to be indexed
     * @return true if variable should be indexed with this indexer
     */
    boolean accept(Object variable);
    
    /**
     * Performs index/transform operation of the variable. Result of this operation can be
     * either single value or list of values to support complex type separation.
     * For example when variable is of type Person that has name, address phone indexer could 
     * build three entries out of it to represent individual fields:
     * person = person.name
     * address = person.address.street
     * phone = person.phone
     * that will allow more advanced queries to be used to find relevant entries.
     * @param name name of the variable
     * @param variable actual variable value 
     * @return
     */
    List<V> index(String name, Object variable);
}

By default, an indexer that uses toString() will produce a single audit entry for a single variable, so it's a one-to-one relationship. But that's not the only option: indexers (as can be seen in the interface) return a list of objects that are the outcome of a single variable indexation. To make our person queries more efficient, we could build a custom indexer that takes a Person instance and indexes it into separate audit entries, one representing the name and the other representing the age.


public class PersonTaskVariablesIndexer implements TaskVariableIndexer {

    @Override
    public boolean accept(Object variable) {
        if (variable instanceof Person) {
            return true;
        }
        return false;
    }

    @Override
    public List<TaskVariable> index(String name, Object variable) {
        
        Person person = (Person) variable;
        List<TaskVariable> indexed = new ArrayList<TaskVariable>();
        
        TaskVariableImpl personNameVar = new TaskVariableImpl();
        personNameVar.setName("person.name");
        personNameVar.setValue(person.getName());
        
        indexed.add(personNameVar);
        
        TaskVariableImpl personAgeVar = new TaskVariableImpl();
        personAgeVar.setName("person.age");
        personAgeVar.setValue(person.getAge()+"");
        
        indexed.add(personAgeVar);
        
        return indexed;
    }

}

That indexer will then be used to index the Person class only, and the rest of the variables will be indexed with the default (toString()) indexer. Now, when we want to find process instances or tasks that have a person with age 34, we simply refer to the indexed entry by its name (person.age) and value (34).

There is then not even a need to use LIKE-based queries, so the database can optimize the query and make it efficient even with a big data set.
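
As an illustration only, a hypothetical JPA query over the task variable audit entity (the entity and field names below are assumptions based on the TaskVariableImpl class used above):

// hypothetical query: find task variable entries for people aged 34
List<TaskVariableImpl> hits = entityManager
    .createQuery("select tv from TaskVariableImpl tv where tv.name = :n and tv.value = :v", TaskVariableImpl.class)
    .setParameter("n", "person.age")
    .setParameter("v", "34")
    .getResultList();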

Building and registering custom indexers

Indexers are supported for both process and task variables, though via different interfaces, as they produce different types of objects representing the audit view of the variable. Two interfaces need to be implemented to build custom indexers: ProcessVariableIndexer for process variables and TaskVariableIndexer for task variables (the latter is shown in the example above).

The implementation is rather simple: just two methods need to be implemented, accept and index.

Once the implementation is done, it should be packaged as a JAR file, and a service descriptor file under META-INF/services, named after the indexer interface and listing the implementation class, needs to be included (see the sketch below).

Indexers are discovered via the ServiceLoader mechanism, which is why the META-INF/services descriptor files are needed. All found indexers will be examined whenever a process or task variable is about to be indexed. Only the default (toString()-based) indexer is not discovered; it is added explicitly as the last indexer, to allow custom ones to take precedence over it.
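
A sketch of such a descriptor for the task variable indexer shown earlier; both the descriptor file name (derived from the indexer interface's fully qualified name) and the implementation package are assumptions to verify against your jBPM version:

# file: META-INF/services/org.kie.internal.task.api.TaskVariableIndexer
# one fully qualified implementation class name per line
com.example.PersonTaskVariablesIndexer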

The jBPM engine supports JTA transactions. It also supports local transactions, but only when using Spring; it does not support pure local transactions at the moment. For more information about using Spring to set up persistence, please see the Spring chapter in the Drools integration guide.

Whenever you do not provide transaction boundaries inside your application, the engine will automatically execute each method invocation on the engine in a separate transaction. If this behavior is acceptable, you don't need to do anything else. You can, however, also specify the transaction boundaries yourself. This allows you, for example, to combine multiple commands into one transaction.

You need to register a transaction manager in the environment before using user-defined transactions. The following sample code uses the Bitronix transaction manager. Next, we use the Java Transaction API (JTA) to specify transaction boundaries, as shown below:


// create the entity manager factory
EntityManagerFactory emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa");
TransactionManager tm = TransactionManagerServices.getTransactionManager();

// setup the runtime environment
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultBuilder()
    .addAsset(ResourceFactory.newClassPathResource("MyProcessDefinition.bpmn2"), ResourceType.BPMN2)
    .addEnvironmentEntry(EnvironmentName.TRANSACTION_MANAGER, tm)
    .get();

// get the kie session
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newPerRequestRuntimeManager(environment);
RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = runtime.getKieSession();

// start the transaction
UserTransaction ut = InitialContext.doLookup("java:comp/UserTransaction");
ut.begin();

// perform multiple commands inside one transaction
ksession.insert( new Person( "John Doe" ) );
ksession.startProcess("MyProcess");

// commit the transaction
ut.commit();
    

Note that, if you use Bitronix as the transaction manager, you should also add a simple jndi.properties file to your root classpath to register the Bitronix transaction manager in JNDI. If you are using the jbpm-test module, this is already included by default. If not, create a file named jndi.properties with the following content:


java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory
    

If you would like to use a different JTA transaction manager, you can change the persistence.xml file to use your own transaction manager. For example, when running inside JBoss Application Server v5.x or v7.x, you can use the JBoss transaction manager. You need to change the transaction manager property in persistence.xml to:


<property name="hibernate.transaction.jta.platform" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
    

Special consideration needs to be taken when embedding jBPM inside an application that executes in Container Managed Transaction (CMT) mode, for instance EJB beans. This especially applies to application servers that do not allow accessing the UserTransaction instance from JNDI when being part of a container managed transaction, e.g. WebSphere Application Server. Since the default implementation of the transaction manager in jBPM is based on UserTransaction (to get the transaction status, which is used to decide whether a transaction should be started or not), it won't do its job in environments that prevent access to the UserTransaction. To secure proper execution in CMT environments, a dedicated transaction manager implementation is provided:

org.jbpm.persistence.jta.ContainerManagedTransactionManager

This transaction manager expects that a transaction is active and thus always returns ACTIVE when the getStatus method is invoked. Operations like begin, commit and rollback are no-ops, as the transaction manager runs under a managed transaction and cannot affect it.

To configure this transaction manager, two things must be done: register the transaction manager (together with the persistence context managers) in the environment before creating or loading the session, and configure the JPA provider for CMT, as sketched below. With this configuration jBPM should run properly in a CMT environment.
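
The following is a minimal sketch of that configuration, assuming an EntityManagerFactory (emf) is already available. The environment entries and persistence context manager class names follow the jBPM persistence module; verify them against your jBPM version:


// register the CMT-aware transaction manager and persistence context managers
Environment env = EnvironmentFactory.newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
env.set(EnvironmentName.TRANSACTION_MANAGER, new ContainerManagedTransactionManager());
env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, new JpaProcessPersistenceContextManager(env));
env.set(EnvironmentName.TASK_PERSISTENCE_CONTEXT_MANAGER, new JPATaskPersistenceContextManager(env));

In addition, when Hibernate is the JPA provider, persistence.xml should be configured for CMT, for example:


<property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.CMTTransactionFactory"/>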

By default, the engine does not save runtime data persistently. This means you can use the engine without persistence (not even requiring an in-memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. It is, however, possible to configure the engine to use persistence. This usually requires adding the necessary dependencies, configuring a datasource and creating the engine with persistence configured.

You need to make sure the necessary dependencies are available in the classpath of your application if you want to use persistence. By default, persistence is based on the Java Persistence API (JPA) and can thus work with several persistence mechanisms. Hibernate is used by default.

If you're using the Eclipse IDE and the jBPM Eclipse plugin, you should make sure the necessary JARs are added to your jBPM runtime directory. If you are using the jBPM runtime that is configured by default by the jBPM installer, or if you downloaded and unzipped the jBPM runtime artifact (from the downloads) and pointed the jBPM plugin to that directory, you don't need to do anything, as the necessary dependencies should already be there.

If you would like to manually add the necessary dependencies to your project, first of all you need the JAR file jbpm-persistence-jpa.jar, as it contains the code for saving the runtime state whenever necessary. Next, you also need various other dependencies, depending on the persistence solution and database you are using. For the default combination with Hibernate as the JPA persistence provider, an H2 in-memory database and Bitronix for JTA-based transaction management, additional dependencies are needed (an indicative Maven sketch is shown below):
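
A minimal sketch of those dependencies as Maven coordinates, assuming Maven-based dependency management (the artifact list is abbreviated; versions must match your jBPM release):


<!-- indicative only: align versions with your jBPM release -->
<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-persistence-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-entitymanager</artifactId>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
</dependency>
<dependency>
  <groupId>org.codehaus.btm</groupId>
  <artifactId>btm</artifactId>
</dependency>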

You can use the JPAKnowledgeService to create your knowledge session. This is slightly more complex, but gives you full access to the underlying configuration. You can create a new knowledge session using JPAKnowledgeService based on a knowledge base, a knowledge session configuration (if necessary) and an environment. The environment needs to contain a reference to your EntityManagerFactory. For example:


// create the entity manager factory and register it in the environment
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa" );
Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf );

// create a new knowledge session that uses JPA to store the runtime state
StatefulKnowledgeSession ksession = JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );
int sessionId = ksession.getId();

// invoke methods on your session here
ksession.startProcess( "MyProcess" );
ksession.dispose();

You can also use the JPAKnowledgeService to recreate a session based on a specific session id:


// recreate the session from database using the sessionId
ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env );

Note that we only save the minimal state that is needed to continue execution of the process instance at some later point. This means, for example, that it does not contain information about already executed nodes if that information is no longer relevant, and that process instances that have been completed or aborted are removed from the database. If you want to search for history-related information, you should use the history log, as explained later.

To configure JPA to use Hibernate and the H2 database (or your own preference), you need to add a persistence configuration to your classpath, called persistence.xml, in the META-INF directory, as shown below. For details on how to adapt this to your own configuration, we refer to the JPA and Hibernate documentation.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence
      version="2.0"
      xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
      http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
      xmlns="http://java.sun.com/xml/ns/persistence"
      xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/jbpm-ds</jta-data-source>
    <mapping-file>META-INF/JBPMorm.xml</mapping-file>
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
    <class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>

    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.max_fetch_depth" value="3"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>
      <property name="hibernate.show_sql" value="true"/>
      <property name="hibernate.transaction.jta.platform"
                value="org.hibernate.service.jta.platform.internal.BitronixJtaPlatform"/>
    </properties>
  </persistence-unit>
</persistence>
    

This configuration file refers to a data source called "jdbc/jbpm-ds". If you run your application in an application server (like for example JBoss AS), such containers typically allow you to easily set up data sources using some configuration (like for example dropping a datasource configuration file in the deploy directory). Please refer to your application server documentation for how to do this.

For example, if you're deploying to JBoss Application Server v5.x, you can create a datasource by dropping a configuration file into the deploy directory:

<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <local-tx-datasource>
    <jndi-name>jdbc/jbpm-ds</jndi-name>
    <connection-url>jdbc:h2:tcp://localhost/~/test</connection-url>
    <driver-class>org.h2.Driver</driver-class>
    <user-name>sa</user-name>
    <password></password>
  </local-tx-datasource>
</datasources>

If you are executing in a simple Java environment instead, you can use the JBPMHelper class to do this for you (see below; for tests only), or you can use a code fragment like the following to set up a data source (here using the H2 in-memory database in combination with Bitronix):


PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName("jdbc/jbpm-ds");
ds.setClassName("bitronix.tm.resource.jdbc.lrc.LrcXADataSource");
ds.setMaxPoolSize(3);
ds.setAllowLocalTransactions(true);
ds.getDriverProperties().put("user", "sa");
ds.getDriverProperties().put("password", "sasa");
ds.getDriverProperties().put("URL", "jdbc:h2:mem:jbpm-db");
ds.getDriverProperties().put("driverClassName", "org.h2.Driver");
ds.init();

You need to configure the jBPM engine to use persistence, usually simply by using the appropriate constructor when creating your session. There are various ways to create a session; several utility classes are provided to make this as easy as possible, depending for example on whether you are trying to write a process JUnit test.

The easiest way to do this is to use the jbpm-test module, which allows you to easily create and test your processes. The JBPMHelper class has a method to create a session, and uses a configuration file to configure this session, for example whether you want to use persistence, which datasource to use, etc. The helper class will then do all the setup and configuration for you.

To configure persistence, create a jBPM.properties file and configure the following properties. Note that the example below shows the default properties, using an H2 in-memory database with persistence enabled; if you are fine with all of these defaults, you don't need to add a new properties file, as these values will be used by default:


# for creating a datasource
persistence.datasource.name=jdbc/jbpm-ds
persistence.datasource.user=sa
persistence.datasource.password=
persistence.datasource.url=jdbc:h2:tcp://localhost/~/jbpm-db
persistence.datasource.driverClassName=org.h2.Driver

# for configuring persistence of the session
persistence.enabled=true
persistence.persistenceunit.name=org.jbpm.persistence.jpa
persistence.persistenceunit.dialect=org.hibernate.dialect.H2Dialect

# for configuring the human task service
taskservice.enabled=true
taskservice.datasource.name=org.jbpm.task
taskservice.usergroupcallback=org.jbpm.services.task.identity.JBossUserGroupCallbackImpl
taskservice.usergroupmapping=classpath:/usergroups.properties
      

If you want to use persistence, you must make sure that the datasource (that you specified in the jBPM.properties file) is initialized correctly. This means that the database itself must be up and running, and the datasource should be registered using the correct name. If you would like to use an H2 in-memory database (usually the easiest way to do some testing), you can use the JBPMHelper class to start up this database, using:


JBPMHelper.startH2Server();
      

To register the datasource (something you always need to do, even if you're not using H2 as your database; check below for more options on how to configure your datasource), use:


JBPMHelper.setupDataSource();
      

Next, you can use the JBPMHelper class to create your session (after creating your knowledge base, which is identical to the case when you are not using persistence):


StatefulKnowledgeSession ksession = JBPMHelper.newStatefulKnowledgeSession(kbase);
      

Once you have done that, you can just call methods on this ksession (like startProcess) and the engine will persist all runtime state in the created datasource.
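
Putting these pieces together, a minimal end-to-end sketch (assuming kbase contains a process with id "MyProcess") looks like this:


// start H2, register the datasource, create a persistent session and start a process
JBPMHelper.startH2Server();
JBPMHelper.setupDataSource();
StatefulKnowledgeSession ksession = JBPMHelper.newStatefulKnowledgeSession(kbase);
ksession.startProcess("MyProcess");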

You can also use the JBPMHelper class to recreate your session (restoring its state from the database) by passing in the session id, which you can retrieve using ksession.getId():


StatefulKnowledgeSession ksession = JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId);
      

How to use the web-based Workbench

Table of Contents

9. Workbench
9.1. Installation
9.1.1. War installation
9.1.2. Workbench data
9.1.3. System properties
9.1.4. Trouble shooting
9.2. Quick Start
9.2.1. Add repository
9.2.2. Add project
9.2.3. Define Data Model
9.2.4. Define Rule
9.2.5. Build and Deploy
9.3. Administration
9.3.1. Administration overview
9.3.2. Organizational unit
9.3.3. Repositories
9.4. Configuration
9.4.1. User management
9.4.2. Roles
9.4.3. Restricting access to repositories
9.4.4. Command line config tool
9.5. Introduction
9.5.1. Log in and log out
9.5.2. Home screen
9.5.3. Workbench concepts
9.5.4. Initial layout
9.6. Changing the layout
9.6.1. Resizing
9.6.2. Repositioning
9.7. Authoring
9.7.1. Artifact Repository
9.7.2. Asset Editor
9.7.3. Tags Editor
9.7.4. Project Explorer
9.7.5. Project Editor
9.7.6. Validation
9.7.7. Data Modeller
9.7.8. Data Sets
9.8. Embedding Workbench In Your Application
9.9. Asset Management
9.9.1. Asset Management Overview
9.9.2. Managed vs Unmanaged Repositories
9.9.3. Asset Management Processes
9.9.4. Usage Flow
9.9.5. Repository Structure
9.9.6. Managed Repositories Operations
9.9.7. Remote APIs
10. Workbench Integration
10.1. REST
10.1.1. Job calls
10.1.2. Repository calls
10.1.3. Organizational unit calls
10.1.4. Maven calls
10.1.5. REST summary
11. Workbench High Availability
11.1.
11.1.1. VFS clustering
11.1.2. jBPM clustering
12. Designer
12.1. Designer UI Explained
12.2. Getting started with Modelling
12.3. Designer Toolbar
13. Forms
13.1. Configure process and human tasks
13.2. Generate forms from task definitions
13.3. Edit forms
13.3.1. Form generated description
13.3.2. Customizing form
13.3.3. Field types
13.4. Document attachments
13.4.1. Process and forms configuration
13.4.2. Marshalling strategy and deployment configuration
13.5. Using forms on client applications
13.5.1. What does the API provides?
13.5.2. Sample usage
14. Runtime Management
14.1. Deployments
14.1.1. Deployment descriptors
14.2. Process Deployments
14.3. Jobs
15. Process and Task Management
15.1. Process Management
15.1.1. Process Definitions
15.1.2. Process Instances
15.2. Tasks
15.2.1. Task List
15.2.2. New Task (Ad-Hoc Task)
16. Business Activity Monitoring
16.1. Overview
16.2. Business Dashboards
16.3. Process Dashboard
16.3.1. Task Dashboard
17. Remote API
17.1. Remote Java API
17.1.1. Remote REST Java API Client Configuration
17.1.2. Remote JMS Java API Client Configuration
17.1.3. Remote CommandWebService Java API Client Configuration
17.1.4. Supported methods
17.2. REST
17.2.1. REST permissions
17.2.2. Runtime calls
17.2.3. Task calls
17.2.4. Deployment Calls
17.2.5. Deployment call details
17.2.6. Execute calls
17.2.7. REST summary
17.3. REST Query API
17.3.1. Query URL layout
17.3.2. Query Parameters
17.3.3. Parameter Table
17.3.4. Parameter examples
17.3.5. Query Output Format
17.4. JMS
17.4.1. JMS Queue setup
17.4.2. Using the remote Java API
17.4.3. Example JMS usage
17.5. Additional Information
17.5.1. REST Serialization: JAXB or JSON
17.5.2. Sending and receiving user class instances
17.5.3. Including the deployment id
17.5.4. REST Pagination
17.5.5. REST Map query parameters
17.5.6. REST Number query parameters
17.5.7. Runtime strategies

Here's a list of all system properties:

To change one of these system properties in a WildFly or JBoss EAP cluster:

These steps help you get started with a minimum of effort.

They should not be a substitute for reading the documentation in full.

Provides capabilities to manage the system repository from the command line. The system repository contains the data about general workbench settings: how editors behave, organizational groups, security and other settings that are not editable by the user. The system repository exists in the .niogit folder, next to all the repositories that have been created or cloned into the workbench.

Projects often need external artifacts in their classpath in order to build, for example domain model JARs. The Artifact Repository holds those artifacts.

The Artifact Repository is a full-blown Maven repository. It follows the semantics of a Maven remote repository: all snapshots are timestamped. It is, however, often stored on the local hard drive.

By default the artifact repository is stored under $WORKING_DIRECTORY/repositories/kie, but it can be overridden with the system property -Dorg.guvnor.m2repo.dir. There is only 1 Maven repository per installation.
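
For example, when starting WildFly (path and directory value are illustrative):


$ ./standalone.sh -Dorg.guvnor.m2repo.dir=/opt/kie/m2repo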

The Artifact Repository screen shows a list of the artifacts in the Maven repository:

To add a new artifact to that Maven repository, either:

  • Use the upload button and select a JAR. If the JAR contains a POM file under META-INF/maven (which every JAR built by Maven has), no further information is needed. Otherwise, a groupId, artifactId and version need to be given too.

  • Use Maven to mvn deploy to that Maven repository, as shown in the example below. Refresh the list to make the artifact show up.
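
For example, using Maven's deploy plugin directly (the URL assumes a local workbench exposing its Maven repository under /business-central/maven2/, and all coordinates are illustrative):


$ mvn deploy:deploy-file -Dfile=model-1.0.jar \
      -DgroupId=com.mycompany -DartifactId=model -Dversion=1.0 -Dpackaging=jar \
      -DrepositoryId=workbench -Durl=http://localhost:8080/business-central/maven2/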

Note

This remote Maven repository is relatively simple. It does not support proxying, mirroring, ... like Nexus or Archiva.

The Asset Editor is the principal component of the workbench user interface. It consists of two main views: Editor and Overview.

The Project Explorer provides the ability to browse different Organizational Units, Repositories, Projects and their files.

The Project Editor screen can be accessed from the Project Explorer. The Project Editor shows the settings for the currently active project.

Unlike most of the workbench editors, the Project Editor edits more than one file, showing everything that is needed for configuring the KIE project in one place.


By default, a data model is always constrained to the context of a project. For the purpose of this tutorial, we will assume that a correctly configured project already exists and the authoring perspective is open.

To start the creation of a data model inside a project, take the following steps:

This will start up the Data Modeller tool, which has the following general layout:


The "Editor" tab is divided into the following sections:

The "Source" tab shows an editor that allows the visualization and modification of the generated java code.

The "Overview" tab shows the standard metadata and version information as the other workbench editors.

Once the data object has been created, it has to be completed by adding user-defined properties to its definition. This can be achieved by pressing the "add field" button. The "New Field" dialog will open and the new field can be created by pressing the "Create" button. The "Create and continue" button will also add the new field to the Data Object, but won't close the dialog; in this way multiple fields can be created without the popup opening multiple times. The following fields can (or must) be filled out:

Once the initial information for the new field has been entered, clicking the 'Create' button will add the newly created field to the end of the data object's fields table below:


The new field will also automatically be selected in the data object's field list, and its properties will be shown in the Field general properties editor. Additionally, the field properties will be loaded in the different tool windows, so the field is ready for editing in whichever tool window is selected.

At any time, any field (without restrictions) can be deleted from a data object definition by clicking on the corresponding 'x' icon in the data object's fields table.

As stated before, both Data Objects and Fields require some of their initial properties to be set upon creation. Additionally, there are three domains of properties that can be configured for a given Data Object. A domain is basically a set of properties related to a given business area. The currently available domains are "Drools & jBPM", "Persistence" and "Advanced". To work on a given domain the user should select the corresponding "Tool window" (see below) on the right side toolbar. Every tool window usually provides two editors, the "Data Object" level editor and the "Field" level editor, which are shown depending on the last selected item, the Data Object or the Field.

The Drools & jBPM domain editors manage the set of Data Object or Field properties related to Drools applications.

The Drools & jBPM object editor manages the object-level Drools properties:


  • TypeSafe: this property enables/disables type-safe behaviour for the current type. By default all type declarations are compiled with type safety enabled. (See Drools for more information on this matter.)

  • ClassReactive: this property marks this type to be treated as "Class Reactive" by the Drools engine. (See Drools for more information on this matter.)

  • PropertyReactive: this property marks this type to be treated as "Property Reactive" by the Drools engine. (See Drools for more information on this matter.)

  • Role: this property configures how the Drools engine should handle instances of this type: either as regular facts or as events. By default all types are handled as regular facts, so for the time being the only value that can be set is "Event", to declare that this type should be handled as an event. (See Drools Fusion for more information on this matter.)

  • Timestamp: this property configures the "timestamp" for an event by selecting one of its attributes. If set, the engine will use the timestamp from the given attribute instead of reading it from the Session Clock. If not, the engine will automatically assign a timestamp to the event. (See Drools Fusion for more information on this matter.)

  • Duration: this property configures the "duration" for an event by selecting one of its attributes. If set, the engine will use the duration from the given attribute instead of the default event duration of 0. (See Drools Fusion for more information on this matter.)

  • Expires: this property configures the "time offset" for an event's expiration. If set, this value must be a temporal interval of the form [#d][#h][#m][#s][#[ms]], where [ ] means an optional parameter and # means a numeric value, e.g. 1d2h means one day and two hours. (See Drools Fusion for more information on this matter.)

  • Remotable: if checked, this property makes the Data Object available for use with jBPM remote services such as REST, JMS and WS. (See jBPM for more information on this matter.)

The Persistence domain editors manage the set of Data Object or Field properties related to persistence.

The persistence domain field editor manages the field-level persistence properties and is divided into three sections.


A persistable Data Object should have one and only one field defined as the Data Object identifier. The identifier is typically a unique number that distinguishes a given Data Object instance from all other instances of the same class.

When the field's type is a Data Object type, or a list of a Data Object type, a relationship type should be set in order to let the persistence provider manage the relation. Fortunately, this relation type is set automatically when such fields are added to a Data Object that is already marked as persistable. The relationship type is set by the following popup.


  • Relationship type: sets the type of relation, one of the following options:

    One to one: typically used for 1:1 relations where "A is related to one instance of B", and B exists only when A exists. e.g. PurchaseOrder -> PurchaseOrderHeader (a PurchaseOrderHeader exists only if the PurchaseOrder exists)

    One to many: typically used for 1:N relations where "A is related to N instances of B", and the related instances of B exist only when A exists. e.g. PurchaseOrder -> PurchaseOrderLine (a PurchaseOrderLine exists only if the PurchaseOrder exists)

    Many to one: typically used for 1:1 relations where "A is related to one instance of B", and B can exist even without A. e.g. PurchaseOrder -> Client (a Client can exist in the database even without an associated PurchaseOrder)

    Many to many: typically used for N:M relations where "A can be related to N instances of B, and B can be related to M instances of A at the same time", and both A and B instances can exist in the database independently of the related instances. e.g. Course -> Student (a Course can be related to N Students, and a given Student can attend M Courses)

    When a field of type "Data Object" is added to a given persistable Data Object, the "Many to One" relationship type is generated by default.

    And when a field of type "list of Data Object" is added to a given persistable Data Object, the "One to Many" relationship is generated by default.

  • Cascade mode: defines the set of cascadable operations that are propagated to the associated entity. The value cascade=ALL is equivalent to cascade={PERSIST, MERGE, REMOVE, REFRESH}. e.g. when A -> B and cascade "PERSIST or ALL" is set, if A is saved then B will also be saved.

    The default cascade mode created by the data modeller is "ALL", and it's strongly recommended to use this mode when Data Objects are being used by jBPM processes and forms.

  • Fetch mode: defines how related data will be fetched from the database at reading time.

    EAGER: related data will be read at the same time. e.g. if A -> B, when A is read from the database, B will be read at the same time.

    LAZY: reading of related data will be delayed, usually until the moment it is required. e.g. if PurchaseOrder -> PurchaseOrderLine, reading of the lines will be postponed until a method getLines() is invoked on a PurchaseOrder instance.

    The default fetch mode created by the data modeller is "EAGER", and it's strongly recommended to use this mode when Data Objects are being used by jBPM processes and forms.

  • Optional: establishes whether the right-side member of a relationship can be null.

  • Mapped by: used for reverse relations. (A sketch of the annotations these settings translate to follows this list.)
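
As a rough sketch of what these persistence settings translate to in the generated sources, using standard JPA annotations (the field names and relation below are hypothetical, in the style of the Purchase Order example):


// identifier field, as configured in the identifier section
@javax.persistence.Id
@javax.persistence.GeneratedValue
private java.lang.Long id;

// a "One to Many" relation with the recommended cascade ALL and fetch EAGER settings
@javax.persistence.OneToMany(cascade = javax.persistence.CascadeType.ALL, fetch = javax.persistence.FetchType.EAGER)
private java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines;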

The advanced domain enables the configuration of any parameter set by the other domains, as well as the adding of arbitrary parameters. As will be shown in the code generation section, every "Data Object / Field" parameter is represented by a Java annotation. The advanced mode enables the configuration of these annotations.

The advanced domain editor has the same shape for both Data Object and Field.


The following operations are available

The data model in itself is merely a visual tool that allows the user to define high-level data structures, for them to interact with the Drools Engine on the one hand, and the jBPM platform on the other. In order for this to become possible, these high-level visual structures have to be transformed into low-level artifacts that can effectively be consumed by these platforms. These artifacts are Java POJOs (Plain Old Java Objects), and they are generated every time the data model is saved, by pressing the "Save" button in the top Data Modeller menu. Additionally, when the user round-trips between the "Editor" and "Source" tabs, the code is auto-generated to keep it consistent with the Editor view and vice versa.


The resulting code is generated according to the following transformation rules:

  • The data object's identifier property will become the Java class's name. It therefore needs to be a valid Java identifier.

  • The data object's package property becomes the Java class's package declaration.

  • The data object's superclass property (if present) becomes the Java class's extension declaration.

  • The data object's label and description properties will translate into the Java annotations "@org.kie.api.definition.type.Label" and "@org.kie.api.definition.type.Description", respectively. These annotations are merely a way of preserving the associated information, and as yet are not processed any further.

  • The data object's role property (if present) will be translated into the "@org.kie.api.definition.type.Role" Java annotation, that IS interpreted by the application platform, in the sense that it marks this Java class as a Drools Event Fact-Type.

  • The data object's type safe property (if present) will be translated into the "@org.kie.api.definition.type.TypeSafe" Java annotation. (see Drools)

  • The data object's class reactive property (if present) will be translated into the "@org.kie.api.definition.type.ClassReactive" Java annotation. (see Drools)

  • The data object's property reactive property (if present) will be translated into the "@org.kie.api.definition.type.PropertyReactive" Java annotation. (see Drools)

  • The data object's timestamp property (if present) will be translated into the "@org.kie.api.definition.type.Timestamp" Java annotation. (see Drools)

  • The data object's duration property (if present) will be translated into the "@org.kie.api.definition.type.Duration" Java annotation. (see Drools)

  • The data object's expires property (if present) will be translated into the "@org.kie.api.definition.type.Expires" Java annotation. (see Drools)

  • The data object's remotable property (if present) will be translated into the "@org.kie.api.remote.Remotable" Java annotation. (see jBPM)

A standard Java default (or no parameter) constructor is generated, as well as a full parameter constructor, i.e. a constructor that accepts as parameters a value for each of the data object's user-defined fields.

The data object's user-defined fields are translated into Java class fields, each one of them with its own getter and setter method, according to the following transformation rules:

  • The data object field's identifier will become the Java field identifier. It therefore needs to be a valid Java identifier.

  • The data object field's type is directly translated into the Java class's field type. In case the field was declared to be multiple (i.e. 'List'), then the generated field is of the "java.util.List" type.

  • The equals property: when it is set for a specific field, this class property will be annotated with the "@org.kie.api.definition.type.Key" annotation, which is interpreted by the Drools Engine, and the field will 'participate' in the generated equals() method, which overrides the equals() method of the Object class. This implies that if the field is a 'primitive' type, the equals method will simply compare its value with the value of the corresponding field in another instance of the class. If the field is a sub-entity or a collection type, the equals method will delegate to the equals method of the corresponding data object's Java class, or of the java.util.List standard Java class, respectively.

    If the equals property is checked for ANY of the data object's user-defined fields, this also implies that, in addition to the default generated constructors, another constructor is generated, accepting as parameters all of the fields that were marked with Equals. Furthermore, generation of the equals() method also implies that the Object class's hashCode() method is overridden as well, in such a manner that it calls the hashCode() methods of the corresponding Java class types (be they 'primitive' or user-defined types) for all the fields that were marked with Equals in the Data Model.

  • The position property: this field property is automatically set for all user-defined fields, starting from 0 and incrementing by 1 for each subsequent new field. However, the user can freely change the position among the fields. At code generation time this property is translated into the "@org.kie.api.definition.type.Position" annotation, which can be interpreted by the Drools Engine. Also, the established position order determines the order of the constructor parameters in the generated Java class.

As an example, the generated Java class code for the Purchase Order data object, corresponding to its definition as shown in the figure above (purchase_example.jpg), is shown below. Note that two of the data object's fields, namely 'header' and 'lines', were marked with Equals, and have been assigned positions 2 and 1, respectively.




package org.jbpm.examples.purchases;

/**
 * This class was automatically generated by the data modeler tool.
 */
@org.kie.api.definition.type.Label("Purchase Order")
@org.kie.api.definition.type.TypeSafe(true)
@org.kie.api.definition.type.Role(org.kie.api.definition.type.Role.Type.EVENT)
@org.kie.api.definition.type.Expires("2d")
@org.kie.api.remote.Remotable
public class PurchaseOrder implements java.io.Serializable {

    static final long serialVersionUID = 1L;

    @org.kie.api.definition.type.Label("Total")
    @org.kie.api.definition.type.Position(3)
    private java.lang.Double total;

    @org.kie.api.definition.type.Label("Description")
    @org.kie.api.definition.type.Position(0)
    private java.lang.String description;

    @org.kie.api.definition.type.Label("Lines")
    @org.kie.api.definition.type.Position(2)
    @org.kie.api.definition.type.Key
    private java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines;

    @org.kie.api.definition.type.Label("Header")
    @org.kie.api.definition.type.Position(1)
    @org.kie.api.definition.type.Key
    private org.jbpm.examples.purchases.PurchaseOrderHeader header;

    @org.kie.api.definition.type.Position(4)
    private java.lang.Boolean requiresCFOApproval;

    public PurchaseOrder() {
    }

    public java.lang.Double getTotal() {
        return this.total;
    }

    public void setTotal(java.lang.Double total) {
        this.total = total;
    }

    public java.lang.String getDescription() {
        return this.description;
    }

    public void setDescription(java.lang.String description) {
        this.description = description;
    }

    public java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> getLines() {
        return this.lines;
    }

    public void setLines(java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines) {
        this.lines = lines;
    }

    public org.jbpm.examples.purchases.PurchaseOrderHeader getHeader() {
        return this.header;
    }

    public void setHeader(org.jbpm.examples.purchases.PurchaseOrderHeader header) {
        this.header = header;
    }

    public java.lang.Boolean getRequiresCFOApproval() {
        return this.requiresCFOApproval;
    }

    public void setRequiresCFOApproval(java.lang.Boolean requiresCFOApproval) {
        this.requiresCFOApproval = requiresCFOApproval;
    }

    public PurchaseOrder(java.lang.Double total, java.lang.String description,
            java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines,
            org.jbpm.examples.purchases.PurchaseOrderHeader header,
            java.lang.Boolean requiresCFOApproval) {
        this.total = total;
        this.description = description;
        this.lines = lines;
        this.header = header;
        this.requiresCFOApproval = requiresCFOApproval;
    }

    public PurchaseOrder(java.lang.String description,
            org.jbpm.examples.purchases.PurchaseOrderHeader header,
            java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines,
            java.lang.Double total, java.lang.Boolean requiresCFOApproval) {
        this.description = description;
        this.header = header;
        this.lines = lines;
        this.total = total;
        this.requiresCFOApproval = requiresCFOApproval;
    }

    public PurchaseOrder(
            java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines,
            org.jbpm.examples.purchases.PurchaseOrderHeader header) {
        this.lines = lines;
        this.header = header;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        org.jbpm.examples.purchases.PurchaseOrder that = (org.jbpm.examples.purchases.PurchaseOrder) o;
        if (lines != null ? !lines.equals(that.lines) : that.lines != null) return false;
        if (header != null ? !header.equals(that.header) : that.header != null) return false;
        return true;
    }

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + (lines != null ? lines.hashCode() : 0);
        result = 31 * result + (header != null ? header.hashCode() : 0);
        return result;
    }

}

  

Using an external model means the ability to use a set of already defined POJOs in the current project context. In order to make those POJOs available, a dependency on the given JAR should be added. Once the dependency has been added, the external POJOs can be referenced from the current project's data model.

There are two ways to add a dependency to an external JAR file:

The current version implements round-tripping and code preservation between the Data Modeller and Java source code. No matter where the Java code was generated (e.g. Eclipse, Data Modeller), the Data Modeller will only create/delete/update the necessary code elements to keep the model updated, i.e. fields, getters/setters, constructors, the equals method and the hashCode method. Also, any Type or Field annotation not managed by the Data Modeller will be preserved when the Java sources are updated by the Data Modeller.
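
For illustration (the custom annotation below is hypothetical), a field like the following keeps its unmanaged annotation across round-trips, while the managed elements stay under the Data Modeller's control:


@org.kie.api.definition.type.Label("Amount") // managed by the Data Modeller
@com.mycompany.audit.Sensitive // hypothetical custom annotation, preserved on round-trip
private java.lang.Double amount;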

Aside from code preservation, as in the other workbench editors, concurrent modification scenarios are still possible. Common scenarios are when two different users are updating the model for the same project, e.g. using the Data Modeller, or when a 'git push' command that modifies the project sources is executed.

From an application context's perspective, we can basically identify two different main scenarios:

A data set is basically a set of columns populated with some rows, a matrix of data composed of timestamps, texts and numbers. A data set can be stored in different systems: a database, an Excel file, in memory or in many other systems. On the other hand, a data set definition tells the workbench modules how such data can be accessed, read and parsed.

Notice it's very important to make clear the difference between a data set and its definition: the workbench does not take care of storing any data, it just provides a standard way to define access to those data sets regardless of where the data is stored.

Let's take for instance the data stored in a remote database. A valid data set could be, for example, an entire database table or the result of an SQL query. In both cases, the database will return a bunch of columns and rows. Now, imagine we want to get access to such data to feed some charts in a new workbench perspective. The first thing to do is create and register a data set definition in order to indicate the following:

This chapter introduces the available workbench tools for registering and handling data set definitions and how these definitions can be consumed in other workbench modules like, for instance, the Perspective Editor.

Clicking on the New Data Set button opens a new screen from which the user is able to create a new data set definition in three steps:

After clicking on the Test button (see previous step), the system executes a data set lookup test call in order to check if the remote system is up and the data is available. If everything goes OK, the user will see the following screen:


This screen shows a live data preview along with the columns the user wants to be part of the resulting data set. The user can also navigate through the data and apply some changes to the data set structure. Once finished, we can click on the Save button in order to register the new data set definition.

We can also change the configuration settings at any time just by going back to the configuration tab. We can repeat the Configuration>Test>Preview cycle as many times as needed until we consider it ready to be saved.

Columns

In the Columns tab area the user can select what columns are part of the resulting data set definition.


  • (1) To add or remove columns. Select only those columns you want to be part of the resulting data set

  • (2) Use the drop down image selector to change the column type

A data set may only contain columns of any of the following 4 types:

  • Label - For text values supporting group operations (similar to the SQL "group by" operator) which means you can perform data lookup calls and get one row per distinct value.

  • Text - For text values NOT supporting group operations. Typically for modeling large text columns such as abstracts, descriptions and the like.

  • Number - For numeric values. It supports aggregation functions on data lookup calls: sum, min, max, average, count, distinct.

  • Date - For date or timestamp values. It supports time-based group operations by different time intervals: minute, hour, day, month, year, ...

No matter which remote system you want to retrieve data from, the resulting data set will always return a set of columns of one of the four types above. There exists, by default, a mapping between the remote system's column types and the data set types. The user is able to modify the type of some columns, depending on the data provider and the column type of the remote system. The system supports the following changes to column types:

  • Label <> Text - Useful when we want to enable/disable the categorization (grouping) for the target column. For instance, imagine a database table called "document" containing a large text column called "abstract". As we do not want the system to treat such a column as a "label", we might change its column type to "text". Doing so, we are optimizing the way the system handles the data set.

  • Number <> Label - Useful when we want to treat numeric columns as labels. This can be used for instance to indicate that a given numeric column is not a numeric value that can be used in aggregation functions. Although its values are stored as numbers, we want to handle the column as a "label". Examples of such columns are an item's code, an appraisal id, ...

Note

BEAN data sets do not support changing column types, as it's up to the developer to decide the concrete type of each column.

Filter

A data set definition may define a filter. The goal of the filter is to leave out rows the user does not consider necessary. The filter feature works on any data provider type and lets the user apply filter operations on any of the available data set columns.


While adding or removing filter conditions and operations, the preview table in the central area is updated with live data that reflects the current filter status.

There exist two strategies for filtering data sets, and it's important to note that choosing between the two has important implications. Imagine a dashboard with some charts feeding from an expense reports data set, where such data set is built on top of an SQL table. Imagine also we only want to retrieve the expense reports from the "London" office. You may define a data set containing the filter "office=London" and then have several charts feeding from such data set. This is the recommended approach. Another option is to define a data set with no initial filter and then let the individual charts specify their own filter. It's up to the user to decide on the best approach.

Depending on the case it might be better to define the filter at the data set level, for reuse across other modules. The decision may also have an impact on performance, since a filtered cached data set will have far better performance than a lot of individual non-cached data set lookup requests. (See the next section for more information about caching data sets.)

Note

Notice that for SQL data sets the user can use either the filter feature introduced above or, alternatively, just add custom filter criteria to the SQL sentence. However, the first approach is more appropriate for non-technical users, since they might not have the required SQL language skills.

The system provides caching mechanisms out of the box for holding data sets and performing data operations using in-memory strategies. The use of these features brings a lot of advantages, like reducing the network traffic, the remote system payload, processing times, etc. On the other hand, it's up to the user to fine-tune the caching settings properly in order to avoid performance issues.

Two cache levels are supported:

The following diagram shows how caching is involved in any data set operation:


Any data look up call produces a resulting data set, so the use of the caching techniques determines where the data lookup calls are executed and where the resulting data set is located.

Client cache

If ON, then the data set involved in a lookup operation is pushed into the web browser so that all the components that feed from this data set do not need to perform any requests to the backend, since data set operations are resolved on the client side:

  • The data set is stored in the web browser's memory

  • The client components feed from the data set stored in the browser

  • Data set operations (grouping, aggregations, filters and sort) are processed within the web browser, by means of a JavaScript data set operation engine.

If you know beforehand that your data set will remain small, you can enable the client cache. It will reduce the number of backend requests, including the requests to the storage system. On the other hand, if you expect your data set to be quite big, disable the client cache to avoid browser issues such as slow performance or intermittent hangs.

Backend cache

Its goal is to provide a caching mechanism for data sets on the backend side.

This feature allows you to reduce the number of requests to the remote storage system by holding the data set in memory and performing group, filter and sort operations using the in-memory engine.

It's useful for data sets that do not change very often and whose size can be considered acceptable to be held and processed in memory. It can also be helpful when there are latency issues with the remote storage. On the other hand, if your data set is going to be updated frequently, it's better to disable the backend cache and perform the requests to the remote storage on each lookup request, so the storage system is in charge of resolving the data set lookup request.

Note

BEAN and CSV data providers rely on the backend cache by default, as in both cases the data set must always be loaded into memory in order to resolve any data lookup operation using the in-memory engine. This is the reason why the backend settings are not visible in the Advanced settings tab.

The refresh feature allows for the invalidation of any cached data when certain conditions are met.


  • (1) To enable or disable the refresh feature.

  • (2) To specify the refresh interval.

  • (3) To enable or disable data set invalidation when the data is outdated.

The data set refresh policy is tightly related to data set caching, detailed in the previous section. This invalidation mechanism determines the cache life-cycle.

Depending on the nature of the data there exist three main use cases:

  • Source data changes predictably - Imagine a database being updated every night. In that case, the suggested configuration is to use a "refresh interval = 1 day" and disable "refresh on stale data". That way, the system will invalidate the cached data set every day. This is the right configuration when we know in advance that the data is going to change.

  • Source data changes unpredictably - On the other hand, if we do not know whether the database is updated every day, the suggested configuration is to use a "refresh interval = 1 day" and enable "refresh on stale data". In that case the system, before invalidating any data, will check for modifications. On data modifications, the system will invalidate the current stale data set so that the cache is populated with fresh data on the next data set lookup call.

  • Real-time scenarios - In real-time scenarios caching makes no sense, as the data is updated constantly. In this kind of scenario the data sent to the client has to be constantly updated, so rather than enabling the refresh settings (remember these settings affect the caching, and caching is not enabled) it's up to the clients consuming the data set to decide when to refresh. When the client is a dashboard, it's just a matter of modifying the refresh settings in the Displayer Editor configuration screen and setting a proper refresh period, e.g. "refresh interval = 1 second".

There are 4 main processes which represent the stages of the Asset Management feature: Configure Repository, Promote Changes, Build and Release.

REST API calls to the Knowledge Store allow you to manage the Knowledge Store content and manipulate the static data in the repositories of the Knowledge Store. The calls are asynchronous, that is, they continue their execution as a job after the call has been performed. Every call returns a job ID that can be used to request the job status and verify whether the job finished successfully. Parameters of these calls are provided in the form of JSON entities.

When using Java code to interface with the REST API, the classes used in POST operations or otherwise returned by various operations can be found in the (org.kie.workbench.services:)kie-wb-common-services JAR. All of the classes mentioned below can be found in the org.kie.workbench.common.services.shared.rest package in that JAR.

Every Knowledge Store REST call returns its job ID after it was sent. This is necessary as the calls are asynchronous and you need to be able to reference the job to check its status as it goes through its lifecycle. During its lifecycle, a job can have the following statuses:

The following job calls are provided:

Repository calls are calls to the Knowledge Store that allow you to manage its Git repositories and their projects.

The following repositories calls are provided:

[GET] /repositories

Gets information about the repositories in the Knowledge Store

Returns a Collection<Map<String, String>> or Collection<RepositoryRequest> instance, depending on the JSON serialization library being used. The keys used in the Map<String, String> instance match the fields in the RepositoryRequest class
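
For illustration only, a minimal Java client for this call is sketched below, assuming the workbench is deployed at http://localhost:8080/business-central with the REST base path /rest and credentials admin/admin (all of these are assumptions to adapt to your installation):


import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ListRepositories {
    public static void main(String[] args) throws Exception {
        // base URL and credentials are assumptions; adapt to your installation
        URL url = new URL("http://localhost:8080/business-central/rest/repositories");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // the REST API is protected by basic authentication
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            reader.lines().forEach(System.out::println); // raw JSON describing the repositories
        }
    }
}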


[GET] /repositories/{repositoryName}

Gets information about a repository

Returns a Map<String, String> or RepositoryRequest instance, depending on the JSON serialization library being used. The keys used in the Map<String, String> instance match the fields in the RepositoryRequest class


[POST] /repositories

Creates a new empty repository or a new repository cloned from an existing (git) repository

Consumes a RepositoryRequest instance

Returns a CreateOrCloneRepositoryRequest instance


[DELETE] /repositories/{repositoryName}

Removes the repository from the Knowledge Store

Returns a RemoveRepositoryRequest instance

[POST] /repositories/{repositoryName}/projects/

Creates a project in the repository

Consumes an Entity instance

Returns a CreateProjectRequest instance


[DELETE] /repositories/{repositoryName}/projects/

Deletes the project in the repository

Returns a DeleteProjectRequest instance

[GET] /repositories/{repositoryName}/projects/

Gets information about the projects

Returns a Collection<Map<String, String>> or Collection<ProjectResponse> instance, depending on the JSON serialization library being used. The keys used in the Map<String, String> instance match the fields in the ProjectResponse class


Organizational unit calls are calls to the Knowledge Store that allow you to manage its organizational units, so as to organize the connected Git repositories.

The following organizationalUnits calls are provided:

[POST] /organizationalunits

Creates an organizational unit in the Knowledge Store

Consumes an OrganizationalUnit instance

Returns a CreateOrganizationalUnitRequest instance


[GET] /organizationalunits/{orgUnitName}

Gets information about an organizational unit

Returns an OrganizationalUnit instance


[POST] /organizationalunits/{orgUnitName}

Updates the organizational unit in the Knowledge Store

Consumes an UpdateOrganizationalUnit instance

Returns an UpdateOrganizationalUnitRequest instance


[DELETE] /organizationalunits/{organizationalUnitName}

Deletes an organizational unit

Returns a RemoveOrganizationalUnitRequest instance

[POST] /organizationalunits/{organizationalUnitName}/repositories/{repositoryName}

Adds the repository to the organizational unit

Returns an AddRepositoryToOrganizationalUnitRequest instance

[DELETE] /organizationalunits/{organizationalUnitName}/repositories/{repositoryName}

Removes the repository from the organizational unit

Returns a RemoveRepositoryFromOrganizationalUnitRequest instance

The URL templates in the table below are relative to the following URL:


The VFS repositories (usually git repositories) store all the assets (such as rules, decision tables, process definitions, forms, etc.). If that VFS resides on each local server, then it must be kept in sync between all servers of a cluster.

Use Apache Zookeeper and Apache Helix to accomplish this. Zookeeper glues all the parts together. Helix is the cluster management component that registers all cluster details (nodes, resources and the cluster itself). Uberfire (on top of which the Workbench is built) uses those two components to provide VFS clustering.

To create a VFS cluster:

  1. Download Apache Zookeeper and Apache Helix.

  2. Install both:

    1. Unzip Zookeeper into a directory ($ZOOKEEPER_HOME).

    2. In $ZOOKEEPER_HOME/conf, copy zoo_sample.cfg to zoo.cfg

    3. Edit zoo.cfg. Adjust the settings if needed. Usually only these 2 properties are relevant:

      # the directory where the snapshot is stored.
      dataDir=/tmp/zookeeper
      # the port at which the clients will connect
      clientPort=2181
    4. Unzip Helix into a directory ($HELIX_HOME).

  3. Configure the cluster in Zookeeper:

    1. Go to its bin directory:

      $ cd $ZOOKEEPER_HOME/bin
    2. Start the Zookeeper server:

      $ sudo ./zkServer.sh start

      If the server fails to start, verify that the dataDir (as specified in zoo.cfg) is accessible.

    3. To review Zookeeper's activities, open zookeeper.out:

      $ cat $ZOOKEEPER_HOME/bin/zookeeper.out
  4. Configure the cluster in Helix:

    1. Go to its bin directory:

      $ cd $HELIX_HOME/bin
    2. Create the cluster:

      $ ./helix-admin.sh --zkSvr localhost:2181 --addCluster kie-cluster

      The zkSvr value must match the used Zookeeper server. The cluster name (kie-cluster) can be changed as needed.

    3. Add nodes to the cluster:

      # Node 1
      $ ./helix-admin.sh --zkSvr localhost:2181 --addNode kie-cluster nodeOne:12345
      # Node 2
      $ ./helix-admin.sh --zkSvr localhost:2181 --addNode kie-cluster nodeTwo:12346
      ...

      Usually the number of nodes in a cluster equals the number of application servers in the cluster. The node names (nodeOne:12345, ...) can be changed as needed.

      Note

      nodeOne:12345 is the unique identifier of the node, which will be referenced later on when configuring application servers. It is not a host and port number, but instead it is used to uniquely identify the logical node.

    4. Add resources to the cluster:

      $ ./helix-admin.sh --zkSvr localhost:2181 --addResource kie-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE

      The resource name (vfs-repo) can be changed as needed.

    5. Rebalance the cluster to initialize it:

      $ ./helix-admin.sh --zkSvr localhost:2181 --rebalance kie-cluster vfs-repo 2
    6. Start the Helix controller to manage the cluster:

      $  ./run-helix-controller.sh --zkSvr localhost:2181 --cluster kie-cluster 2>&1 > /tmp/controller.log &
  5. Configure the security domain correctly on the application server. For example on WildFly and JBoss EAP:

    1. Edit the file $JBOSS_HOME/domain/configuration/domain.xml.

      For simplicity's sake, presume we use the default domain configuration, which uses the profile full and defines two server nodes as part of main-server-group.

    2. Locate the profile full and add a new security domain by copying the other security domain already defined there by default:

      <security-domain name="kie-ide" cache-type="default">
          <authentication>
               <login-module code="Remoting" flag="optional">
                   <module-option name="password-stacking" value="useFirstPass"/>
               </login-module>
               <login-module code="RealmDirect" flag="required">
                   <module-option name="password-stacking" value="useFirstPass"/>
               </login-module>
          </authentication>
      </security-domain>

      Important

      The security-domain name is a magic value.

  6. Configure the system properties for the cluster on the application server. For example on WildFly and JBoss EAP:

    1. Edit the file $JBOSS_HOME/domain/configuration/host.xml.

    2. Locate the XML elements server that belong to the main-server-group and add the necessary system property.

      For example for nodeOne:

      <system-properties>
        <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
        <property name="org.uberfire.nio.git.dir" value="/tmp/kie/nodeone" boot-time="false"/>
        <property name="org.uberfire.metadata.index.dir" value="/tmp/kie/nodeone" boot-time="false"/>
        <property name="org.uberfire.cluster.id" value="kie-cluster" boot-time="false"/>
        <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
        <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" boot-time="false"/>
        <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
        <!-- If you're running both nodes on the same machine: -->
        <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
      </system-properties>

      And for nodeTwo:

      <system-properties>
        <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
        <property name="org.uberfire.nio.git.dir" value="/tmp/kie/nodetwo" boot-time="false"/>
        <property name="org.uberfire.metadata.index.dir" value="/tmp/kie/nodetwo" boot-time="false"/>
        <property name="org.uberfire.cluster.id" value="kie-cluster" boot-time="false"/>
        <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
        <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" boot-time="false"/>
        <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
        <!-- If you're running both nodes on the same machine: -->
        <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
      </system-properties>

      Make sure the cluster, node and resource names match those configured in Helix.

Designer is a graphical web-based BPMN2 editor. It allows users to model and simulate executable BPMN2 processes. The main goal of Designer is to provide intuitive means for both technical and non-technical users to quickly create executable business processes. This chapter describes the features Designer currently offers.


Designer targets the following business process modelling scenarios:

  • View and/or edit existing BPMN2 processes: Designer allows you to open existing BPMN2 processes (for example created using the BPMN2 Eclipse editor or any other tooling that exports BPMN2 XML).

  • Create fully executable BPMN2 processes: A user can create a new BPMN2 process in the Designer and use the editing capabilities (drag and drop and filling in properties in the properties panel) to fill in the details. This, for example, allows business users to create complete business processes all inside a browser. The integration with Drools Guvnor allows your business processes, as well as other business assets such as business rules, process forms/images, etc., to be stored and versioned inside a content repository.

  • View and/or edit Human Task forms during process modelling (using the in-line form editor or the Form Modeller).

  • Simulate your business process models. Business Process Simulation is based on the BPSIM 1.0 specification.

Designer supports all BPMN2 elements that are also supported by jBPM as well as all jBPM-specific BPMN2 extension elements and attributes.

Designer UI is composed of a number of sections as shown below:


  • (1) Modelling Canvas - this is your process drawing board. After dropping different shapes onto the canvas, you can move them around, connect them, etc. Clicking on a shape on the canvas allows you to set its properties in the expandable Properties Window (3) (as well as create connecting shapes and morph the shape into other shapes).

  • (2) Toolbar - the toolbar contains a vast number of functions offered by Designer (described later). These include operations that can be performed on shapes present on the Canvas. Individual operations are disabled or enabled depending on what is selected. For example, if no shapes are selected, the Cut/Paste/Delete operations are disabled, and become enabled once you select a shape. Hovering over the icons in the Toolbar displays the description text of the operation.

  • (3) Properties Panel - this expandable section on the right side of Designer allows you to set both process and shape properties. It is divided into four expandable sections, namely "Core properties", "Extra Properties", "Graphical Settings", and "Simulation Properties". When clicking on a shape in the Canvas, this panel is reloaded to show properties specific to that shape type. If you click on the canvas itself (not on a shape), the section shows your general process properties.

  • (4) Object Repository Panel - the expandable section on the left side of Designer shows the jBPM BPMN2 (default) shape repository tree. It includes all shapes of the jBPM BPMN2 stencil set which can be used to assemble your processes. If you expand each section sub-group you can see the BPMN2 elements that can be placed onto the Designer Canvas (1) by dragging and dropping the shape onto it.

  • (5) View Tabs - currently Designer offers functionality tabs for Process Modelling and Simulation. Process Modelling is the default tab. When users run process simulation, its results are presented in the Simulation tab.

  • (6) Info Tabs - at the bottom, Designer shows two different Info tabs. The Business Process tab contains the process model itself, while the Metadata tab displays the process metadata, such as created-by and last-modified information.

The Object Repository panel provides the means for users to select and drag/drop BPMN2 shapes onto the modelling canvas. Shapes are divided into sections as shown below:


Once a shape is dropped onto the canvas, users have a much faster way of continuing modelling without having to go back to the Object Repository panel. This is realized through the shape morphing menu, which is presented when a shape on the drawing canvas is clicked. This menu allows users to either select a connecting shape (next shape) or morph the selected node into another node type. In addition, this menu includes means to store the shape name as a dictionary item (explained later), view the BPMN2 code of the selected shape, as well as create/edit the task form (in the case of user tasks only).


When connecting shapes, Designer applies connection rules that follow the BPMN2 specification. The morphing menu only presents shapes that are allowed as connections. The same rules are applied when dropping a shape from the Object Library onto the canvas and trying to connect an existing shape to it. Additional connection rules for boundary events are also available (explained later) and applied when, for example, moving an intermediate event node onto the edge of a task node.

Users can give names to every shape on the drawing canvas. This is done by double-clicking on the shape as shown below.


The name of a shape can be pulled from the Process Dictionary. If terms are set up in the dictionary, auto-complete can be used for the node names:


Designer also shows three buttons on top of a clicked shape as shown below.


These include:

  • (1) Add To Dictionary - this option allows users to add the name of the task to the Process Dictionary (explained in more detail later)

  • (2) Edit Task Form - allows users to create/edit the Task Form. This option is only available for User Tasks

  • (3) View shape sources - shows the BPMN2 for this particular shape only.

This section should get you started with creating simple business process models by dragging/dropping BPMN2 shapes onto the drawing canvas. The next sections dive deeper into many other aspects of Designer.

The Designer toolbar contains many different functions which can be used during process modelling.


We will now go through each of the buttons in the Designer Toolbar and give a brief overview of what it does.

(1) Save - allows users to save, copy, rename and delete the business process model. In addition users can turn on auto-save which will automatically save the business process within a defined time interval.


(2) Cut - enabled when a portion of the model is selected.

(3) Copy - enabled when a portion of the model is selected.

(4) Paste - paste the copied portion of the model onto the drawing board.

(5) Delete - enabled when a portion of the model is selected; removes it.

(6, 7) Undo/Redo - undo or redo the last operation performed on the drawing canvas.

(8) Local History - local history allows continuous storage of your business process in your browser's internal storage. Stored versions of the business process can survive internet outages or browser crashes, so your work will not be lost. This feature is disabled by default and must be enabled by users. Once local history has been enabled, users are able to view all previously stored snapshots of their business model, clear local history, configure the snapshot interval, or disable local history. Note that local history will only take a snapshot of your business process at the set storing interval if there were changes in the model. If at the end of the snapshot interval Designer detects that there were no changes since the last local history save, no new snapshot will be created.


The Local History results screen allows users to select a stored snapshot of the model, view its process image, and restore it back onto the drawing board.


(9) Object positioning - allows users to position one or more nodes in the business process. Note that at least one shape must be selected first, otherwise these options are disabled. Contains the options "Bring to Front", "Bring to Back", "Bring Forward", and "Bring Backward".

(10) Alignment: enabled when a portion of the model is selected. Includes options "Align Bottom", "Align Middle", "Align Top", "Align Left", "Align Center", "Align Right", and "Align Same Size".

(11, 12) Group and Ungroup - allows grouping and ungrouping of selected shapes on the drawing board.

(13, 14) Locking and Unlocking - allows parts of the business model to be locked and unlocked. Locked parts of the model cannot be edited (visual display and properties are both locked). Locked nodes are displayed in a light blue color. This feature fosters collaboration of process modelling by allowing users to set parts of their model as "completed" and preventing any further changes to that portion. Other parts of the model can continue to be edited.


(15, 16) Add/Remove Docker - this allows users to add or remove Dockers, or edge points, to sequence flows in the model. Enabled when a sequence flow (connector) is selected. It allows users to create very customized connection points from one shape to another. Users can add and remove as many dockers as they would like on a single sequence flow.


(17) Color Themes - colors are a big part of process modelling as they help with expressing intent and help visually impaired users to better view the model. Designer provides two default color themes out of the box, named "jBPM" and "High Contrast". The jBPM theme is the default theme used for all new business processes created. Users can switch color themes and the changes will be applied to all nodes currently on the model, as well as any new shapes added. Users can add new custom color themes by adding their own definitions in the Designer themes.json file. Color theme selection is persisted across browser close or possible crash/internet loss.



(18) Process and Task forms - here users have the ability to generate/edit process and task forms. When no user task is selected, the default enabled options are "Edit Process Form" and "Generate all Forms". Generate all Forms will apply the current model information, such as process variables, data objects, and the user tasks' data input/output parameters and associations, to generate default executable input forms. When editing a process or task form, users have the choice between two form editors: the jBPM Form Modeler and the Designer in-line meta editor. The Designer meta editor is targeted more at technical users as it is text based, with the ability for live preview. When the user selects a user task in the model, the "Edit Task Form" and "Generate Task Form" options are enabled, which allow users to edit the particular task form, or apply the same generation logic to create a task form for the selected task only. Users have the ability to extend the default form generation templates in Designer to create fully customized templates. Note that in the case of the Designer meta editor for forms, generating forms will overwrite existing forms for the process and user tasks. In the case of Form Modeler form generation, a merging algorithm is applied when generating.


When selecting a task, users have the ability to edit the selected tasks form via the form button shown above the user task node.


When editing forms, users are asked to choose between the Form Modeler and the Designer in-line meta editor. If the user selects Form Modeler the form is shown in a new asset tab separately from Designer. Designer meta editor is in-line and part of the Designer application.


The Designer in-line meta form editor is a powerful text-based editor with a live preview feature as well as auto-completion on process variables and user task data inputs/outputs.


(19) Process Information Sharing - this section includes many functions that help with sharing information of your model. These include:


(20) Extra tooling - this section allows users to import their existing BPMN2 processes into Designer, as well as migrate their old jPDL based processes to BPMN2. For BPMN2 or JSON imports, users can choose to add the import on top of the existing model on the drawing board, or choose to replace the current one with the import.




(21) Visual Validation - Designer includes over 100 validation checks and this list is growing. It allows users to view validation issues in real-time as they are modelling their business process. Users can enable visual validation, disable it, as well as view all validation issues at once. If Visual Validation is turned on, Designer will set the border of shapes that do not pass validation to red. Users can then click on that particular shape to view the validation issues for that shape only. Alternatively, "View All Issues" presents a combined list of all validation errors currently found. Note that you do not have to periodically save your business process in order for validation to update; it does so on its own at short intervals during modelling. Users can extend the list of validation issues to include their own types of validation on certain elements of their business model.





(22) Process Simulation - Business Process Simulation deals with statistical analysis of process models over time. Its main goals include:

Designer includes a powerful simulation engine, based on jBPM and Drools, and a graphical user interface to view and interpret simulation results. In addition, users are able to view all process paths included in their current model on the drawing board. Designer Process Simulation is based on the BPSim 1.0 specification. Details of the Process Simulation capabilities in Designer can be found in its Simulation documentation chapter. Here we just give a brief overview of all the features it contains.


When selecting Process Paths, the simulation engine finds all the possible paths in the business model. Users can choose any of the found paths and display them. The chosen path is marked with given colors, as shown below.


When selecting "Run Simulation", users have to enter in simulation runtime properties. These include the number of instances of this business process to simulate and the interval time and units. This interval is the time in-between consecutive simulation.


Each shape on the drawing board includes Simulation properties (in the properties panel) where users can set numerous simulation properties for that particular shape. More info on each of these properties can be found in the Simulation chapter of the documentation. Designer pre-sets some defaults for new processes, which allows business processes to be simulated by default without any modification of these properties. Note however that the results of the default settings may not be optimal or targeted to the user's particular needs.


Once the simulation runtime has completed, users are shown the simulation results in the "Simulation Results" tab of Designer. The results default to the process results. Users can switch to the results for each particular shape in their business process to see more specific details. In addition, the results contain process path simulation results for each path in the business process.


Designer simulation presents the users with many different chart types. These include:

The image below shows a number of possible chart types users can view after process simulation has completed.


In addition to the chart results, Designer simulation also offers a full timeline display that includes all details of what happened during simulation. This timeline allows users to navigate through each event that happened during process simulation and select a particular node to display results at that particular point in time.


The simulation timeline can be switched to the Model view. This view displays the process model with the currently selected node in the timeline highlighted. The highlighted node displays the simulation results at that particular point in time of the simulation.


Path execution results show a chart displaying the chosen path, as well as path instance execution details.


(23) Service Repository - this feature allows users to connect to an existing service tasks repository and install service tasks into their list of available shapes. More details can be found in the Service Repository chapter of the documentation. Users have to enter the URL of the existing service repository, and can then install the available service nodes by double-clicking on a particular result row.


(24) Full Screen Mode - allows users to place the drawing board of Designer into full-screen mode. This can help with better visualizing larger business processes without having to scroll. Note that this feature is only possible if your browser has full-screen mode capabilities. If it does not, Designer will show a message stating this to the user.


(25) Process Dictionary - the Designer Dictionary Editor allows users to create their own dictionary entries or harvest them from process documentation or business requirement documents. Process Dictionary entries can be used as auto-completion for shape names. This will be expanded in future versions to allow mapping of node patterns to specific dictionary entries as well. Users can add entries to the dictionary in the Dictionary Editor or directly from the selected shapes.



(26, 27, 28, 29) Zooming - zooming allows users to zoom in/out of the model, reset the zoom to the original setting, as well as zoom the process model on the drawing board to fit the current dimensions of the drawing board.

This chapter describes, in a simple way, all the steps required to create a process with human tasks, generate and modify the forms for these tasks, and execute them. It provides initial guidance to perform all the initial steps, but it does not provide a full description of all available features.

Given that forms are going to be used in tasks, it is possible to generate forms automatically from process variables and task definitions. These forms can later be modified using the form editor. At runtime, forms will receive data from the process variables, display it to the user and capture the user's input, and finally update the process variables with the new values.

The following example will show all the steps to follow to create a form for the 'Create order' task in the process below.


This form must look like the following in execution:


Once the forms have been generated, you can start editing them. There are several artifacts that are generated by the previous process, but they can also be created manually.

We can change the way the form is displayed to the user in the task list. Next, we will show the different levels of customization that allow changing it.

Each field can be configured to enhance its behavior in the form. There is a group of common properties, which we call 'Generic field properties', and a group of specific properties that depend on the field type.

Let's explain the specific properties of each field type:

  • Short Text (java.lang.String)

  • Long Text (java.lang.String)

    • Compatible field type: Long text, E-mail, Rich text

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Height: The number of rows to show in the text area.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in the Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Pattern. Allows introducing an expression to specify the validation of the field. If the entered field value does not match the expression, an error is thrown and the error message is shown.

      • Default Value formula. Expression to set the field default value.

  • Float (java.lang.Float)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in the Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Pattern. Allows introducing an expression to specify how the Float value has to be displayed. The allowed patterns are described in the pattern section of http://docs.oracle.com/javase/6/docs/api/java/text/DecimalFormat.html

      • Default Value formula. Expression to set the field default value.

  • Decimal (java.lang.Double)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Pattern. Allows introducing an expression to specify how the Double value has to be displayed. The allowed patterns are described in the pattern section of http://docs.oracle.com/javase/6/docs/api/java/text/DecimalFormat.html

      • Default Value formula. Expression to set the field default value.

  • BigDecimal (java.math.BigDecimal)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Pattern. Allows introducing an expression to specify how the BigDecimal value has to be displayed. The allowed patterns are described in the pattern section of http://docs.oracle.com/javase/6/docs/api/java/text/DecimalFormat.html

      • Default Value formula. Expression to set the field default value.

  • Big integer (java.math.BigInteger)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Default Value formula. Expression to set the field default value.

  • Short (java.lang.Short)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Default Value formula. Expression to set the field default value.

  • Integer (java.lang.Integer)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Default Value formula. Expression to set the field default value.

  • Long Integer (java.lang.Long)

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in Formula & expression section.

      • Range value. A range formula lets you specify the values that the user can select for a specific field. These expressions are described in the Formula & expression section.

      • Default Value formula. Expression to set the field default value.

  • E-mail (java.lang.String)

    • Compatible field type: Short text, Long text, Rich text

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Default Value formula. Expression to set the field default value.

  • Checkbox (java.lang.Boolean)

    • Specific properties

      • Required: Indicates if it’s mandatory to fill this field.

      • Default Value formula. Expression to set the field default value.

  • Rich text: (java.lang.String)

    • Compatible field type: Short text, Long text, E-mail

    • Specific properties

      • Size: input text length.

      • MaxLength: Maximum number of characters allowed.

      • Required: Indicates if it’s mandatory to fill this field.

      • Height: The number of rows to show in the text area.

      • Default Value formula. Expression to set the field default value.

  • Timestamp (java.util.Date)

    • Compatible field type: Short date

    • Specific properties

      • Size: input text length.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in the Formula & expression section.

      • Default Value formula. Expression to set the field default value.

  • Short date (java.util.Date)

    • Compatible field type: Timestamp

    • Specific properties

      • Size: input text length.

      • Required: Indicates if it’s mandatory to fill this field.

      • Formula. Used to enter expressions that will be evaluated to set the field value. These expressions are described in the Formula & expression section.

      • Default Value formula. Expression to set the field default value.

  • Document (org.jbpm.document.Document)

    • Specific properties

      • Required: Indicates if it’s mandatory to fill this field.

  • Simple subform (Object)

    • For more details see section Simple Object (Subform field type).

      Specific properties

      • Default form. Shows the list of available forms, from which to select the one that will be displayed to show the object.

  • Multiple subform (Multiple Object)

    • For more details see section Arrays of objects (Multiple subform field type).

      Specific properties

      • Default form. Shows the list of available forms, from which to select the one that will be displayed to show the object when no other form is configured for a specific purpose.

      • Preview form. If a form is specified, it will be used to show the item details.

      • Table form. If a form is specified, it will be used to show the table columns when the item list is shown.

      • New item text. Text to show on the New Item button.

      • Add item text. Text to show on the Add Item button.

      • Cancel text. Text to show on the Cancel button.

      • Allow remove items. If checked, the form allows removing items in the table view.

      • Allow edit items. If checked, the form allows editing items in the table view.

      • Allow preview items. If checked, the form allows previewing items in the table view.

      • Hide creation button. Check to hide the creation button.

      • Expanded. If checked, when a new item is being added the field displays the table with the existing items and the creation form at the same time.

      • Allow data entry in table mode. Allows modifying data directly in the table view.

There are two types of complex fields: fields representing an object, and fields representing an object array.

Once the field is added to the form, either automatically or manually, it must be configured so that the form knows how to display the objects it will contain at execution time.

Next we describe the configuration process:

Once the form to represent the object has been created, the parent form has to be configured to use it in the corresponding Subform or Multiple subform field.

Below we will describe how the setup would be:

Form Modeler provides a Formula Engine that you can use to automatically calculate field values. The Formula Engine supports Java and XPATH expressions to access the form field values. Let's see some examples.

There are three types of field types that you can use to model your form:

It is possible to extend the platform to add Custom Field Types that make a specific field (of any type) on the form look and behave totally different from the standard platform fields. In this section we will take a look at how to create and how to configure them.

Basically, a Custom Field Type is a Java class that implements the org.jbpm.formModeler.core.fieldTypes.CustomFieldType interface and is packaged inside a JAR file that is placed on the Application Server classpath or inside the application WAR.

Let's take a look at org.jbpm.formModeler.core.fieldTypes.CustomFieldType:


package org.jbpm.formModeler.core.fieldTypes;

import java.util.Locale;
import java.util.Map;

/**
* Definition interface for custom fields
*/
public interface CustomFieldType {
    /**
    * This method returns a text definition for the custom type. This text will be shown
    * on the UI to identify the CustomFieldType
    * @param locale The current user locale
    * @return A String that describes the field type on the specified locale.
    */
    public String getDescription(Locale locale);

    /**
    * This method returns a string that contains the HTML code that will be used to show
    * the field value on screen
    * @param value The current field value
    * @param fieldName The field name
    * @param namespace The unique id for the rendered form, it should be used to generate
    * identifiers inside the HTML code.
    * @param required Determines if the field is required or not
    * @param readonly Determines if the field must be shown on read only mode
    * @param params A list of configuration params that can be set on the field
    * configuration screen
    * @return The HTML that will be used to show the field value
    */
    public String getShowHTML(Object value, String fieldName, String namespace,
              boolean required, boolean readonly, String... params);

    /**
    * This method returns a String that contains the HTML code that will show the input
    * view of the field. That will be used to set the field value.
    * @param value The current field value
    * @param fieldName The field name
    * @param namespace The unique id for the rendered form, it should be used to
    * generate identifiers inside the HTML code.
    * @param required Determines if the field is required or not
    * @param readonly Determines if the field must be shown on read only mode
    * @param params A list of configuration params that can be set on the field
    * configuration screen
    * @return The HTML code that will be used to show the input view of the field.
    */
    public String getInputHTML(Object value, String fieldName, String namespace,
              boolean required, boolean readonly, String... params);

    /**
    * This method is used to obtain the field value from the submitted values.
    * @param requestParameters A Map containing the request parameters for the
    * submitted form
    * @param requestFiles A Map containing the java.io.Files uploaded on the request
    * @param fieldName The field name
    * @param namespace The unique id for the rendered form, it should be used to generate
    * identifiers inside the HTML code.
    * @param previousValue The previous value of the current field
    * @param required Determines if the field is required or not
    * @param readonly Determines if the field must be shown on read only mode
    * @param params A list of configuration params that can be set on the field
    * configuration screen
    * @return The value of the field based on the submitted form values.
    */
    public Object getValue(Map requestParameters, Map requestFiles, String fieldName,
              String namespace, Object previousValue, boolean required, boolean readonly,
              String... params);
}
  

As you can see, this interface defines the methods that determine how the field has to be shown on the screen when the form is displayed in insert (getInputHTML(...)) or read-only (getShowHTML(...)) mode. It also provides the method (getValue(...)) that reads the needed parameters from the request to obtain the correct field value. The returned value type must match the type of the field added on the form.
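To make this more concrete, below is a minimal sketch of an implementation. The class name and the plain text input it renders are illustrative only, and reading request parameters as String arrays is an assumption based on the usual servlet convention; a real field type, such as the File Input used in the next example, would render richer HTML:

package org.jbpm.formModeler.sample;

import java.util.Locale;
import java.util.Map;

import org.jbpm.formModeler.core.fieldTypes.CustomFieldType;

/**
 * Illustrative custom field type that renders a plain text input.
 */
public class SampleTextFieldType implements CustomFieldType {

    public String getDescription(Locale locale) {
        // Shown in the field configuration UI to identify this custom type
        return "Sample text input";
    }

    public String getShowHTML(Object value, String fieldName, String namespace,
              boolean required, boolean readonly, String... params) {
        // Read-only view: simply print the current value
        return value == null ? "" : value.toString();
    }

    public String getInputHTML(Object value, String fieldName, String namespace,
              boolean required, boolean readonly, String... params) {
        // Input view: the namespace is used to build a unique element id
        String id = namespace + "_" + fieldName;
        String current = value == null ? "" : value.toString();
        return "<input type='text' id='" + id + "' name='" + id + "' value='"
                + current + "'" + (readonly ? " disabled='disabled'" : "") + "/>";
    }

    public Object getValue(Map requestParameters, Map requestFiles, String fieldName,
              String namespace, Object previousValue, boolean required, boolean readonly,
              String... params) {
        // Recover the submitted value using the same unique id built above;
        // request parameter values are assumed to be String arrays
        Object submitted = requestParameters.get(namespace + "_" + fieldName);
        if (submitted instanceof String[] && ((String[]) submitted).length > 0) {
            return ((String[]) submitted)[0];
        }
        return previousValue;
    }
}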

Now let's see how to configure and use a Custom Field Type. Following the example in the previous chapter, we have created a File Input type and already have it installed in our application. So now we are going to create a new form, add a Short Text property, and turn it into a File Input by editing the field properties and changing the Field Type from Short text to Custom field.


After changing the field type a new set of properties will appear:



So opening the Custom field select box we'll be able to select the File Input from the available custom types:


After selecting the File Input type from the list and saving the field properties, the form will look like this:


If we build a simple process and configure a Short text field to be shown as the sample File Input, at runtime the field will upload the chosen files to the server and allow the user to download them like this:



If we take a look at the process variable value, we'll see that it stores a String with the file path on the server.


In this section we describe, step by step, how to attach documents to your process variables from your forms, and how you can configure where the uploaded documents are stored (File System, Database, Alfresco...) using the Pluggable Variable Persistence.

In order to store the document using the Pluggable Variable Persistence you'll have to define your Marshalling Strategy to manage the uploaded Documents. To start create a Maven project with your favourite IDE and add the following dependencies:


<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-api</artifactId>
  <version>{version}</version>
</dependency>
<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-document</artifactId>
  <version>{version}</version>
</dependency>
    

Once you have done that, it is time to create your Document Marshalling Strategy; to do so you just have to create a class that extends:


package org.jbpm.document.marshalling;

public abstract class AbstractDocumentMarshallingStrategy implements ObjectMarshallingStrategy {

  public abstract Document buildDocument( String name, long size, Date lastModified, Map<String, String> params );

  public void write( ObjectOutputStream os, Object object )
        throws IOException;

  public Object read( ObjectInputStream os )
        throws IOException, ClassNotFoundException;

  public byte[] marshal( Context context, ObjectOutputStream os, Object object )
        throws IOException;

  public Object unmarshal( Context context, ObjectInputStream is, byte[] object,
        ClassLoader classloader ) throws IOException, ClassNotFoundException;

  public Context createContext();
}
    

The methods to implement are:

You can see how the default DocumentMarshallingStrategy is implemented by looking at this link.
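As a rough sketch, a custom strategy could start out like the class below. It assumes, as the snippet above suggests, that buildDocument is the only method that must be implemented, and it uses the DocumentImpl class from the jbpm-document module to hold the metadata; where and how the content is actually stored is left out, and is exactly the part your own strategy would customize (consult the default implementation linked above for the real persistence code):

package org.jbpm.sample.marshalling;

import java.util.Date;
import java.util.Map;

import org.jbpm.document.Document;
import org.jbpm.document.marshalling.AbstractDocumentMarshallingStrategy;
import org.jbpm.document.service.impl.DocumentImpl;

/**
 * Illustrative Document Marshalling Strategy: only document creation is
 * sketched here; deciding where the content is stored (file system,
 * database, Alfresco, ...) is the part you would implement yourself.
 */
public class SampleDocumentMarshallingStrategy extends AbstractDocumentMarshallingStrategy {

    public Document buildDocument(String name, long size, Date lastModified,
                                  Map<String, String> params) {
        // Build the Document instance that will back the process variable
        Document document = new DocumentImpl();
        document.setName(name);
        document.setSize(size);
        document.setLastModified(lastModified);
        return document;
    }
}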

After creating your Document Marshalling Strategy and adding it to your server classpath, the only thing remaining is to configure your project's deployment descriptor to add it to the marshalling strategies list. To do that, open the KIE Workbench in your browser, open your project in the Authoring view, edit the kie-deployment-descriptor.xml file located in <yourproject>/src/main/resources/META-INF, and add your Document Marshalling Strategy to the <marshalling-strategies> list like this:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deployment-descriptor
    xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <persistence-unit>org.jbpm.domain</persistence-unit>
  <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
  <audit-mode>JPA</audit-mode>
  <persistence-mode>JPA</persistence-mode>
  <runtime-strategy>SINGLETON</runtime-strategy>
  <marshalling-strategies>
    <marshalling-strategy>
      <resolver>reflection</resolver>
      <identifier>
        org.jbpm.document.marshalling.DocumentMarshallingStrategy
      </identifier>
    </marshalling-strategy>
  </marshalling-strategies>
  <event-listeners/>
  <task-event-listeners/>
  <globals/>
  <work-item-handlers/>
  <environment-entries/>
  <configurations/>
  <required-roles/>
</deployment-descriptor>
    

Once this is done, you will be able to build your project and upload documents in your process.



Note

In this example we are configuring the default DocumentMarshallingStrategy; use it for test and demo purposes only.

This chapter describes how you can embed process forms and interact with them in another webapp using the new JavaScript API provided by the platform.

You can find the library inside the kie-wb-*.war, in the js file located at js/jbpm-forms-rest-integration.js.

This JavaScript API provides a simple mechanism to use forms in remote applications, allowing you to load forms from different KIE Workbench instances, submit them, launch processes/tasks and execute callback functions when the actions are done.

The basic methods are:

  • showStartProcessForm(hostUrl, deploymentId, processId, divId, onsuccessCallback, onerrorCallback): Makes a call to the REST endpoint to obtain the form URL and, if it gets a valid response, embeds the process start form in the given div. The parameters needed are:

  • startProcess(divId, onsuccessCallback, onerrorCallback): Submits the form loaded in the given div and starts the process. The parameters needed are:

  • showTaskForm(hostUrl, taskId, divId, onsuccessCallback, onerrorCallback): Makes a call to the REST endpoint to obtain the form URL and, if it gets a valid response, embeds the task form in the given div. The parameters needed are:

  • claimTask(divId, onsuccessCallback, onerrorCallback): Claims the task whose form is being rendered.

  • startTask(divId, onsuccessCallback, onerrorCallback): Starts the task whose form is being rendered.

  • releaseTask(divId, onsuccessCallback, onerrorCallback): Releases the task whose form is being rendered.

  • saveTask(divId, onsuccessCallback, onerrorCallback): Submits the form and saves the state of the task whose form is being rendered.

  • completeTask(divId, onsuccessCallback, onerrorCallback): Submits the form and completes the task whose form is being rendered.

  • clearContainer(divId): Cleans the div content and the related data stored in the component.

Now let's see an example of how you can use the library to load the HR process form and start a new process instance. We are going to define an HTML page that contains some very simple components:

First, let's look at the HTML code:


<head>
  <script src="js/jbpm-forms-rest-integration.js"></script>
  <script>
      var formsAPI = new jBPMFormsAPI();
  </script>
</head>
<body>
  <input type="button" id="showformButton"
      value="Show Process Form" onclick="showProcessForm()">
  <p/>
  <div id="myform" style="border: solid black 1px; width: 500px; height: 200px;">
  </div>
  <p/>
  <input type="button" id="startprocessButton"
      style="display: none;" value="Start Process" onclick="startProcess()">
</body>
    

Notice that first we have added the js library and then created an instance of the jBPMFormsAPI object that will manage the form rendering.

Now let's see what the showProcessForm() function looks like:


function showProcessForm() {
  var onsuccessCallback = function(response) {
    document.getElementById("showformButton").style.display = "none";
    document.getElementById("startprocessButton").style.display = "block";
  }

  var onerrorCallback = function(errorMessage) {
    alert("Unable to load the form, something wrong happened: " + errorMessage);
    formsAPI.clearContainer("myform");
  }
  formsAPI.showStartProcessForm("http://localhost:8080/kie-wb/", "org.jbpm:HR:1.0", "hiring", "myform", onsuccessCallback, onerrorCallback);
}
    

As you can see, first we are defining the callback functions:

Once we have defined the callback functions, we call formsAPI.showStartProcessForm(...), which makes the REST call and embeds the form inside the specified div. Notice that we are providing a bunch of information in order to load the form: the URL where the KIE Workbench is running (in this example "http://localhost:8080/kie-wb/"), the deployment where the process is located ("org.jbpm:HR:1.0"), the process id ("hiring"), the DIV id that is going to contain the form ("myform") and the callback functions (onsuccessCallback and onerrorCallback).

Now let's take a look at startProcess(), the function that submits the form and starts the process:


function startProcess() {
  var onsuccessCallback = function(response) {
    document.getElementById("showformButton").style.display = "block";
    document.getElementById("startprocessButton").style.display = "none";
    formsAPI.clearContainer("myform");
    alert(response);
  }

  var onerrorCallback = function(response) {
    document.getElementById("showformButton").style.display = "block";
    document.getElementById("startprocessButton").style.display = "none";
    formsAPI.clearContainer("myform");
    alert("Unable to start the process, something wrong happened: " + response);
  }
  formsAPI.startProcess("myform", onsuccessCallback, onerrorCallback);
}
    

As with showProcessForm(), first we define the callback functions. Both do basically the same:

Once that is done we just call formsAPI.startProcess(...), which sends a message to the component that renders the form inside the "myform" DIV and executes the callback functions when the action is done. Notice that we don't need to provide any information other than the DIV that contains the form and, optionally, the callback functions.

With simple code like this you'll be able to run process/task forms located on different KIE Workbench instances from any other application.
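The task form methods follow the same pattern. As an illustrative sketch (the task id 1 is hypothetical and would normally come from a task list query), a page could render and complete a task form like this:

function showTaskForm() {
  var onsuccessCallback = function(response) {
    // The task form is now embedded inside the "myform" div
  }

  var onerrorCallback = function(errorMessage) {
    alert("Unable to load the task form: " + errorMessage);
    formsAPI.clearContainer("myform");
  }
  formsAPI.showTaskForm("http://localhost:8080/kie-wb/", 1, "myform", onsuccessCallback, onerrorCallback);
}

function completeTask() {
  var onsuccessCallback = function(response) {
    formsAPI.clearContainer("myform");
    alert(response);
  }

  var onerrorCallback = function(response) {
    alert("Unable to complete the task: " + response);
  }
  // If the task is only reserved, startTask(...) can be called first
  formsAPI.completeTask("myform", onsuccessCallback, onerrorCallback);
}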




In version 5.x, processes were stored in so-called packages produced by Guvnor and then downloaded by the jbpm console for execution using KnowledgeAgent. Alternatively, one could drop process files (bpmn2 files) into a predefined directory that was scanned when the jbpm console started. That was it. This forced users to always use Guvnor when dynamic deployment was needed. Although there was nothing wrong with that (it was in fact the recommended approach), it was not always desired.

Version 6, on the other hand, moves away from proprietary packages in favor of the well-known and mature Apache Maven based packaging, known as knowledge archives, or kjars. Processes, rules, etc. (aka business assets) are now part of a simple jar file built and managed by Maven. Alongside the business assets, Java classes and other file types are stored in the jar file too. Moreover, like any other Maven artifact, a kjar can have dependencies on other artifacts, including other kjars. What makes the kjar special compared with regular jars is a single descriptor file kept inside the META-INF directory of the kjar: kmodule.xml. That descriptor allows you to define:

By default, this descriptor is empty (just the kmodule root element) and is considered a marker file. Whenever a runtime component (such as jbpm console) is about to process a kjar, it looks up kmodule.xml to build its runtime representation. In addition to kmodule.xml, a deployment descriptor (that provides fine grained control over deployment) is available since version 6.1.
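For reference, an empty marker kmodule.xml contains just the root element:

<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>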

While kmodule mainly targets basic knowledge base and knowledge session configuration, deployment descriptors are considered more technical configuration. The following items are available for configuration via deployment descriptors:

The deployment descriptor is an XML file placed inside the META-INF folder of the kjar; it is optional, and deployments will succeed even when the descriptor is missing.


<deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <persistence-unit>org.jbpm.domain</persistence-unit>
    <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
    <audit-mode>JPA</audit-mode>
    <persistence-mode>JPA</persistence-mode>
    <runtime-strategy>PER_PROCESS_INSTANCE</runtime-strategy>
    <marshalling-strategies/>
    <event-listeners/>
    <task-event-listeners/>
    <globals/>
    <work-item-handlers/>
    <environment-entries/>
    <configurations/>
    <required-roles/>
    <remoteable-classes/>
    <limit-serialization-classes/>
</deployment-descriptor>

It provides more configuration options than the standard deployment has. Deployment descriptors are used in a hierarchical way, meaning they can be placed at various levels of the system and merged at runtime. jBPM supports the following levels of deployment descriptors:

Deployment descriptors on different levels are merged at deployment time, where the descriptor lower in the hierarchy is considered the master and the one higher in the hierarchy the slave. To give an example, when a kjar that contains a deployment descriptor is deployed, the kjar's deployment descriptor is considered the slave and the server level descriptor is considered the master. With the default merge mode, all master entries are overridden by the slave ones as long as they are not empty, and all collections are combined.

Since a kjar can have dependencies on other kjars, and in turn those dependencies might have deployment descriptors as well, those descriptors are placed in the deployment descriptor hierarchy lower than the actual kjar being deployed. With that said, this is how it looks from a hierarchy point of view, starting with the master (server level):

In the default merge mode this results in a deployment descriptor with the non-empty values from the kjar's deployment descriptor and the merged collections from all levels.

So far all merging was done with the default mode, MERGE_COLLECTIONS, but that is not the only mode available:

Default deployment descriptor

There is always a default deployment descriptor available, even if it was not explicitly configured. When running in jbpm console (kie-workbench) the default values are as follows:

The default deployment descriptor can be altered by specifying a valid URL location of an XML file that provides a fully defined deployment descriptor. By fully defined we mean that all elements should be specified, as this deployment descriptor will become the server level deployment descriptor.

-Dorg.kie.deployment.desc.location=file:/my/custom/location/deployment-descriptor.xml

Collection configuration items

The deployment descriptor consists of collection based items (event listeners, work item handlers, globals, etc.) that usually require the definition of an object to be created at runtime. There are two types of collection based configuration items:

The object model consists of:


Depending on the resolver type, creation or lookup of the object will be performed. The default (and easiest) is reflection, which uses both the parameters and the identifier (in this case the fully qualified class name) to construct the object. Parameters in this case can be a String, or another object model to represent types other than String. Following is an example of an object model that creates an instance of org.jbpm.test.CustomStrategy using the reflection resolver, calling a constructor of that class that takes two String parameters. Note that the String parameters are created in different ways: the first using an object model, the second directly as a String.
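<marshalling-strategy>
  <resolver>reflection</resolver>
  <identifier>org.jbpm.test.CustomStrategy</identifier>
  <parameters>
    <parameter xsi:type="objectModel">
      <resolver>reflection</resolver>
      <identifier>java.lang.String</identifier>
      <parameters>
        <parameter xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">param1</parameter>
      </parameters>
    </parameter>
    <parameter xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">param2</parameter>
  </parameters>
</marshalling-strategy>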


The same can be done by using the DeploymentDescriptor fluent API:

// create an instance of DeploymentDescriptor with the default persistence unit name
DeploymentDescriptor descriptor = new DeploymentDescriptorImpl("org.jbpm.domain");

// get the builder and modify the descriptor
descriptor.getBuilder()
    .addMarshalingStrategy(new ObjectModel("org.jbpm.test.CustomStrategy",
            new Object[]{
                new ObjectModel("java.lang.String", new Object[]{"param1"}),
                "param2"}));

The reflection based object model resolver is the most verbose when parameters are involved, but there are a few parameters available out of the box that do not need to be created; they are simply referenced by name:

So, to be able to use one of these, it is enough to reference it by name and make sure the proper object type is used within your class:

...
<marshalling-strategy>
  <resolver>reflection</resolver>
  <identifier>org.jbpm.test.CustomStrategy</identifier>
  <parameters>
     <parameter xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">runtimeManager</parameter>
   </parameters>
</marshalling-strategy>
...

In case the reflection based resolver is not enough, a more advanced resolver can be used that utilizes the power of the MVEL language. It is much easier to configure, as it expects an MVEL expression as the identifier of the object model. It provides the out-of-the-box parameters (listed above: runtime manager, kie session, etc.) in the MVEL context while evaluating the expression. To define an object model with the MVEL resolver, use the following XML (equivalent to the reflection based one above):

...
<marshalling-strategy>
  <resolver>mvel</resolver>
  <identifier>new org.jbpm.test.CustomStrategy(runtimeManager)</identifier>
</marshalling-strategy>
...

Last but not least, there is a Spring based resolver available as well that allows you to simply look up a bean by its identifier in the Spring application context. This resolver is not used in jbpm console (kie-workbench), as it does not use Spring, but whenever jBPM is used together with Spring it might become handy when deploying kjars into the runtime. The definition in XML is very simple, again equivalent to the other ones, assuming org.jbpm.test.CustomStrategy is registered in the Spring application context under the customStrategy id.

...
<marshalling-strategy>
  <resolver>spring</resolver>
  <identifier>customStrategy</identifier>
</marshalling-strategy>
...

Manage deployment descriptor

The deployment descriptor is created as soon as the project is created. It contains the most basic deployment descriptor, based on the default one, meaning all settings present in the default deployment descriptor are copied into the one placed in the project. Further changes can be made directly in the XML content (in future versions a more user friendly editor will most likely be provided). It is accessible from the Administration perspective, as this is considered a technical administration task rather than a business related activity.

Restrict access to runtime engine

jbpm console (kie-workbench) provides access restriction to repositories that can be configured with a supplementary tool called kie-config-cli. This protects repositories in the authoring perspective based on role membership. Deployment descriptors move this capability to the runtime engine by ensuring that access to processes will be granted only to users that belong to the groups defined in the deployment descriptor as required roles. By default, when a project is created (and the deployment descriptor is created at the same time), the required roles are automatically filled in based on the repository restrictions. These roles can still be altered by editing the deployment descriptor via the Administration perspective, as presented in the Manage deployment descriptor section.

Security is enforced on two levels:

Required roles are defined as simple strings that should match actual roles defined in the security realm. The following XML snippet shows the definition of required roles in a deployment descriptor:

<deployment-descriptor>
...
    <required-roles>
        <required-role>experts</required-role>
    </required-roles>
...
</deployment-descriptor>

In case fine grained control is required, the defined roles can be prefixed with one of the following to control access at a further level:

For example, to restrict viewing processes from a given kjar to the group 'management', while still allowing them to be executed by anyone (a sort of system process), one could define it as follows:


<deployment-descriptor>
...
   <required-roles>
      <required-role>view:management</required-role>
   </required-roles>
...
</deployment-descriptor>

Classes used for serialization in the remote services

When processes make use of custom types (or, in general, non-primitive types) and there is a use case that includes remote API invocations (REST, SOAP, JMS), such types must be available to the remote services marshalling mechanism, which is based on JAXB for XML types. By default all types defined in the kjar will be automatically included in the JAXB context and will therefore be available for remote interaction. There might, however, be more classes (such as those from a dependent model) that should be included there too.

Upon deployment, jBPM will scan the classpath of the given kjar to automatically register classes that might be needed for remote interaction. This is done based on the following rules:

If that is not enough, the deployment descriptor allows you to manually specify classes to be added to the JAXB context via the remoteable-classes element:

<remoteable-classes>
   ...
   <remotable-class>org.jbpm.test.CustomClass</remotable-class>
   <remotable-class>org.jbpm.test.AnotherCustomClass</remotable-class>
   ...
</remoteable-classes>

With this, all these classes will be added to the JAXB context so that data types are properly marshalled and unmarshalled when interacting with jBPM remotely.

Limiting classes used for serialization in the remote services

When there are classes in the kjar project, or in the dependencies of the kjar project, that would cause problems when used for serialization, the limit-serialization-classes property can be used to limit which classes are used for serialization:

<limit-serialization-classes>true</limit-serialization-classes>

This property limits the classes used for serialization to those which fulfill both of the following "location" and "annotation" criteria:

Classes that:

These classes must also be annotated with one of the following type annotations:

Additionally, classes will be excluded if they are any of the following: interfaces, local classes, member classes or anonymous classes.
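As an illustrative sketch (the class and package are hypothetical, and @XmlRootElement is assumed to be among the accepted JAXB annotations), a custom type eligible for remote serialization could look like this:

package org.jbpm.test;

import java.io.Serializable;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical custom type used as a process variable; the JAXB annotation
// makes it a candidate for the remote services serialization context.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class CustomClass implements Serializable {

    private String value;

    public CustomClass() {
        // JAXB requires a public no-arg constructor
    }

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }
}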

This chapter describes the screens related to the creation and management of process definitions and process instances.

Once you have modelled and configured all the technical details, and built and deployed the projects containing your business processes, you should be able to see all the available process definitions in the Process Definition List. For all the process definitions listed there you will be able to inspect the Process Definition details and start as many Process Instances as needed. The following sections describe the features provided by all the screens in charge of the manipulation of process definitions and process instances. You can find these screens under the Process Management menu, in the jBPM Console NG or in KIE Workbench.

You can find the source code related to process definition and instance manipulation inside this module: http://github.com/droolsjbpm/jbpm-console-ng/tree/master/jbpm-console-ng-process-runtime. Feel free to report issues, send Pull Requests and get in contact with the team via comments on GitHub.

The process definition section is composed of two main screens: the Process Definition List and the Process Definition Details.

The process instances section is composed of two main screens: the Process Instance List and the Process Instance Details. In this case the Process Instance Details provides several tabs with the runtime information related to the process.

This list works with the concept of a view. A view is a set of visualization parameters that define which items have to be displayed and how their details have to be shown.

The screen is divided into different areas with different purposes: filtering, general section configuration, and the specific view parameter settings for the data grid presentation:

This chapter introduces the Task Management screens and their integration with the Form Modeller component, which allows users to work on their assigned tasks. You can find the source code of these screens here: https://github.com/droolsjbpm/jbpm-console-ng/tree/master/jbpm-console-ng-human-tasks. Feel free to report issues, send Pull Requests and get in contact with the team via comments on GitHub. At the end of this section you will find a technical description of how to customize these views.

Every user with access to the platform will have access to a personal task list where the tasks assigned to him/her are displayed. Each user will be able to create his/her own personal tasks or work on tasks that were created as a result of a business process execution.

You can access the Task List from the Tasks main menu:

Pending tasks for each user will be displayed in their task list screen. Notice that you will not be able to see tasks assigned to users other than the one that is currently logged in.

The jBPM Process Dashboard is a specific use case of a dashboard fed with data coming from a relational database via SQL queries. In this case, the database tables consumed are processinstancelog and bamtasksummary, both belonging to the jBPM engine.
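Since the dashboard is fed by plain SQL, the same data can be inspected directly with JDBC. The following is a minimal sketch, assuming a DataSource pointing at the jBPM database; the processid and status column names on the processinstancelog table are assumptions here, so verify them against your actual schema.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class ProcessInstanceLogQuery {

    // Counts process instances per definition and status straight from the
    // engine's audit table that feeds the dashboard.
    // Assumption: processinstancelog exposes processid and status columns.
    public static void printInstanceCounts(DataSource jbpmDataSource) throws Exception {
        String sql = "SELECT processid, status, COUNT(*) AS total "
                   + "FROM processinstancelog GROUP BY processid, status";
        try (Connection con = jbpmDataSource.getConnection();
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s [status=%d]: %d instances%n",
                        rs.getString("processid"), rs.getInt("status"), rs.getLong("total"));
            }
        }
    }
}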

Every time the jBPM runtime updates the information stored in those tables, the data automatically becomes available to the dashboard indicators. The following picture shows the main screen that users get when navigating to the Process & Task Dashboard.


Note

Notice that those are generic metrics not tied to any specific business process. Nonetheless, it's worth mentioning that it would be very easy for customers to modify, extend or adapt this generic dashboard for custom needs. A customer could take the jBPM Process Dashboard as the base template for building a custom dashboard which mixes data coming from the jBPM engine with data coming from its own business domain.

As you can see, there are two tabs at the top of the screen: Processes and Tasks. As their names indicate, each tab contains only indicators related to either processes or tasks.

To filter through the data, users can click on the charts in order to select, for instance, a given process or a given status. Every time a filter is applied, all the indicators are automatically updated and synced according to the criteria set. The next picture shows, for instance, what happens when both the process Sales and the status Active are selected.


Using the built-in filter features is a good way to select the process instances the users want to look into. Additionally, at any time, no matter whether there is an active filter or not, users can also navigate to the actual list of instances the dashboard indicators are showing. The Show Instances link at the top right of the screen can be used to display those instances. Once clicked, the view switches to the screen shown in the next picture:


From this view, users can sort the instances just by clicking on any column. They can get a detailed view of a particular instance just by clicking on the desired row as well.


The process instance details panel is shown on the right of the screen just after clicking on a row. Notice this is a read-only view, just for monitoring purposes. After identifying a target process instance, the next step is to use the jBPM Process Instance Console in case the user needs to manage that process instance.

To sum up, the jBPM Process & Task Dashboard lets users monitor generic process and task metrics, apply filters to drill down into the data, and navigate to the list and details of the matching process instances.

The workbench contains an execution server (for executing processes and tasks), which also allows you to invoke various process and task related operations through a remote API. As a result, you can set up your process engine "as a service" and integrate it into your applications easily by doing remote requests and/or sending the necessary triggers to the execution server whenever necessary (without the need to embed or manage it as part of your application).

Both REST and JMS based services are available (which you can use directly), and a Java remote client allows you to invoke these operations using the existing KieSession and TaskService interfaces (the same ones you use for local interaction), making remote integration as easy as if you were interacting with a local process engine.

The Remote Java API provides KieSession, TaskService and AuditService interfaces to the JMS and REST APIs.

The interface implementations provided by the Remote Java API take care of the underlying logic needed to communicate with the JMS or REST APIs. In other words, these implementations will allow you to interact with a remote workbench instance (i.e. KIE workbench or the jBPM Console) via known interfaces such as the KieSession or TaskService interface, without having to deal with the underlying transport and serialization details.

The first step in interacting with the remote runtime is to use one of the RemoteRuntimeEngineFactory static methods newRestBuilder(), newJmsBuilder() or newCommandWebServiceClientBuilder() to create a builder instance. Use the builder instance to configure and create a RuntimeEngine instance with which to interact with the server.

Each of the REST, JMS or WebService RemoteClientBuilder instances exposes different methods that allow the configuration of properties like the base URL of the REST API, JMS queue locations, or the timeout while waiting for responses.

The Remote Java API provides clients, not "instances"

While the KieSession, TaskService and AuditService instances provided by the Remote Java API may "look" and "feel" like local instances of the same interfaces, please make sure to remember that these instances are only wrappers around a REST or JMS client that interacts with a remote REST or JMS API.

This means that if a requested operation fails on the server, the Remote Java API client instance on the client side will throw a RuntimeException indicating that the REST call failed. This is different from the behaviour of a "real" (or local) instance of a KieSession, TaskService or AuditService, because the exception a local instance throws will relate to how the operation failed. Operations on a Remote Java API client instance that would normally throw other exceptions (such as the TaskService.claim(taskId, userId) operation when called by a user who is not a potential owner) will instead throw a RuntimeException when the requested operation fails on the server.

Also, while local instances require different handling (such as having to dispose of a KieSession), client instances provided by the Remote Java API hold no state and thus do not require any special handling.

Finally, the instances returned by the client KieSession and TaskService instances (for example, process instances or task summaries) are not the same (internal) objects as used by the core engines. Instead, these returned instances are simple data transfer objects (DTOs) that implement the same interfaces but are designed to only return the associated data. Modifying or casting these returned instances to an internal implementation class will not succeed.
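For example, error handling around a client instance boils down to catching RuntimeException. The following is a minimal sketch, assuming a client RuntimeEngine named engine, placeholder taskId and userId values, and an SLF4J-style logger as used in the examples below.

TaskService taskService = engine.getTaskService();
try {
    // On a client instance, a failed operation surfaces as a RuntimeException
    // signalling that the remote call failed, not as a task-specific exception.
    taskService.claim(taskId, userId);
} catch (RuntimeException e) {
    // e.g. the REST call failed, or userId is not a potential owner of the task
    logger.error("Remote claim of task " + taskId + " failed", e);
}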

Each builder has a number of different (required or optional) methods to configure a client RuntimeEngine instance.

The following example illustrates how the Remote Java API can be used with the REST API.

public void startProcessAndHandleTaskViaRestRemoteJavaAPI(URL serverRestUrl, String deploymentId, String user, String password) {
    // the serverRestUrl should contain a URL similar to "http://localhost:8080/jbpm-console/"

    // Setup the factory class with the necessary information to communicate with the REST services
    RuntimeEngine engine = RemoteRuntimeEngineFactory.newRestBuilder()
            .addUrl(serverRestUrl)
            .addTimeout(5)
            .addDeploymentId(deploymentId)
            .addUserName(user)
            .addPassword(password)
            // if you're sending custom class parameters, make sure that
            // the remote client instance knows about them!
            .addExtraJaxbClasses(MyType.class)
            .build();

    // Create KieSession and TaskService instances and use them
    KieSession ksession = engine.getKieSession();
    TaskService taskService = engine.getTaskService();

    // Each operation on a KieSession, TaskService or AuditService (client) instance
    // sends a request for the operation to the server side and waits for the response.
    // If something goes wrong on the server side, the client will throw an exception.
    Map<String, Object> params = new HashMap<String, Object>();
    params.put("paramName", new MyType("name", 23));
    ProcessInstance processInstance =
            ksession.startProcess("com.burns.reactor.maintenance.cycle", params);
    long procId = processInstance.getId();

    String taskUserId = user;
    List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner(user, "en-UK");

    long taskId = -1;
    for (TaskSummary task : tasks) {
        if (task.getProcessInstanceId() == procId) {
            taskId = task.getId();
        }
    }

    if (taskId == -1) {
        throw new IllegalStateException("Unable to find task for " + user +
                " in process instance " + procId);
    }

    taskService.start(taskId, taskUserId);

    // resultData can also just be null
    Map<String, Object> resultData = new HashMap<String, Object>();
    taskService.complete(taskId, taskUserId, resultData);
}

When configuring the remote JMS client, you must choose one of several ways to configure the JMS connection: via a remote InitialContext, via the JBoss EAP server host name, or by passing the ConnectionFactory and Queue instances directly (see the builder methods below).

In addition, if you are doing an operation on a task via the remote JMS client (and are not using the disableTaskSecurity() method), then you must also configure SSL. The following methods (described in more detail below) are available for this: addKeystoreLocation(), addKeystorePassword(), addTruststoreLocation(), addTruststorePassword() and useKeystoreAsTruststore().

Each builder has a number of different (required or optional) methods to configure a client RuntimeEngine instance.

Remote JMS Runtime Engine Builder methods

addConnectionFactory(ConnectionFactory connectionFactory) (required for some configurations)

Add a ConnectionFactory used to create JMS sessions to send and receive messages.

addDeploymentId(String deploymentId) (required for some configurations)

Set the deployment id of the deployment.

addExtraJaxbClasses(Class... extraJaxbClasses) (required for some configurations)

Add extra classes to the client for when user-defined class instances are passed as parameters to client methods. When passing instances of user-defined classes in a Remote Java API call, it is important to use this method first to add the classes so that the class instances can be serialized correctly.

addHostName(String hostname) (required for some configurations)

Set the host name of the server that the client is making a JMS connection with.

addJbossServerHostName(String hostname)

Set the host name of the JBoss EAP server that the client is making a JMS connection with. After using this method, no other configuration is needed with regards to the server. However, additional server parameters (host name, port) may be needed when also configuring SSL. Make sure that the EAP version-appropriate org.jboss.naming:jboss-naming dependency is available on the classpath when doing this.

addJmsConnectorPort(int port) (required for some configurations)

Set the port used for the JMS connection with the server.

addKeystoreLocation(String keystorePath) (required for some configurations)

Set the location (path) of the keystore.

addKeystorePassword(String keystorePassword) (required for some configurations)

Set the password for the keystore.

addKieSessionQueue(Queue ksessionQueue) (required for some configurations)

Pass the javax.jms.Queue instance representing the KIE.SESSION queue used to receive process instance requests from the client.

addPassword(String password) (always required)

Set the password of the user connecting to the server.

addProcessInstanceId(long process) (required for some configurations)

Set the process instance id of the deployment.

addRemoteInitialContext(InitialContext remoteInitialContext)

Set the remote InitialContext instance from the remote application server, which is then used to retrieve the ConnectionFactory and Queue instances. After using this method, no other configuration is needed with regards to the server. However, additional server parameters (host name, port) may be needed when also configuring SSL.

addResponseQueue(Queue responseQueue) (required for some configurations)

Pass the javax.jms.Queue instance representing the KIE.RESPONSE queue used to send responses to the client.

addTaskServiceQueue(Queue taskServiceQueue) (required for some configurations)

Pass the javax.jms.Queue instance representing the KIE.TASK queue used to receive task operation requests from the client.

addTimeout(int timeoutInSeconds)

Set the timeout for the JMS message. The default is 5 seconds.

addTruststoreLocation(String truststorePath) (required for some configurations)

Set the location (path) of the truststore.

addTruststorePassword(String truststorePassword) (required for some configurations)

Set the password for the truststore.

addUserName(String userName) (always required)

Set the name of the user connecting to the server. This is also the user whose permissions will be used when doing any task operations.

clearJaxbClasses()

Clear the list of (user-defined) classes that the client instance should know about.

disableTaskSecurity()

Enables configuration of a client that can send task-related operation requests without having to use SSL. If this option is not used and a task operation is called, the client will throw an exception.

useKeystoreAsTruststore()

Uses the same file (location) and password for the truststore configuration when configuring SSL.
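For instance, a minimal non-SSL configuration against a JBoss EAP server could look like the following sketch; deploymentId, user and password are placeholders, and disableTaskSecurity() is used so that no keystore or truststore configuration is needed.

// Minimal JMS configuration: the JBoss server host name is enough for the
// client to look up the ConnectionFactory and Queue instances remotely.
RuntimeEngine engine = RemoteRuntimeEngineFactory.newJmsBuilder()
        .addDeploymentId(deploymentId)       // always required
        .addJbossServerHostName("localhost") // server to connect to
        .addUserName(user)                   // always required
        .addPassword(password)               // always required
        .disableTaskSecurity()               // allow task operations without SSL
        .build();

KieSession ksession = engine.getKieSession();
TaskService taskService = engine.getTaskService();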

The following example illustrates how to configure a Remote Java API JMS client using a remote InitialContext instance along with SSL. In this case, the same file is being used as both the client’s keystore (the client’s identifying keys and certificates) and as the client’s truststore (the client’s list of trusted certificates from other parties, in this case, the server).

public void startProcessAndHandleTaskViaJmsRemoteJavaAPISsl(String hostNameOrIpAddress, int jmsSslConnectorPort,
        String deploymentId, String user, String password,
        String keyTrustStoreLocation, String keyTrustStorePassword,
        String processId) {

    InitialContext remoteInitialContext = getRemoteInitialContext();

    RuntimeEngine engine = RemoteRuntimeEngineFactory.newJmsBuilder()
            .addDeploymentId(deploymentId)
            .addRemoteInitialContext(remoteInitialContext)
            .addUserName(user)
            .addPassword(password)
            .addHostName(hostNameOrIpAddress)
            .addJmsConnectorPort(jmsSslConnectorPort)
            .useKeystoreAsTruststore()
            .addKeystoreLocation(keyTrustStoreLocation)
            .addKeystorePassword(keyTrustStorePassword)
            .build();

    // create JMS request
    KieSession ksession = engine.getKieSession();
    ProcessInstance processInstance = ksession.startProcess(processId);
    long procInstId = processInstance.getId();
    logger.debug("Started process instance: " + procInstId);

    TaskService taskService = engine.getTaskService();
    List<TaskSummary> taskSumList =
            taskService.getTasksAssignedAsPotentialOwner(user, "en-UK");
    TaskSummary taskSum = null;
    for (TaskSummary taskSumElem : taskSumList) {
        if (taskSumElem.getProcessInstanceId().equals(procInstId)) {
            taskSum = taskSumElem;
        }
    }
    if (taskSum == null) {
        throw new IllegalStateException("Unable to find task for " + user
                + " in process instance " + procInstId);
    }
    long taskId = taskSum.getId();
    logger.debug("Found task " + taskId);

    // get other info from task if you want to
    Task task = taskService.getTaskById(taskId);
    logger.debug("Retrieved task " + taskId);

    taskService.start(taskId, user);

    Map<String, Object> resultData = new HashMap<String, Object>();
    // insert results for task in resultData
    taskService.complete(taskId, user, resultData);
    logger.debug("Completed task " + taskId);
}

Starting with this release, a simple web service has been added to the remote API.

As mentioned above, the Remote Java API provides client-like instances of the RuntimeEngine, KieSession, TaskService and AuditService interfaces.

This means that while many of the methods in those interfaces are available, some are not. The following tables list the methods that are available; methods not listed below will throw an UnsupportedOperationException explaining that the called method is not available.
REST API calls to the execution server allow you to remotely manage processes and tasks and retrieve various dynamic information from the execution server. The majority of the calls are synchronous, which means that the call only returns once the requested operation has completed on the server. The exceptions to this are the deployment POST calls, which return the status of the request while the actual operation requested executes asynchronously.

When using Java code to interface with the REST API, the classes used in POST operations or otherwise returned by various operations can be found in the (org.kie.remote:)kie-services-client JAR.
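To make the shape of these calls concrete, the following sketch issues a raw start call using only JDK classes (no remote client library). The server URL, deployment id, process definition id and credentials are placeholders; the /rest/runtime/{deploymentId}/process/{processDefId}/start URL is the one documented below.

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class RestStartProcessExample {

    // Starts a process instance by POSTing to the .../process/{processDefId}/start URL,
    // authenticating with HTTP Basic auth (the same user/password as for the workbench).
    public static void startProcess(String serverUrl, String deploymentId,
                                    String processDefId, String user, String password) throws Exception {
        URL url = new URL(serverUrl + "/rest/runtime/" + deploymentId
                + "/process/" + processDefId + "/start");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes("UTF-8"));
        con.setRequestProperty("Authorization", "Basic " + auth);
        con.setRequestProperty("Accept", "application/xml");

        int status = con.getResponseCode();
        InputStream body = status < 400 ? con.getInputStream() : con.getErrorStream();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(body, "UTF-8"))) {
            // The response is an XML representation of the started process instance
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}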

As of the community 6.2.0.Final release, users now need to be assigned one of the following roles in order to be able to access REST URLs:


Specifically, the roles give the user access to the following URLs:

Table 17.8. REST permission roles

rest-all

All REST URLs are available with this role.

rest-project

Only a limited subset of URLs is available with this role.

rest-deployment

Only a limited subset of URLs is available with this role.

rest-process

In short, only URLs that start with one of the following are available with this role:

.../rest/runtime/deployment/{deploymentId}/*
.../rest/history/*

rest-process-read-only

In short, only GET URLs that start with one of the following are available with this role:

.../rest/runtime/deployment/{deploymentId}/*
.../rest/history/*

rest-task

In short, only URLs that start with the following are available with this role:

.../rest/task/*

rest-task-read-only

In short, only GET URLs that start with the following are available with this role:

.../rest/task/*

rest-query

Only a limited subset of URLs is available with this role.

rest-client

Only the URL used by the Java remote API to communicate with the server is available with this role. Use of this URL without the Java remote API code is not recommended!


This section lists the REST calls that interface with process instances.

The deploymentId component of the REST calls below must conform to the following regular expression:

[\w\.-]+(:[\w\.-]+){2,2}(:[\w\.-]*){0,2}

For more information about the composition of the deployment id, see the Deployment Calls section.
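Client code can validate a deployment id against this expression before issuing a call. A minimal sketch follows; the class and method names are hypothetical, and the pattern is the expression above with backslashes escaped for a Java string literal.

import java.util.regex.Pattern;

public class DeploymentIdCheck {

    // The regular expression from the documentation above.
    private static final Pattern DEPLOYMENT_ID =
            Pattern.compile("[\\w\\.-]+(:[\\w\\.-]+){2,2}(:[\\w\\.-]*){0,2}");

    public static boolean isValid(String deploymentId) {
        return DEPLOYMENT_ID.matcher(deploymentId).matches();
    }

    public static void main(String[] args) {
        // groupId:artifactId:version is the typical GAV-style deployment id
        System.out.println(isValid("org.jbpm:HR:1.0"));     // true
        System.out.println(isValid("not-a-deployment-id")); // false
    }
}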

[POST] /runtime/{deploymentId}/process/{processDefId}/start

[GET] /runtime/{deploymentId}/process/{processDefId}/startform

[GET] /runtime/{deploymentId}/process/instance/{procInstId}

[POST] /runtime/{deploymentId}/process/instance/{procInstId}/abort

[POST] /runtime/{deploymentId}/process/instance/{procInstId}/signal

[GET] /runtime/{deploymentId}/process/instance/{procInstId}/variable/{varName}

[POST] /runtime/{deploymentId}/signal

[GET] /runtime/{deploymentId}/workitem/{workItemId}

[POST] /runtime/{deploymentId}/workitem/{workItemId}/complete

[POST] /runtime/{deploymentId}/workitem/{workItemId}/abort

[POST] /history/clear

[GET] /history/instances

[GET] /history/instance/{procInstId}

[GET] /history/instance/{procInstId}/child

[GET] /history/instance/{procInstId}/node

[GET] /history/instance/{procInstId}/variable

[GET] /history/instance/{procInstId}/node/{nodeId}

[GET] /history/instance/{procInstId}/variable/{varId}

[GET] /history/process/{processDefId}

[GET] /history/variable/{varId}

[GET] /history/variable/{varId}/value/{value}

[GET] /history/variable/{varId}/instances

[GET] /history/variable/{varId}/value/{value}/instances

[GET] /runtime/{deploymentId}/history/variable/{varId}

[GET] /runtime/{deploymentId}/history/variable/{varId}/value/{value}

[GET] /runtime/{deploymentId}/history/variable/{varId}/instances

[GET] /runtime/{deploymentId}/history/variable/{varId}/value/{value}/instances

The following section describes the three different types of task calls:

* Task REST operations that mirror the TaskService interface, allowing the user to interact with the remote TaskService instance
* The Task query REST operation, which allows users to query for Task instances
* Other Task REST operations that retrieve information

Task operation authorizations: Task REST operations use the user information (used to authorize and authenticate the HTTP call) to check whether or not the requested operations can happen. This also applies to REST calls that retrieve information, such as the task query operation. REST calls that request information will only return information about tasks that the user is allowed to see.

With regards to retrieving information, only users associated with a task may retrieve information about the task. However, the authorization of progress and other modifications of task information is more complex. See the Task Permissions section in the Task Service documentation for more information.

Note

Given that many users have expressed the wish for a "super-task-user" that can execute task REST operations on all tasks, regardless of the users associated with the task, there are now plans to implement that feature. However, so far for the 6.x releases, this feature is not available.

All of the task operation calls described in this section use the user (id) used in the REST basic authorization as input for the user parameter in the specific call.

Some of the operations take an optional language query parameter. If this parameter is not given as an element of the URL itself, the default value of "en-UK" is used.

The taskId component of the REST calls below must conform to the following regex:

[0-9]+

[POST] /task/{taskId}/activate

[POST] /task/{taskId}/claim

[POST] /task/{taskId}/claimnextavailable

[POST] /task/{taskId}/complete - Completes a task. Returns a JaxbGenericResponse with the status of the operation. Parameters: takes map query parameters, which are the "results" input for the complete operation (see the URL-building sketch after this list).

[POST] /task/{taskId}/delegate

[POST] /task/{taskId}/exit

[POST] /task/{taskId}/fail

[POST] /task/{taskId}/forward

[POST] /task/{taskId}/nominate

[POST] /task/{taskId}/release

[POST] /task/{taskId}/resume

[POST] /task/{taskId}/skip

[POST] /task/{taskId}/start

[POST] /task/{taskId}
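As referenced in the complete operation above, map-style query parameters are appended to the URL. The following sketch shows one way to build such a URL; the map_ parameter-name prefix is an assumption here, so check the map parameter naming convention of your server version before relying on it.

import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class TaskCompleteUrlBuilder {

    // Builds a .../task/{taskId}/complete URL with the "results" map passed
    // as query parameters. Assumption: each map entry is sent as map_<key>=<value>.
    public static String buildCompleteUrl(String serverUrl, long taskId,
                                          Map<String, String> results) throws Exception {
        StringBuilder url = new StringBuilder(serverUrl)
                .append("/rest/task/").append(taskId).append("/complete");
        char sep = '?';
        for (Map.Entry<String, String> entry : results.entrySet()) {
            url.append(sep)
               .append("map_").append(URLEncoder.encode(entry.getKey(), "UTF-8"))
               .append('=')
               .append(URLEncoder.encode(entry.getValue(), "UTF-8"));
            sep = '&';
        }
        return url.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> results = new LinkedHashMap<String, String>();
        results.put("outcome", "approved");
        System.out.println(buildCompleteUrl("http://localhost:8080/jbpm-console", 123, results));
        // -> http://localhost:8080/jbpm-console/rest/task/123/complete?map_outcome=approved
    }
}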