piątek, 22 listopada 2013

jBPM 6 first steps

This post gives a very quick introduction to taking your first steps in jBPM 6, using the web tooling exclusively to build:

  • processes
  • rules
  • process and task forms
  • data model
With just three simple examples you will see how quickly and easily you can get started with BPM. So let's start.

The simplest process

The first process illustrates how to move around in the KIE workbench web application, showing where to:
  • create repository
  • create project
  • configure the KnowledgeBase and KnowledgeSession
  • create process
  • build and deploy
  • execute process and work with user task



Custom data and forms

Next, let's explore further and move on to slightly more advanced features:

  • building a custom data model that will be used as a process variable
  • making use of process variables in user tasks
  • defining custom forms for processes and tasks
  • editing and adjusting your process and task forms


Make use of business rules and decisions in your process

At the end, let's make the process more efficient by applying business rules in the process and then using gateways as decision points. This example introduces:

  • using a business rule task
  • defining business rules with Drools
  • using an XOR gateway to split between different paths in the process


It is important to note that the business rule task can automatically insert and retract process variables as facts, using the data inputs and outputs of the business rule task. When defining them, make sure that the corresponding data input and output are named exactly the same, so the engine can properly retract the facts on business rule task completion.

That would be all for the first steps with jBPM 6. Stay tuned for more examples and videos!

As usual, comments are more than welcome.


poniedziałek, 7 października 2013

jBPM 6 workshops

I would like to announce some workshops on the upcoming jBPM version 6, where you can get some insight into what's in it for you.

There are currently two workshops scheduled:

  • 12th of October in Warsaw, Poland
  • 23rd-24th of October in London, UK

The workshop in Poland will be held as part of the Warsjava 2013 conference, where besides "Introduction to jBPM 6" there is a lot more to be found. The conference is by default in Polish, but if there are any non-Polish-speaking attendees there won't be any issue running it in English. I'll be giving the jBPM 6 presentation and workshop at this year's Warsjava.

The workshops in London will obviously be in English, and there will be plenty of opportunities to learn about development in the projects. They will be presented by Mauricio "Salaboy" Salatino and Michael Anstis, so it's an event you can't miss.

Please take a look at the content and register, as places are limited.

wtorek, 1 października 2013

jBPM empowers Magnolia CMS

I am glad to report that Magnolia CMS uses jBPM 5 as its default workflow engine in version 5. Just two weeks ago I had the pleasure of talking about jBPM (both v5 and v6) at the Magnolia conference in Basel, Switzerland. It was a great event that I recommend to everyone interested in CMS.

Together with Espen from the Magnolia team, we gave a really nice presentation about both jBPM and the Magnolia Workflow that utilizes it.

Here you can find the presentation, and as soon as the recording is available I'll link it here as well.

Stay tuned for more updates and information about jBPM :)

poniedziałek, 2 września 2013

jBPM6 samples with RuntimeManager

jBPM6 introduces a new module, jbpm-runtime-manager, that aims to significantly simplify management of:

  • KnowledgeBase (KieBase)
  • KnowledgeSession (KieSession)
Moreover, it provides predefined strategies for handling knowledge sessions and their relation to process instances. By default jBPM6 comes with three strategies:
  • Singleton - a single knowledge session executes all process instances
  • Per Request - every request (in fact, every call to getRuntimeEngine) gets a new knowledge session
  • Per Process Instance - every process instance has its own dedicated knowledge session for its entire lifetime
To make use of a strategy, it's enough to create the proper type of RuntimeManager. jBPM6 allows you to obtain a RuntimeManager instance in various ways; this article provides hands-on information on how that can be achieved.
With jBPM6 a whole new way of building applications has been provided: Contexts and Dependency Injection (CDI) is now available for users to build applications and bring the power of jBPM to the next level. Obviously CDI is not the only way to make use of jBPM; the regular API-based approach is still available and fully functional.
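The difference between the three strategies can be made concrete with a small self-contained sketch. Note that the classes below are hypothetical stand-ins written for illustration only, not the real jBPM RuntimeManager API; in the real API you simply create the corresponding RuntimeManager type and it applies the strategy for you.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a knowledge session; only object identity matters for this sketch.
class Session {
    private static final AtomicLong COUNTER = new AtomicLong();
    final long id = COUNTER.incrementAndGet();
}

// Sketch of how each strategy maps a request to a knowledge session.
class SessionStrategies {
    private final Session singleton = new Session();
    private final Map<Long, Session> perProcessInstance = new HashMap<>();

    // Singleton: every caller shares the one and only session.
    Session singleton() { return singleton; }

    // Per Request: every call gets a brand new session.
    Session perRequest() { return new Session(); }

    // Per Process Instance: one dedicated session per process instance id,
    // kept for the entire lifetime of that process instance.
    Session perProcessInstance(long processInstanceId) {
        return perProcessInstance.computeIfAbsent(processInstanceId, id -> new Session());
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        SessionStrategies s = new SessionStrategies();
        System.out.println(s.singleton() == s.singleton());                     // true
        System.out.println(s.perRequest() == s.perRequest());                   // false
        System.out.println(s.perProcessInstance(1) == s.perProcessInstance(1)); // true
        System.out.println(s.perProcessInstance(1) == s.perProcessInstance(2)); // false
    }
}
```

The trade-off sketched here is the real one: a singleton session is simplest to manage, per-request sessions are fully isolated and disposable, and per-process-instance sessions keep state tied to exactly one process instance.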

CDI integration

The jBPM6 tooling (like jbpm console or kie-wb) is built on CDI, and thanks to that a set of services has been provided to ease the development of custom CDI-based applications. These services are bundled as part of jbpm-kie-services and provide a compact solution for most of the operations required to put BPM into your application:
  • deployment service - deploys and undeploys units (kjars) into the process engine
  • runtime service - gives access to the state of the process engine, such as retrieving process definitions, process instances, history log, etc.
  • bpmn2 data service - gives access to details of the process definition taken from the BPMN2 XML
  • form provider service - gives access to forms for processes and tasks
So whenever a custom application is built with CDI, these services are the recommended way to get the most out of CDI and jBPM. Moreover, it's also the safest way, as the same services are used in the jBPM tooling and have therefore received a fair amount of testing to ensure they do what is expected.
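For illustration only, here is a tiny in-memory sketch of the deployment service idea; the interface and method names are hypothetical stand-ins, not the actual jbpm-kie-services API. The point it shows is the lifecycle the service manages: units can be deployed and undeployed at runtime, without restarting anything.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical stand-in mirroring the role of the deployment service described
// above; the real interface ships with jbpm-kie-services and differs in detail.
interface DeploymentService {
    void deploy(String deploymentId);   // deploymentId identifies a kjar unit
    void undeploy(String deploymentId);
    boolean isDeployed(String deploymentId);
}

// Minimal in-memory implementation, just to show the deploy/undeploy lifecycle.
class InMemoryDeploymentService implements DeploymentService {
    private final Set<String> active = new LinkedHashSet<>();
    public void deploy(String deploymentId) { active.add(deploymentId); }
    public void undeploy(String deploymentId) { active.remove(deploymentId); }
    public boolean isDeployed(String deploymentId) { return active.contains(deploymentId); }
}
```

In the real services the deployment id follows Maven coordinates of the kjar, and deploying a unit also wires up its RuntimeManager with the chosen strategy.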

Using jbpm-kie-services is not mandatory for using jBPM6 in a CDI environment, but it does have some advantages:
- maintaining multiple RuntimeManagers within a single execution environment
- deploying and undeploying units (kjars) independently, without a server/application restart
- selecting different strategies for different units

See jbpm-sample-cdi-services project for details.

While these are all appealing add-ons, they are not always needed, especially if the application requires only a single RuntimeManager instance. If that's the case, we can let the CDI container create the RuntimeManager instance for us. This is the second approach to CDI integration, where only a single RuntimeManager instance is active and it is managed completely by the CDI container. The application needs to provide the environment that the RuntimeManager will be based on.

See jbpm-sample-cdi project for details.

API approach

Last but not least is the regular API-based approach to jBPM6 and the RuntimeManager. Here the RuntimeManager is built by the application itself, which in fact exposes all configuration options. Moreover, this is the simplest way to extend the out-of-the-box functionality of the RuntimeManager.

See jbpm-sample project for details.

This article is just an introduction to the ways jBPM6 and the RuntimeManager can be used. More detailed articles will follow to provide in-depth information on each option given here (CDI with services, pure CDI, API).

If you have any aspects that you would like to see covered in the next articles (regarding the runtime manager and CDI), just drop a comment here and I'll do my best to include them.

wtorek, 20 sierpnia 2013

Make your work asynchronous

Asynchronous execution as part of a business process is a common requirement. jBPM has long supported it via custom implementations of WorkItemHandler. In general it was as simple as providing an async handler (is it really as simple as it sounds?) that delegates the actual work to some worker, e.g. a separate thread that proceeds with the execution.

Before we dig into the details of jBPM v6 support for asynchronous execution, let's look at the common requirements for such execution:

  • first and foremost, it allows asynchronous execution of a given piece of business logic
  • it allows retries in case resources are temporarily unavailable, e.g. during external system interaction
  • it allows errors to be handled once all retries have been attempted
  • it provides a cancellation option
  • it provides a history log of executions
When confronting these requirements with the "simple async handler" we quickly notice that all of them would need to be implemented all over again by every system. That is not so appealing, is it?
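To see why the naive approach falls short, here is a minimal "simple async handler" of the kind described above. The types are stand-ins invented for this sketch, not the real KIE WorkItemHandler interface: it runs the work on a separate thread and completes the work item, but retries, error handling, cancellation and a history log would all still have to be built by hand.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Stand-ins for the work item contract; the real interfaces live in the KIE API.
interface WorkItemManager { void completeWorkItem(long id); }
class WorkItem {
    final long id;
    final Runnable work; // the business logic carried by this work item (illustrative)
    WorkItem(long id, Runnable work) { this.id = id; this.work = work; }
}

// A naive async handler: delegates the work to a thread pool and completes
// the work item when done. No retries, no error handling, no cancellation,
// no history log - each of those would have to be implemented separately.
class SimpleAsyncHandler {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    void executeWorkItem(WorkItem item, WorkItemManager manager) {
        pool.submit(() -> {
            item.work.run();                    // if this throws, the failure is lost
            manager.completeWorkItem(item.id);  // no record that it ever ran
        });
    }

    void shutdown() { pool.shutdown(); }
}
```

Every gap called out in the comments corresponds to one of the requirements listed above, which is exactly what the executor introduced next takes care of.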

jBPM executor to the rescue 

Since version 6, jBPM includes a new component called the jbpm executor, which provides quite advanced features for asynchronous execution. It delivers a generic environment for background execution of commands. Commands are nothing more than business logic encapsulated behind a simple interface. A command carries no process-runtime-related information, which means there is no need to complete work items or anything of that sort; it purely focuses on the business logic to be executed. It receives data via a CommandContext and returns the results of the execution via ExecutionResults. The most important rule for both input and output data is that it must be serializable.
The executor covers all the requirements listed above and provides a user interface as part of the jbpm console and KIE workbench (kie-wb) applications.
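The command contract described above can be sketched roughly like this. The CommandContext and ExecutionResults classes below are stand-ins mirroring the shape of the executor's data carriers (the real interfaces ship with the jbpm-executor module), so the sketch is self-contained:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Stand-ins mirroring the executor's data carriers: both hold key/value
// pairs and are serializable, per the "must be serializable" rule above.
class CommandContext implements Serializable {
    private final Map<String, Object> data = new HashMap<>();
    public Object getData(String name) { return data.get(name); }
    public void setData(String name, Object value) { data.put(name, value); }
}
class ExecutionResults implements Serializable {
    private final Map<String, Object> data = new HashMap<>();
    public Object getData(String name) { return data.get(name); }
    public void setData(String name, Object value) { data.put(name, value); }
}

// The command contract: pure business logic, no process-runtime information.
interface Command {
    ExecutionResults execute(CommandContext ctx) throws Exception;
}

// Example command (hypothetical): nothing here knows about work items.
class GreetingCommand implements Command {
    public ExecutionResults execute(CommandContext ctx) {
        ExecutionResults results = new ExecutionResults();
        results.setData("greeting", "Hello " + ctx.getData("name"));
        return results;
    }
}
```

Because the command sees only serializable data in and out, the executor is free to persist the job, retry it, or run it on another thread without any process-engine coupling.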

Jobs panel in the kie-wb application

The screenshot above illustrates the history view of the executor's job queue. As can be seen, several options are available:
  • view the details of a job
  • cancel a given job
  • create a new job
With that, quite a few things can already be achieved. But what about executing logic as part of a process instance, via a work item handler?

Async work item handler

jBPM (again since version 6) provides an out-of-the-box async work item handler that is backed by the jbpm executor. So by default, all the features the executor delivers are available for background execution within a process instance. AsyncWorkItemHandler can be configured in two ways:
  1. as a generic handler that expects the command name as part of the work item parameters
  2. as a specific handler for a given type of work item - for example, a web service
Option 1 is configured by default for the jbpm console and kie-wb web applications and is registered under the name async in every ksession bootstrapped within those applications. So whenever there is a need to execute some logic asynchronously, the following needs to be done at modeling time (using the jbpm web designer):
  • specify async as the TaskName property
  • create a data input called CommandClass
  • assign the fully qualified class name of the command to the CommandClass data input
Then complete the process modeling in the regular way. Note that all data inputs will be transferred to the executor, so they must be serializable.
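A rough illustration of what the generic handler does with the CommandClass input is shown below. This is stand-in code written for this post, not the actual AsyncWorkItemHandler source: it reads the fully qualified class name from the work item parameters, instantiates the command reflectively, and packages it as a job that the executor would then run in the background.

```java
import java.util.Map;
import java.util.concurrent.Callable;

// Stand-in for the command contract used by the executor (sketch only).
interface Command { Object execute(Map<String, Object> data) throws Exception; }

// Example command (hypothetical) that could be referenced by its class name.
class EchoCommand implements Command {
    public Object execute(Map<String, Object> data) {
        return "echo:" + data.get("input");
    }
}

// Sketch of a generic async handler resolving the command to run from the
// work item parameters, as in the modeling steps above.
class GenericAsyncSketch {
    Callable<Object> buildJob(Map<String, Object> workItemParameters) throws Exception {
        // The CommandClass data input carries the fully qualified class name.
        String commandClass = (String) workItemParameters.get("CommandClass");
        Command command = (Command) Class.forName(commandClass)
                                         .getDeclaredConstructor()
                                         .newInstance();
        // All remaining data inputs travel with the job - hence the
        // serializable rule stated above.
        return () -> command.execute(workItemParameters);
    }
}
```

In the real handler the resulting job is handed to the executor service rather than called directly, which is what makes the execution asynchronous and retryable.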
Assignments for an async node (web service execution)

The second option allows different AsyncWorkItemHandler instances to be registered for different work items. Since such a handler is registered for a dedicated work item, most likely the command will be dedicated to that work item as well; if so, the CommandClass can be specified at registration time instead of being set as a work item parameter. To register such handlers for jbpm console or kie-wb, an additional class is required to declare what should be registered: a CDI bean implementing the WorkItemHandlerProducer interface needs to be provided and placed on the application classpath so the CDI container can find it. Then, at modeling time, the TaskName property needs to be aligned with the names used at registration time.

Ready to give it a try?

To see this working, it's enough to try the latest kie-wb or jbpm console build (either master or CR2). As soon as the application is deployed, go to the Authoring perspective and you'll find an async-examples project in the jbpm-playground repository. It comes with three samples that illustrate asynchronous execution from within a process instance:
  • async executor
  • async data executor
  • check weather
Async executor is the simplest process, allowing commands to be executed asynchronously. When starting a process instance it will ask for the fully qualified class name of the command; for demo purposes use org.jbpm.executor.commands.PrintOutCommand, which is similar to the SystemOutWorkItemHandler in that it simply prints the content of the CommandContext to the logs. You can leave it empty or provide an invalid command class name to see the error-handling mechanism (using a boundary error event).

Async data executor is pretty much the same as Async executor, but it operates on custom data (included in the project - User and UserCommand). On the start process form, use org.jbpm.examples.cmd.UserCommand to invoke the custom command included in the project.

Check weather is an asynchronous execution of a web service call. It checks the weather for any U.S. zip code and provides the results as a human task. So on the start form, specify who should receive the user task with the results and the zip code of the city you would like a weather forecast for.


Start Check weather process with async web service execution


And that's it, asynchronous execution is now available out of the box in jBPM v6. 

Have fun and as usual keep the comments coming so we can add more useful features!

wtorek, 11 czerwca 2013

Clustering in jBPM v6


Clustering in jBPM v5 was not an easy task; there were several known issues that had to be resolved on the client side (i.e. in the project implementing a solution with jBPM), to name a few:

  • session management - when to load/dispose a knowledge session
  • timer management - the knowledge session had to be kept active for timers to fire
This is no longer the case in version 6, where several improvements have made their way into the code base; for example, a new module responsible for complete session management was introduced: the jbpm runtime manager. More on the runtime manager in the next post; this one focuses on what a clustered solution might look like. First of all, let's start with the important pieces a jbpm environment consists of:


  1. asset repository - a VFS-based repository backed by Git; this is where all assets are stored during the authoring phase
  2. jbpm server - JBoss AS7 with a deployed jbpm console (a BPM-focused web application) or kie-wb (a fully featured web application that combines the BPM and BRM worlds)
  3. database - the backend where all state data is kept (process instances, ksessions, history log, etc.)

Repository clustering

The asset repository is a Git-backed virtual file system (VFS) that keeps all assets (process definitions, rules, data models, forms, etc.) in a reliable and efficient way. Anyone who has worked with Git understands perfectly how good it is for source management, and what else are assets if not source code?
Since it is a file system, it resides on the same machine as the server that uses it, so it must be kept in sync across all servers of the cluster. For that, jbpm makes use of two well-known open source projects: Apache Zookeeper and Apache Helix.

Zookeeper is responsible for gluing all the parts together, while Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, resources).

These two components are utilized by the runtime environment that jbpm v6 is based on:
  • kie-commons - provides the VFS implementation and clustering
  • UberFire framework - provides the backbone of the web applications

So let's take a look at what we need to do to set up a cluster for our VFS:

Get the software

  • download Apache Zookeeper (note that 3.3.4 and 3.3.5 are currently the only tested versions, so make sure you get the correct one)
  • download Apache Helix (the tested version was 0.6.1)

Install and configure

  • unzip Apache Zookeeper into the desired location (from now on referred to as zookeeper_home)
  • go to zookeeper_home/conf and make a copy of zoo_sample.cfg as zoo.cfg
  • edit zoo.cfg and adjust the settings if needed; these two are important in most cases:
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

  • unzip Apache Helix into the desired location (from now on referred to as helix_home)

Setup cluster

Now that we have all the software available locally, the next step is to configure the cluster itself. We start by starting the Zookeeper server, which will be the master of the cluster configuration:
  • go to zookeeper_home/bin
  • execute the following command to start the Zookeeper server:
sudo ./zkServer.sh start
  • the Zookeeper server should now be started; if it fails to start, make sure that the data directory defined in the zoo.cfg file exists and is accessible
  • all Zookeeper activity can be viewed in zookeeper_home/bin/zookeeper.out
Next, configure the cluster itself; Apache Helix provides utility scripts for this, found in helix_home/bin.

  • go to helix_home/bin
  • create cluster
./helix-admin.sh --zkSvr localhost:2181 --addCluster jbpm-cluster
  • add nodes to the cluster 
node 1
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeOne:12345
node 2
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeTwo:12346
Add as many nodes as you will have jBPM server cluster members (in most cases, the number of application servers in the cluster).
NOTE: nodeOne:12345 is the unique identifier of the node and will be referenced later when configuring the application servers. Although it looks like a host and port number, it is used purely to uniquely identify a logical node.
  • add resources to the cluster
./helix-admin.sh --zkSvr localhost:2181 
           --addResource jbpm-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
  • rebalance cluster to initialize it
./helix-admin.sh --zkSvr localhost:2181 --rebalance jbpm-cluster vfs-repo 2

  • start the Helix controller to manage the cluster
./run-helix-controller.sh --zkSvr localhost:2181 
                        --cluster jbpm-cluster 2>&1 > /tmp/controller.log &
The values given above are just examples and can be changed according to your needs:
cluster name: jbpm-cluster
node name: nodeOne:12345, nodeTwo:12346
resource name: vfs-repo
The zkSvr value must match the Zookeeper server being used.

Prepare data base 


Before we start with the application server configuration, the database needs to be prepared; for this example we use PostgreSQL. By default, the jBPM server will create all required tables itself, so there is not much work here, but a few simple tasks must be done before starting the server configuration.

Create data base user and data base

First of all, PostgreSQL needs to be installed. Next, a user that will own the jbpm schema needs to be created on the database; in this example we use:
user name: jbpm
password: jbpm

Once the user is ready, the database can be created; again, jbpm is chosen as the database name for this example.

NOTE: this information (username, password, database name) will be used later in the application server configuration.

Create Quartz tables

Lastly, the Quartz-related tables must be created. The best way to do so is to use the database scripts provided with the Quartz distribution (jbpm uses Quartz 1.8.5). The DB scripts are usually located under QUARTZ_HOME/docs/dbTables.

Create quartz definition file 

The Quartz configuration used by the jbpm server needs to accommodate the needs of the environment. As this guide shows only the basic setup, it obviously will not cover all needs, but it allows for further improvements.

Here is a sample configuration used in this setup:
#============================================================================
# Configure Main Scheduler Properties  
#============================================================================

org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

#============================================================================
# Configure ThreadPool  
#============================================================================

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5

#============================================================================
# Configure JobStore  
#============================================================================

org.quartz.jobStore.misfireThreshold = 60000

org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=managedDS
org.quartz.jobStore.nonManagedTXDataSource=notManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 20000

#============================================================================
# Configure Datasources  
#============================================================================
org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psjbpmDS
org.quartz.dataSource.notManagedDS.jndiURL=jboss/datasources/quartzNotManagedDS


Configure JBoss AS 7 domain


1. Create a JDBC driver module - for this example PostgreSQL
a) go to the JBOSS_HOME/modules directory (on EAP, JBOSS_HOME/modules/system/layers/base)
b) create the module folder org/postgresql/main
c) copy the PostgreSQL driver jar into the module folder (org/postgresql/main) under the name postgresql-jdbc.jar
d) create a module.xml file inside the module folder (org/postgresql/main) with the following content:
<module xmlns="urn:jboss:module:1.0" name="org.postgresql">
    <resources>
        <resource-root path="postgresql-jdbc.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

2. Configure data sources for the jbpm server
a) go to JBOSS_HOME/domain/configuration
b) edit the domain.xml file
For simplicity's sake we use the default domain configuration, which uses the "full" profile and defines two server nodes as part of main-server-group.
c) locate the "full" profile inside the domain.xml file and add the new data sources
The main data source used by jbpm:
<datasource jndi-name="java:jboss/datasources/psjbpmDS"
            pool-name="postgresDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>jbpm</user-name>
        <password>jbpm</password>
    </security>
</datasource>
The additional data source for Quartz (non-managed pool):
<datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
            pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>jbpm</user-name>
        <password>jbpm</password>
    </security>
</datasource>
And define the driver used by the data sources:
<driver name="postgres" module="org.postgresql">
    <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
</driver>
        
3. Configure the security domain
a) go to JBOSS_HOME/domain/configuration
b) edit the domain.xml file
For simplicity's sake we use the default domain configuration, which uses the "full" profile and defines two server nodes as part of main-server-group.
c) locate the "full" profile inside the domain.xml file and add a new security domain for jbpm-console (or kie-wb) - this is just a copy of the "other" security domain defined there by default:
    
<security-domain name="jbpm-console-ng" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
    </authentication>
</security-domain>
        
For the kie-wb application, simply replace jbpm-console-ng with kie-ide as the name of the security domain.
        
4. Configure server nodes

a) go to JBOSS_HOME/domain/configuration
b) edit the host.xml file
c) locate the servers that belong to "main-server-group" in the host.xml file and add the following system properties:
    


  • org.uberfire.nio.git.dir = /home/jbpm/node[N]/repo - location where the VFS asset repository will be stored for node[N]
  • org.quartz.properties = /jbpm/quartz-definition.properties - absolute file path to the Quartz definition properties
  • jboss.node.name = nodeOne - unique node name within the cluster (nodeOne, nodeTwo, etc.)
  • org.uberfire.cluster.id = jbpm-cluster - name of the Helix cluster
  • org.uberfire.cluster.zk = localhost:2181 - location of the Zookeeper server
  • org.uberfire.cluster.local.id = nodeOne_12345 - unique id of the Helix cluster node; note that ':' is replaced with '_'
  • org.uberfire.cluster.vfs.lock = vfs-repo - name of the resource defined on the Helix cluster
  • org.uberfire.nio.git.daemon.port = 9418 - port used by the Git repo to accept client connections; must be unique for each cluster member
  • org.uberfire.nio.git.ssh.port = 8001 - port used by the Git repo to accept client connections over ssh; must be unique for each cluster member
  • org.uberfire.nio.git.daemon.host = localhost - host used by the Git repo to accept client connections; if cluster members run on different machines this must be set to the actual host name instead of localhost, otherwise synchronization won't work
  • org.uberfire.nio.git.ssh.host = localhost - host used by the Git repo to accept client connections over ssh; if cluster members run on different machines this must be set to the actual host name instead of localhost, otherwise synchronization won't work
  • org.uberfire.metadata.index.dir = /home/jbpm/node[N]/index - location where the search index will be created (maintained by Apache Lucene)
  • org.uberfire.cluster.autostart = false - delays VFS clustering until the application is fully initialized, to avoid conflicts when all cluster members create local clones
    
examples for the two nodes:
    
  •     nodeOne
<system-properties>
    <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
    <property name="org.quartz.properties" value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="jbpm-cluster" boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
    <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
    <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
    <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>
</system-properties>
    
  •     nodeTwo
<system-properties>
    <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
    <property name="org.quartz.properties" value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="jbpm-cluster" boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
    <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
    <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
    <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>
</system-properties>

NOTE: since this example runs on a single node, the host properties for the ssh and git daemons are omitted.

Since repository synchronization is done between Git servers, make sure that the Git daemons are active (and properly configured with host name and port) on every cluster member.
5. Create user(s) and assign them to the proper roles on the application server

Add application users
In the previous step a security domain was created so that jbpm console (or kie-wb) users can be authenticated when logging on. Now it's time to add some users so you can log on to the application once it's deployed. To do so:
 a) go to JBOSS_HOME/bin
 b) execute ./add-user.sh script and follow the instructions on the screen
  - use the Application realm, not Management
  - when asked for roles, make sure you assign at least:
    for jbpm-console: jbpm-console-user
    for kie-wb: kie-user
 
Add as many users as you need; the same goes for roles. Those listed above are required for authorization to use the web application.

Add management (of application server) user
To be able to manage the application server as a domain, we need to add an administrator user. It's similar to adding application users, but the realm needs to be Management:
 a) go to JBOSS_HOME/bin
 b) execute ./add-user.sh script and follow the instructions on the screen
  - use the Management realm, not Application

The application server should now be ready to use, so let's start the domain:

JBOSS_HOME/bin/domain.sh

After a few seconds (the servers are still empty) you should be able to access both server nodes at the following locations:
administration console: http://localhost:9990/console

The port offset is configurable in host.xml for a given server.


Deploy application - jBPM console (or kie-wb)

Now it's time to prepare and deploy the application, either jbpm-console or kie-wb. Since by default both applications come with predefined persistence that uses the ExampleDS from AS7 and an H2 database, this configuration needs to be altered to use the PostgreSQL database instead.

Required changes in persistence.xml

  • change the jta-data-source name to match the one defined on the application server:
             java:jboss/datasources/psjbpmDS
  • change the Hibernate dialect to PostgreSQL:
             org.hibernate.dialect.PostgreSQLDialect
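The resulting fragment of persistence.xml might look like the following (only the two changed elements are shown; the rest of the persistence-unit content is omitted here):

```xml
<!-- point the persistence unit at the PostgreSQL datasource defined on the server -->
<jta-data-source>java:jboss/datasources/psjbpmDS</jta-data-source>

<properties>
    <!-- switch the dialect from H2 to PostgreSQL -->
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
</properties>
```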

Application build from source

If the application is built from source, you need to edit the persistence.xml file located under:
jbpm-console-ng/jbpm-console-ng-distribution-wars/src/main/jbossas7/WEB-INF/classes/META-INF/
then rebuild the jbpm-distribution-wars module to prepare the deployable package, which is named:
jbpm-console-ng-jboss-as7.0.war

Deployable package downloaded

If you have downloaded the deployable package (which is already a war file), you need to extract it and change the persistence.xml located under:
WEB-INF/classes/META-INF
Once the file is edited and contains the correct values to work with the PostgreSQL database, the application needs to be repackaged.
NOTE: before repackaging, make sure that the previous war is not in the same directory, otherwise it will be packaged into the new war too.

jar -cfm jbpm-console-ng.war META-INF/MANIFEST.MF *

IMPORTANT: make sure you include the same manifest file that was in the original war file, as it contains valuable entries.


To deploy the application, log on as the management user to the administration console of the domain and add the new deployment using the Runtime view of the console. Once the deployment is added to the domain, assign it to the right server group - in this example we used main-server-group. By default this enables the deployment on all servers within that group - meaning it is deployed on each of those servers. This will take a while, and after successful deployment you should be able to access jbpm-console (or kie-wb) at the following locations:


The context root (jbpm-console-ng) depends on the name of the war file that was deployed, so if the filename is jbpm-console-ng-jboss7.war then the context root will be jbpm-console-ng-jboss7. The same rule applies to the kie-wb deployment.

And that's it - you should have a fully operational jBPM cluster environment!

Obviously, in normal scenarios you would want to hide the complexity of the different application URLs from end users (for example by putting a load balancer in front of them), but I explicitly left that out of this example to show the proper behavior of independent cluster nodes.

Next post will go into details on how different components play smoothly in cluster, to name few:
  • failover - in case a cluster node goes down
  • timer management - how timers fire in a cluster environment
  • session management - automatic reactivation of sessions on demand
  • etc.
As we are still in development mode, please share your thoughts on what you would like to see in cluster support for jBPM - your input is most appreciated!

Update:
There was a change in the naming of system properties since this article was written, so if you already configured them, for 6.0.0.Final you will need to adjust the names of the following system properties:

  • org.kie.nio.git.dir -> org.uberfire.nio.git.dir
  • org.kie.nio.git.daemon.port -> org.uberfire.nio.git.daemon.port
  • org.kie.kieora.index.dir -> org.uberfire.metadata.index.dir
  • org.uberfire.cluster.autostart - new parameter
The list above already contains the proper values for 6.0.0.Final.
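For example, the 6.0.0.Final property names could be passed to the server via JAVA_OPTS like this (the directory paths and port are illustrative, and the autostart value depends on your cluster setup):

```shell
# 6.0.0.Final system property names; paths and port are example values
JAVA_OPTS="$JAVA_OPTS -Dorg.uberfire.nio.git.dir=/data/jbpm/niogit"
JAVA_OPTS="$JAVA_OPTS -Dorg.uberfire.nio.git.daemon.port=9418"
JAVA_OPTS="$JAVA_OPTS -Dorg.uberfire.metadata.index.dir=/data/jbpm/index"
JAVA_OPTS="$JAVA_OPTS -Dorg.uberfire.cluster.autostart=false"
export JAVA_OPTS
```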

piątek, 4 stycznia 2013

jBPM web designer runs on VFS

As part of the efforts for jBPM and Drools version 6.0, the web designer is going through quite a few enhancements too. One of the major features is a flexible mechanism to persist modeled processes (and other assets that relate to them, such as forms, process images, etc.), even without being embedded in Drools Guvnor.
So let's start with the main part here - what does a flexible mechanism to persist assets mean? To answer this, let's look at what is currently (jBPM 5.x) available:

  • designer by default runs in embedded mode inside Drools Guvnor
  • designer stores all assets inside Drools Guvnor JCR repository
  • designer can run in standalone mode, but only as a modeling tool without the capability to store assets
So as listed above, there is only one option to persist assets - inside Drools Guvnor. In most cases this is good enough or even desired, but there are quite a few situations where modeling capabilities need to be delivered with a custom application, and including the complete Drools Guvnor would be too much.
That leads us to the flexible mechanism that was implemented - the designer was equipped with a Repository interface that is considered the entry point to interact with the underlying storage. By default the designer comes with a Virtual File System based repository that provides:
  • a default implementation that supports
    • a simple (local) file system repository
    • a git based repository
  • pluggable VFS provider implementations
  • a standards-based foundation - Java NIO2
Extensions to what is delivered out of the box can be done in one of the following ways:
  1. if a VFS based repository is not what the user needs, an alternative implementation of the Repository interface can be provided, e.g. a database backed one
  2. if VFS is what the user is looking for, but neither local file system nor git is the right implementation, additional providers can be developed
Let's look a little deeper into what these new features are and how users will benefit from them.

1. It's based on Java 7 NIO2


The VFS support is based on Java SE 7 NIO2, but it does not require Java 7 to run, as it comes with a backport implementation of the selected parts of NIO2 that are required.

2. Different providers for Virtual File System


The simplest option is to use the designer with local file system storage, which simply utilizes the file system on which the designer is running. While it will most likely provide the best performance, it leaves the user with rather limited options when it comes to clustering, distributed environments or backups.

The next option, which I would personally recommend, is to use git as the underlying storage. People who work with git in their software development projects will most likely notice quite a few advantages, as in the end process definitions are much like source code that can be versioned, developed in parallel (branching) and included in some sort of release cycle.
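The advantages above can be sketched with a plain local git repository - the repository and file names below are arbitrary examples, not anything the designer creates for you, but they show why git fits process assets so well:

```shell
# Version a process definition like source code
git init -q process-repo
cd process-repo
printf '<definitions/>\n' > hiring.bpmn2
git add hiring.bpmn2
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial process"

# Develop the process in parallel on a branch
git checkout -qb feature/review-task
printf '<definitions><userTask/></definitions>\n' > hiring.bpmn2
git -c user.name=demo -c user.email=demo@example.com commit -qam "add review task"

# Full history of the process is available at any time
git log --oneline
```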

3. Save process directly in designer editor



The designer now allows users to save a process directly from the editor, which will store the SVG process content as well - with just one click!




4. Repository menu


Together with the repository, the designer provides a simple UI menu to navigate through the repository and perform basic operations such as:

  • open processes in designer editor
  • create assets and directories
  • copy/move assets
  • delete assets and directories
  • preview files

This menu is intended to be a basic file system browser, utilized mostly in standalone mode; when integrated with jBPM console-ng and guvnor-ng (UberFire), more advanced options will be delivered in this area.

5. Simpler integration with jBPM console and Drools Guvnor


Both jBPM console and Drools Guvnor are being "refreshed" for version 6.0, and thus the integration between these components and the designer will be simplified, as they will all be unified at the repository level - meaning a single repository can be shared across all three components.



That will be all for a brief introduction, but certainly not all on this topic. Expect more to come as soon as the preview is released: how to configure different repositories, and more updates on the git based repository and how to make the best of it.

Your comments are more than welcome, as they can help make the designer the best modeling tool out there :)

Known limitation

Currently the git based repository does not support moving assets and directories as an atomic operation, which means the preferred approach is to copy first and then delete.