Friday, April 15, 2016

KIE Server clustering and scalability

This article is another in the KIE Server series, and this time we'll focus on clustering and scalability.

KIE Server is by nature a lightweight and easily scalable component. Compared to the execution environment included in the KIE workbench, it can be summarized as follows:

  • allows partitioning based on deployed containers (kjars)
    • in the workbench, all containers are deployed to the same runtime
  • allows scaling individual instances independently of each other
    • in the workbench, scaling means scaling all kjars at once
  • can easily be distributed across the network and managed by a controller (the workbench by default)
    • the workbench is both management and execution, which makes it a single point of failure
  • clustering of KIE Server does not require any additional components in the infrastructure
    • the workbench requires Zookeeper and Helix for a clustered Git repository
So what does it mean to scale KIE Server?
First of all, it allows administrators to partition knowledge between different KIE Server instances. HR department processes and rules can run on one set of KIE Server instances, while the Finance department has its own set. Each department's administrator can then scale based on need without affecting the others. That gives us a unique opportunity to really focus on the components that require additional processing power and simply add more instances - either on the same server or distributed across your network.

Let's look at the most common runtime architecture for a scalable KIE Server environment:


As described above, the basic runtime architecture consists of multiple independent sets of KIE Servers, where the number of actual server instances can vary. In the diagram above, all of them have three instances, but in reality they can have as many (or as few) as needed.

The controller, in turn, has three server templates - HR, Finance and IT. Each server template is defined with an identifier that KIE Server instances reference via the system property org.kie.server.id.

In the above screenshot, the server templates are defined in the controller (workbench), which becomes the single point of configuration and management of our KIE Servers. Administrators can add or remove, start or stop containers, and the controller is responsible for notifying all KIE Server instances (that belong to a given server template) of the performed operations. Moreover, when new KIE Server instances are added to the set, they directly receive all containers that should be started, thereby increasing processing power.

As mentioned, this is the basic setup, meaning the server instances are used by calling each of them directly. This is a bit troublesome, as users/callers will have to deal with instances that are down, etc. To solve this, we can put a load balancer in front of the KIE Servers and let the load balancer do the heavy lifting for us. Users then call a single URL, which is configured to work with all instances in the back end. One choice of load balancer is Apache HTTP Server with the ModCluster plugin, an efficient and highly configurable load balancer.
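Once a load balancer fronts the KIE Server instances, a quick smoke test is to hit the standard containers endpoint of the KIE Server REST API through the balanced URL. The host name below is hypothetical, and the credentials are the defaults mentioned later in this article; the request will be routed to whichever backend instance the load balancer picks.

```shell
# List containers via the load-balanced endpoint (host name is an example).
# Any instance of the server template should return the same container list.
curl -u kieserver:kieserver1! \
  -H "Accept: application/json" \
  http://loadbalancer.example.com/kie-server/services/rest/server/containers
```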


In version 7, the KIE Server client will come with a pluggable load balancer implementation, so when using the KIE Server client, users can simply skip the additional load balancer as an infrastructure component. Though it will provide load balancing and failure discovery support, it is a client-side load balancer that has no knowledge of the underlying backend servers, and thus won't be as efficient as mod_cluster can be.

So this covers scalability of KIE Server instances: they can easily be multiplied to provide more execution power, and at the same time be distributed both on the network level and on the knowledge (containers) level. But looking at the diagram, a single point of failure remains: the controller. Remember that in managed mode (where KIE Server instances depend on the controller), they are limited when the controller is down. Let's recap how KIE Server interacts with the controller:

  • when KIE Server starts, it attempts to connect to any of the defined controllers (if any)
  • it will connect to only one, as soon as a connection is successful
  • the controller then provides the list of containers to deploy, along with configuration
  • based on this information, KIE Server deploys the containers and starts serving requests
But what happens when none of the controllers can be reached when KIE Server starts? KIE Server will be pretty much useless, as it does not know which containers it should deploy. It will keep checking (at predefined intervals) whether a controller is available. Until a controller becomes available, KIE Server has no containers deployed and thus won't process any requests - the most likely response you'll get from KIE Server when trying to use it is "no container found".

Note: this affects only KIE Servers that start after the controller went down; those that are already running are not affected at all.

To solve this problem, the workbench (and with it the controller) should be scaled. Here the default configuration of a KIE workbench cluster applies, meaning Apache Zookeeper and Apache Helix become part of the infrastructure.


In this diagram, we scale the workbench by using Apache Zookeeper and Helix to cluster the Git repository. This gives us replication between the server instances (that run the workbench) and thereby provides several synchronized controller endpoints, ensuring KIE Server instances can reach a controller and collect the configuration and containers to be deployed.

As with the KIE Servers, the controller can either be reached directly via the independent endpoints or again be fronted with a load balancer. KIE Server accepts a list of controllers, so a load balancer is not strictly required, though it is recommended, as the workbench is also (or even primarily) used by end users, who would benefit from a load-balanced environment as well.

That concludes the description of clustering and scalability of KIE Server. To get the most out of it, let's now take a quick look at what's important to know when configuring such a setup.

Configuration

Workbench
We start with the configuration of the workbench - the controller. The most important part on the controller side is authentication, so that connecting KIE Server instances are authorized. By default, KIE Server will, upon start, send a request with Basic authentication using the following credentials:
  • username: kieserver
  • password: kieserver1!
So, to allow KIE Server to connect, make sure such a user exists in the application realm of your application server.

NOTE: the username and password can be changed on the KIE Server side by setting the following system properties:
  • org.kie.server.controller.user
  • org.kie.server.controller.pwd
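On Wildfly/EAP, creating that user in the application realm is a one-liner with the add-user script; a sketch follows. The role assigned here (kie-server) is what the default setup expects, but check the documentation for your version; the override properties are passed on the KIE Server side at startup.

```shell
# On the workbench host: create the user KIE Server connects as
# (application realm; kie-server role assumed per the default setup).
$WILDFLY_HOME/bin/add-user.sh -a -u kieserver -p 'kieserver1!' -ro kie-server

# On the KIE Server side: optionally override the credentials it uses
# to reach the controller.
./standalone.sh \
  -Dorg.kie.server.controller.user=controllerUser \
  -Dorg.kie.server.controller.pwd=controllerPwd1!
```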

This is the only thing needed on application server that hosts KIE workbench.

KIE Server
On the KIE Server side, there are several properties that must be set on each KIE Server instance. Some of these properties must be the same for all instances representing the same server template defined in the controller.
  • org.kie.server.id - identifier of the KIE Server that corresponds to the server template id. This must be exactly the same for all KIE Server instances that represent a given server template.
  • org.kie.server.controller - comma-separated list of absolute URLs to the controller(s). This must be the same for all KIE Server instances that represent a given server template.
  • org.kie.server.location - absolute URL where this KIE Server instance can be reached. This must be unique for each KIE Server instance, as it is used by the controller to notify the instance of requested changes (e.g. start/stop container).
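For a Wildfly standalone instance belonging to, say, the HR server template, the three properties might be passed like this. Host names are hypothetical, and the controller context path depends on how the workbench WAR is deployed in your environment:

```shell
# Example startup for one KIE Server instance of the HR server template.
# org.kie.server.id and org.kie.server.controller are shared by all HR
# instances; org.kie.server.location is unique per instance.
./standalone.sh \
  -Dorg.kie.server.id=HR \
  -Dorg.kie.server.controller=http://controller1.example.com:8080/business-central/rest/controller,http://controller2.example.com:8080/business-central/rest/controller \
  -Dorg.kie.server.location=http://hr-node1.example.com:8080/kie-server/services/rest/server
```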
Similar to how the workbench authenticates requests, KIE Server does the same. So, to allow the controller to connect to a KIE Server instance (at the URL given as org.kie.server.location), the application realm of the server where the KIE Server instances are running must have a user configured. By default, the workbench (controller) will use the following credentials:
  • username: kieserver
  • password: kieserver1!
So it must exist in the application realm. In addition, it must be a member of the kie-server role so that KIE Server will authorize it against its REST API.

NOTE: the username and password can be changed on the KIE workbench side by setting the following system properties:
  • org.kie.server.user
  • org.kie.server.pwd
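Mirroring the workbench side, a sketch of the KIE Server host setup on Wildfly/EAP follows; the kie-server role is the one the article names as required, and the override properties are set on the workbench side at startup:

```shell
# On each KIE Server host: create the user the controller connects as,
# with the kie-server role required by KIE Server's REST API.
$WILDFLY_HOME/bin/add-user.sh -a -u kieserver -p 'kieserver1!' -ro kie-server

# On the workbench side: optionally override the credentials the
# controller uses when calling KIE Server instances.
./standalone.sh \
  -Dorg.kie.server.user=serverUser \
  -Dorg.kie.server.pwd=serverPwd1!
```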
There are other system properties that can be set (and most likely will be needed, depending on the desired KIE Server configuration). For those, look at the documentation.

This configuration applies however you run KIE Server - standalone Wildfly, Wildfly domain mode, Tomcat, WAS or WebLogic. It does not really matter; as long as you set this set of properties, you'll be ready to go with clustered and scalable KIE Server instances tailored to your domain.

That would be all for today, as usual comments are more than welcome.