Configuring application servers is not as trivial as it may seem. Some configuration commands and parameters do not behave the way users expect. To make matters worse, configuring a cluster involves not only tuning parameters, but also dealing with server availability, SSH configuration, running the application server on every node, and so on. Below we'll explain how to easily create a GlassFish cluster with Docker and Jelastic using a solution called CloudScript.
For this example guide, we’ve chosen the Oracle GlassFish application server, as it offers the reference implementation of Java EE 7 and provides a centralized way to operate clusters, applications, and configurations without having to manage every cluster node individually. According to the official GlassFish documentation, the administration node has the following architecture:
In the picture above, the Domain Administration Server (DAS) is the administration node of the cluster. It communicates with cluster nodes either via DCOM (for Windows-based cluster nodes) or via SSH (for Linux-, Solaris-, and macOS-based cluster nodes). There is also a third option, named CONFIG, intended for managing each node locally. To centralize administration, the GlassFish Docker image uses SSH for communication between the DAS and the other GlassFish worker nodes. Now, let's describe how the Docker images were prepared.
Docker Images
For this demo, we'll use two Docker images:
- one GlassFish Docker image ready to form centralized clusters, hosted in this repository;
- one HAProxy Docker image, provided by Jelastic, to work as a load balancer.
The same GlassFish Docker image can produce containers playing either the DAS or the cluster node role. This is made possible by several customizations of the Docker image, which is based on Bruno Borges' GlassFish 4.1.1 image. We customized it to move from Oracle Linux to Debian, added an OpenSSH installation step, and defined several configurations in the image provisioning and startup process.
The containers need to communicate with each other over SSH, so installing an SSH server is mandatory (we’ve used OpenSSH in this case). Additionally, the PubkeyAuthentication, StrictModes, AuthorizedKeysFile, PermitRootLogin, and IdentityFile entries in /etc/ssh/sshd_config had to be set properly. Moreover, SSH host keys are scanned with ssh-keyscan during the startup process, so that SSH does not get stuck waiting for host key fingerprint acceptance. Once SSH was properly configured, we were able to automate the cluster configuration.
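To illustrate, the SSH-related provisioning and startup steps boil down to something like the sketch below. This is not the actual script from the repository: the sed edits and the DAS_HOST variable are assumptions made purely for illustration.

```bash
# Illustrative sketch only; the real provisioning and startup logic lives in
# the GlassFish Docker image repository.

# --- image provisioning: install OpenSSH and adjust a few sshd_config entries
apt-get update && apt-get install -y openssh-server
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?StrictModes.*/StrictModes no/'                    /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/'           /etc/ssh/sshd_config

# --- container startup: pre-accept the DAS host key so no command ever blocks
# on an interactive fingerprint prompt (DAS_HOST is a placeholder for the
# linked DAS container's address)
mkdir -p /root/.ssh
ssh-keyscan -H "$DAS_HOST" >> /root/.ssh/known_hosts 2>/dev/null
```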
Cluster configuration in GlassFish 4.1.1 is not a trivial task to automate and requires some level of expertise. This Docker image can work as a DAS when -e 'DAS=true' is set in the docker run command. A container started with this parameter will start the domain, create a cluster named cluster1, stop the domain, and start it again with the -v parameter so it runs in the foreground. If a GlassFish container has a DAS container linked to it, it assumes the cluster node role: it creates a local instance with the asadmin create-local-instance command, stops the domain, updates the node definition with the asadmin update-node-ssh command issued against the DAS (so that the DAS converts that node into an SSH cluster node), and finally starts the instance with the nadmin start-instance command. For a better understanding, please read the run.sh source code in the GlassFish Docker image repository.
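A condensed sketch of that run.sh logic is shown below. It only mirrors the flow described above; GLASSFISH_HOME, DAS_HOST, the admin port, and the instance/node naming are assumptions, not the actual script.

```bash
#!/bin/bash
# Condensed sketch of the DAS/worker start-up flow described above (not the real run.sh).
# GLASSFISH_HOME, DAS_HOST and the instance/node names are illustrative assumptions.

GLASSFISH_HOME=${GLASSFISH_HOME:-/opt/glassfish4}
ASADMIN="$GLASSFISH_HOME/bin/asadmin"
INSTANCE=$(hostname)

if [ "$DAS" = "true" ]; then
    # DAS role: create cluster1 once, then keep the domain in the foreground (-v).
    "$ASADMIN" start-domain
    "$ASADMIN" create-cluster cluster1
    "$ASADMIN" stop-domain
    exec "$ASADMIN" start-domain -v
else
    # Worker role: register this container as an instance of cluster1 on the
    # linked DAS, let the DAS convert the node into an SSH node, then start it.
    "$ASADMIN" --host "$DAS_HOST" --port 4848 create-local-instance --cluster cluster1 "$INSTANCE"
    "$ASADMIN" --host "$DAS_HOST" --port 4848 update-node-ssh --nodehost "$(hostname)" "$(hostname)"
    exec "$GLASSFISH_HOME/glassfish/lib/nadmin" start-instance "$INSTANCE"
fi
```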
Jelastic offers an HAProxy Docker image ready to be used in Jelastic environments. Containers based on the jelastic/haproxy-managed-lb Docker image can add or remove nodes from the load balancer configuration by running the shell script /root/lb_manager.sh inside the container with one of the following parameters (a usage example follows the list):
- /root/lb_manager.sh --addhosts [Container LAN IP]
- /root/lb_manager.sh --removehost [Container LAN IP]
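For instance, assuming an HAProxy container named haproxy-lb and a worker node with the LAN IP 10.101.1.12 (both values are placeholders), the script could be invoked from the Docker host like this:

```bash
# Illustrative usage; the container name and IP address are placeholders.
docker exec haproxy-lb /root/lb_manager.sh --addhosts 10.101.1.12      # register a backend
docker exec haproxy-lb /root/lb_manager.sh --removehost 10.101.1.12    # deregister it again
```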
Running GlassFish Cluster
For this demo, we used Jelastic as the Docker infrastructure and created a CloudScript for it, available in this repository. We also used the clusterjsp.ear application to test whether the cluster and the load balancer were working as expected.
Before using the JSON file from https://github.com/jelastic-jps/glassfish-cluster, let's check what exactly this environment should start for us. In our case, we compose a CloudScript file, also called JPS (Jelastic Packaging Standard), that describes the topology Jelastic must create, what should be installed, what should be started, and the responses to the events triggered by Jelastic administration, such as scale out and scale in. In the topology section, we see three GlassFish nodes, each in its own group, and an HAProxy node. After this section, there is an onInstall JavaScript object that calls some actions. Lastly, there is a procedures object, which defines routines to run during the installation process; these procedures use shell scripts to add the GlassFish nodes to the cluster.
In this CloudScript, we can notice that there are two events to update the load balancer's configuration, onAfterScaleOut and onAfterScaleIn, and one event to update the cluster, onBeforeScaleIn. For the onAfterScaleOut event, a procedure is called to add the new nodes to the load balancer, using the script inside the HAProxy container. The onAfterScaleIn event simply calls /root/lb_manager.sh directly inside a for loop, passing all the nodes from event.response.nodes. For the onBeforeScaleIn event, a procedure is called to remove the GlassFish nodes from the cluster.
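In shell terms, the scale-in handling boils down to a loop like the one below, executed inside the HAProxy container (NODE_IPS stands in for the addresses taken from event.response.nodes and is only a placeholder):

```bash
# Rough shell equivalent of the onAfterScaleIn handler; NODE_IPS is a
# placeholder for the node addresses provided by event.response.nodes.
NODE_IPS="10.101.1.12 10.101.1.13"
for ip in $NODE_IPS; do
    /root/lb_manager.sh --removehost "$ip"
done
```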
Now that you have a better understanding of the proposed JSON file, let's import it into Jelastic to create an environment. You can access the GlassFish DAS administration console via its URL (obtained by clicking the 'Open in browser' button to the right of the das node name) using HTTPS on port 4848. To test whether the load balancer works, deploy the clusterjsp application to cluster1 and enable the Availability option. When you access the load balancer container URL, you should see a GlassFish web page indicating that the server is running. Append the context of the deployed application and you'll reach the application itself. After one or two page reloads, you'll see that the node serving the request varies.
Create Clustered Environment
You can import the environment by following the steps below:
1. Access Jelastic console and click Import.
2. Select the URL tab and paste the URL of the cluster JSON file.
3. Set the name for the environment and click Install.
4. Finally, as we can see, the cluster has been created.
Deploy and Configure Application
After creating the cluster, we can deploy an application to it and verify that the cluster is working correctly and the load balancer is operating as it should.
1. In the GlassFish das node row, click the Open in browser button located to the right of the node's name.
2. Once the browser window opens, it shows the default GlassFish web page. Change the browser URL to https://<environment_domain>:4848
3. Enter the GlassFish console with the user admin and password glassfish.
4. Go to the Applications section and deploy the clusterjsp.ear application:
- Download clusterjsp.ear and choose it as Packaged File to Be Uploaded to the Server.
- Make sure the Availability option is enabled.
- Finally, set up cluster1 as the application target and click OK to proceed.
5. Now open the HAProxy node in the browser and append /clusterjsp to the end of the URL.
Every time you refresh the page, the IP address of the server that executed the request changes, indicating that the load balancer is working.
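You can also check this from the command line. The minimal sketch below reuses the <environment_domain> placeholder from above, requests the clusterjsp page a few times, and prints the lines that mention the serving server; the grep pattern is only an approximation of the page's output.

```bash
# Illustrative check; replace <environment_domain> with your environment's domain.
for i in $(seq 1 5); do
    curl -sL "http://<environment_domain>/clusterjsp/" | grep -i 'server'
done
```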
Conclusion and Future Work
Docker images and scripting together do a great job of automating environment creation. Docker containers built from images that run in Jelastic exactly as they run on any other infrastructure, combined with scripting that captures the correct topology of the servers involved in the environment, are the bread and butter of any Docker environment on Jelastic. Such an environment benefits from Jelastic's advantages in resource management, and it can be migrated from other platforms to Jelastic, and away again if needed.
Sign up now for free to configure your own GlassFish cluster at any of our partners’ platforms within Jelastic Cloud Union.