Tuesday, February 26, 2013

GlassFish Clustering Setup


1.  Detailed steps to create the domain, cluster, and clustered instances in GlassFish


It is assumed that one cluster is created across two servers, un128desb1 and un128desb2. Within the cluster there are two instances, which will reside on un128desb1 and un128desb2 respectively.

asadmin commands must be executed as the jcaps user.


1.1 Steps to create a domain with the cluster profile and start the domain on the server un128desb1

1) Execute the asadmin command: create-domain on un128desb1

/opt/JCAPS62/appserver/bin/asadmin create-domain \
--portbase $PORTBASE \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--profile cluster \
$DOMAIN_NAME

where
PORTBASE is the base port number for this domain.
ADMIN_USER_NAME is the admin user name for this domain.
PASSWORD_FILE is the name of the password file that contains the admin password for this domain.
DOMAIN_NAME is the name of the domain to be created.
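
For illustration only, assuming hypothetical values (portbase 9000, admin user admin, password file /opt/JCAPS62/adminpassword.txt, domain name clusterDomain), the command might look like the following sketch. The password file entries shown follow the usual asadmin convention; the exact entries required can vary by release.

# hypothetical password file (asadmin convention)
echo "AS_ADMIN_PASSWORD=adminadmin" > /opt/JCAPS62/adminpassword.txt
echo "AS_ADMIN_MASTERPASSWORD=changeit" >> /opt/JCAPS62/adminpassword.txt

# create a cluster-profile domain named clusterDomain
/opt/JCAPS62/appserver/bin/asadmin create-domain \
--portbase 9000 \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--profile cluster \
clusterDomain

With portbase 9000 the admin port is 9048 (PORTBASE+48); that value is used as ADMIN_PORT in the commands below.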


2) Execute the asadmin command: start-domain on un128desb1

/opt/JCAPS62/appserver/bin/asadmin start-domain \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$DOMAIN_NAME

where
ADMIN_USER_NAME is the admin user name for this domain.
PASSWORD_FILE is the name of the password file that contains the admin password for this domain.
DOMAIN_NAME is the name of the domain to be started.
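
To confirm that the domain started, the list-domains subcommand can be run on un128desb1; a minimal sketch (the output format may differ slightly by release):

/opt/JCAPS62/appserver/bin/asadmin list-domains
# expected output is something like:
#   clusterDomain running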

1.2 Steps to create a node agent on the machine that hosts the DAS (un128desb1)

1) Execute the asadmin command: create-node-agent on un128desb1

/opt/JCAPS62/appserver/bin/asadmin create-node-agent \
--host $HOST_NAME \
--port $ADMIN_PORT \
$NODE_NAME

where
HOST_NAME is the machine name where the new node agent will reside. Here it is un128desb1.
ADMIN_PORT is the admin port number. It is PORTBASE+48.
NODE_NAME is the name of the node agent to be created.
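
For illustration, continuing the hypothetical values from section 1.1 (admin port 9048, node agent name node1), the command might look like:

# run on un128desb1, where the DAS also resides
/opt/JCAPS62/appserver/bin/asadmin create-node-agent \
--host un128desb1 \
--port 9048 \
node1
# asadmin may prompt for the admin user name and password if they are not supplied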

2) After the command is executed successfully, a new folder /opt/JCAPS62/appserver/nodeagents/node1 is created on machine un128desb1, where node1 is the name of the node agent that was created.
The newly created node agent is also added to /opt/JCAPS62/appserver/domains/dmlusterUtilities/config/domain.xml on un128desb1, where the DAS resides:


<node-agents>
  <node-agent name="node1" ...>
  </node-agent>
</node-agents>


After the node agent is created, it is not displayed on the Admin Console page until it is started.

3) Execute the asadmin command: start-node-agent
/opt/JCAPS62/appserver/bin/asadmin start-node-agent \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and password file as above, and NODE_NAME is the name of the node agent to be started.
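
To verify that the node agent is registered with the DAS and running, the list-node-agents subcommand can be used; a sketch, with the same DAS admin settings as above:

/opt/JCAPS62/appserver/bin/asadmin list-node-agents \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host un128desb1 \
--port $ADMIN_PORT
# expected output lists each node agent with its status, e.g. node1 running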


1.3 Steps to create a domain with the cluster profile and start the domain on the server un128desb2

Creating a domain with the cluster profile on un128desb2 is the same as on un128desb1; the commands are simply executed on un128desb2.


1.4 Steps to create a node agent on the server un128desb2 that talks to the DAS on the remote machine

1) Execute the asadmin command: create-node-agent on un128desb2 (un128desb2 is the other machine, where the created node agent will reside)


/opt/JCAPS62/appserver/bin/asadmin create-node-agent \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where
HOST_NAME is the machine that hosts the DAS. Here it is un128desb1.
ADMIN_PORT is the admin port for that host.
ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and the password file used to authenticate to the DAS; the password file itself resides on un128desb2, where the command is executed.
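
For illustration, with the same hypothetical values as before (DAS on un128desb1, admin port 9048, admin user admin, a password file local to un128desb2), creating node2 on un128desb2 might look like:

# run on un128desb2; --host and --port point at the DAS on un128desb1
/opt/JCAPS62/appserver/bin/asadmin create-node-agent \
--host un128desb1 \
--port 9048 \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
node2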

2) After the command is executed successfully, a new folder /opt/JCAPS62/appserver/nodeagents/node2 is created on machine un128desb2, where node2 is the name of the node agent that was created.
The newly created node agent is also added to /opt/JCAPS62/appserver/domains/dmlusterUtilities/config/domain.xml on un128desb1, where the DAS resides.

Note: it is domain.xml on un128desb1, not un128desb2.


<node-agents>
  <node-agent name="node2" ...>
  </node-agent>
</node-agents>

After the node agent is created, it is not displayed on the Admin Console page until it is started.

3) Execute the asadmin command: start-node-agent on un128desb2
/opt/JCAPS62/appserver/bin/asadmin start-node-agent \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and the name of the password file used to authenticate to the DAS, which resides on un128desb1.

After the command is executed successfully, this node agent will be displayed in the Admin Console on the host where the DAS resides. Here it is un128desb1.



Figure2: Node agents from the AdminConsole

Note: No node agent is displayed in the Admin Console on the host un128desb2, since the created node agent node2 is associated with the DAS on un128desb1.


1.5 Steps to create cluster on un128desb1

1) Execute the asadmin command: create-cluster on the server un128desb1

/opt/JCAPS62/appserver/bin/asadmin create-cluster \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$CLUSTER_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number for the host where the cluster resides, respectively. Here it is un128desb1.
ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and password file name for that host.
CLUSTER_NAME is the name of the cluster that is to be created.
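
For illustration, with hypothetical values (cluster name cluster1, DAS on un128desb1 with admin port 9048), the command might look like:

/opt/JCAPS62/appserver/bin/asadmin create-cluster \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
cluster1
# since no configuration is specified, a configuration named cluster1-config is created for the cluster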

2) After the command is executed successfully, a new cluster is displayed in the Admin Console on un128desb1.

Note: Since no configuration is specified when creating the cluster, a new configuration called CLUSTER_NAME-config is created and used by the new cluster.

Figure3: Clusters from AdminConsole


1.6 Steps to create cluster instances on un128desb1

1) Execute the asadmin command: create-instance on un128desb1 to create instance1, which resides on the machine where node agent node1 resides.
/opt/JCAPS62/appserver/bin/asadmin create-instance \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
--nodeagent $NODE1_NAME \
--cluster $CLUSTER_NAME \
$INSTANCE1_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the host.
NODE1_NAME is the node agent that is used to manage the created instance.
CLUSTER_NAME is the name of the cluster to which the created instance belongs.
INSTANCE1_NAME is the name of the created instance.
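
For illustration, continuing the hypothetical values (node agent node1, cluster cluster1, instance name instance1), the command might look like:

/opt/JCAPS62/appserver/bin/asadmin create-instance \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
--nodeagent node1 \
--cluster cluster1 \
instance1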

2) After the successful execution of this command, a new folder /opt/JCAPS62/appserver/nodeagents/node1/instance1 is created on machine un128desb1.
The Admin Console on un128desb1 shows the newly created cluster instance.



Figure4: Newly created cluster instance from AdminConsole

1.7 Steps to create cluster instances on un128desb2


1) Execute the asadmin command: create-instance on un128desb1 to create instance2, which resides on the remote machine un128desb2.

Note: This command is executed on machine un128desb1 rather than on un128desb2, but the created instance2 resides on un128desb2.


/opt/JCAPS62/appserver/bin/asadmin create-instance \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
--nodeagent $NODE2_NAME \
--cluster $CLUSTER_NAME \
$INSTANCE2_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the host.
NODE2_NAME is the node agent that is used to manage the created instance. Here NODE2_NAME resides on un128desb2.
CLUSTER_NAME is the cluster to which the created instance belongs.
INSTANCE2_NAME is the name of the created instance.

2) After the successful execution of this command, a new folder /opt/JCAPS62/appserver/nodeagents/node2/instance2 is created on machine un128desb2 (node2 is the name of the node agent created in section 1.4).
The Admin Console on un128desb1 shows the newly created cluster instance.


Figure5: Newly created cluster instance from AdminConsole
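
At this point both instances should be registered with the DAS. Assuming the list-instances subcommand is available in this release, they can be listed from un128desb1; a sketch:

/opt/JCAPS62/appserver/bin/asadmin list-instances \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host un128desb1 \
--port $ADMIN_PORT
# expected output is something like:
#   instance1 not running
#   instance2 not running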

1.8 Steps to start the cluster on un128desb1

1) Execute the asadmin command: start-cluster on un128desb1 to start the cluster


/opt/JCAPS62/appserver/bin/asadmin start-cluster \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$CLUSTER_NAME

2) This command will take some time to finish. After the successful execution of this command, the cluster and the instances within the cluster will be started.
The Admin Console on un128desb1 shows the started cluster and its instances.

Figure6: Started cluster from AdminConsole
Figure7: Cluster instances within the cluster


1.9 Steps to delete the cluster instance

Instances can be deleted from the AdminConsole or with an asadmin command.

1) Execute the asadmin command: delete-instance on un128desb1 to delete the instance
/opt/JCAPS62/appserver/bin/asadmin delete-instance \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$INSTANCE_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the cluster resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain.
INSTANCE_NAME is the name of the instance to be deleted.
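
Note that a running instance normally has to be stopped before it can be deleted. A hedged sketch, using the hypothetical values from the earlier sections (instance2, DAS admin port 9048):

# stop the instance first (if it is running), then delete it
/opt/JCAPS62/appserver/bin/asadmin stop-instance \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
instance2

/opt/JCAPS62/appserver/bin/asadmin delete-instance \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
instance2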

2) Alternatively, from the Admin Console, select the cluster instances to be deleted and then click the Delete button.


Figure8: Selecting the cluster instance to be deleted from AdminConsole

1.10 Steps to delete the node agents

Node agents can be deleted from the AdminConsole or with an asadmin command.
1) Execute the asadmin command: delete-node-agent on un128desb1 to delete the node agent


/opt/JCAPS62/appserver/bin/asadmin delete-node-agent \
$NODEAGENT_NAME

where
NODEAGENT_NAME is the name of the node agent that is to be deleted.
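
A hedged sketch, assuming the node agent is stopped before deletion and that delete-node-agent is executed on the machine where the node agent resides (here node1 on un128desb1):

/opt/JCAPS62/appserver/bin/asadmin stop-node-agent node1
/opt/JCAPS62/appserver/bin/asadmin delete-node-agent node1
# to also remove the node agent entry from the DAS configuration,
# the related delete-node-agent-config subcommand can be run against the DAS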

2) Alternatively, from the Admin Console, select the node agent to be deleted and then click the Delete button.


Figure9: Selecting the node agent to be deleted from AdminConsole

2 Create resources in a cluster

Creating resources in a cluster is the same as for a standalone server. The only difference is to add the cluster from Available Targets as one of the Selected Targets.



Figure10: Selecting targets for deployed resources from AdminConsole
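
As a hedged illustration of targeting the cluster from the command line, an Oracle JDBC connection pool and resource might be created roughly as follows (pool name, JNDI name, and connection properties are hypothetical; the remaining pool properties can be set from the AdminConsole):

/opt/JCAPS62/appserver/bin/asadmin create-jdbc-connection-pool \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
--datasourceclassname oracle.jdbc.pool.OracleDataSource \
--restype javax.sql.DataSource \
--property user=scott:password=tiger \
OraclePool

# --target makes the resource available to the cluster instances
/opt/JCAPS62/appserver/bin/asadmin create-jdbc-resource \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
--connectionpoolid OraclePool \
--target cluster1 \
jdbc/OracleDS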

3 Deploy the applications to a cluster

Deploying applications to a cluster is the same as for a standalone server. The only difference is to add the cluster as one of the Selected Targets.

Figure11: Selecting targets for deployed applications from AdminConsole
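
As a hedged sketch, deploying a hypothetical application archive to the cluster from the command line might look like:

/opt/JCAPS62/appserver/bin/asadmin deploy \
--user admin \
--passwordfile /opt/JCAPS62/adminpassword.txt \
--host un128desb1 \
--port 9048 \
--target cluster1 \
/tmp/myApplication.ear
# --target cluster1 deploys the application to every instance in the cluster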


Note: When creating Oracle JDBC resources, make sure the Oracle JDBC driver jar ojdbc5.jar is present in the GlassFish installation on both server machines.

In this example ojdbc5.jar is located in the folder /opt/JCAPS62/appserver/lib on the machines un128desb1 and un128desb2.


4 Deploy ESB to a cluster

Deploying ESB artifacts (service assemblies, binding components, service engines, or shared libraries) to a cluster is the same as for a standalone server. The only difference is to add the cluster from Available Targets as one of the Selected Targets.

5 Operation steps in a cluster

5.1 Start and stop the node agents

In order to start the cluster and the cluster instances, the node agents associated with the cluster must be started first.
Whether the instances are started when the node agent is started depends on the node agent's settings.
In the AdminConsole each node agent has a checkbox: Start Instances On Startup. If it is checked, all instances associated with the node agent are started when the node agent starts.


Figure12: Node agent status from AdminConsole

However, when the node agent is stopped, the instances associated with it are stopped as well.
At the moment there seems to be no way to start node agents from the Admin Console. Node agents can be started by using the asadmin command:

/opt/JCAPS62/appserver/bin/asadmin start-node-agent \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the host.
NODE_NAME is the node agent to be started.


Figure13: Node agents from AdminConsole
Likewise, node agents can be stopped by using the asadmin command:
/opt/JCAPS62/appserver/bin/asadmin stop-node-agent \
$NODE_NAME

where
NODE_NAME is the node agent to be stopped.



Figure14: Node agent status from AdminConsole
Note: The start-node-agent and stop-node-agent commands are executed on the machine where the node agent resides. For example, node1 is associated with the host un128desb1 and node2 with the host un128desb2; to start and stop node1 the asadmin commands are executed on un128desb1, and to start and stop node2 they are executed on un128desb2.


5.2 Start and stop the cluster

After the node agents are started, the cluster can be started and stopped. Whenever the cluster is started or stopped, all instances within the cluster are started or stopped as well.
The cluster can be started and stopped from either the AdminConsole or asadmin commands.
From the Clusters page in the AdminConsole, select the cluster to be started or stopped, then click the Start Cluster button to start the cluster or the Stop Cluster button to stop it.


Figure15: Start and stop of cluster from AdminConsole

The cluster can be started by using the asadmin command:


/opt/JCAPS62/appserver/bin/asadmin start-cluster \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$CLUSTER_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the cluster resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain.
CLUSTER_NAME is the name of the cluster to be started.
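
Likewise, the cluster can be stopped from the command line with the stop-cluster subcommand, which takes the same parameters:

/opt/JCAPS62/appserver/bin/asadmin stop-cluster \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$CLUSTER_NAME
# stops the cluster and all instances within it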

5.3 Start and stop the cluster instances 

Each cluster instance can be started and stopped individually from either the Admin Console or an asadmin command.


Figure16: Start and stop of one cluster instance from AdminConsole
The cluster instance can be stopped by using the asadmin command:


/opt/JCAPS62/appserver/bin/asadmin stop-instance \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$INSTANCE_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the instance resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain.
INSTANCE_NAME is the name of the instance to be stopped.

The cluster instance can be started by using the asadmin command:


/opt/JCAPS62/appserver/bin/asadmin start-instance \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$INSTANCE_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the instance resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain.
INSTANCE_NAME is the name of the instance to be started.
