Table of Contents
Aim of this AppNote
BCC overview
Set Up Details
Installation Steps:
1. Install NCS
2. Install BCC in all the nodes of all the clusters
3. Install IDM and create/configure bcc drivers
4. Configure the clusters for BCC
5. Verify that your BCC is working by doing BCC migration
6. Add third cluster into the BCC
This AppNote aims to provide all the steps required to set up a BCC using a demo setup. It also demonstrates how to migrate a resource from one cluster to another within the BCC, and how to add a new cluster to an existing BCC setup.
However, configuring the mirrored storage between the clusters is not covered, as it is vendor specific and left to the user. Instead, we create an iSCSI target that is shared/visible to all the clusters, which serves our requirement.
Novell Business Continuity Clustering (BCC) offers corporations the ability to maintain mission critical (24x7x365) data and application services to their users while still being able to perform maintenance and upgrades on their systems.
Novell BCC is a cluster of Novell Cluster Services (NCS) clusters in which cluster maintenance and synchronization are automated, allowing an entire site to fan-out failover to multiple other sites. BCC uses eDirectory and policy-based management of the resources and storage systems.
Novell BCC software provides the following advantages:
Setting up BCC involves the following steps (see the Installation Steps listed at the top of this AppNote):
Each step, except configuring the mirrored storage, is described below in detail. For this demo, I am not using mirrored storage as such. Refer to the Set Up Details section below for more detail.
Make sure that you read the section “Set Up Details” before you start the installation.
Requirements for BCC 1.2 Beta Test Environments:
Make sure that the OES2 servers you are planning to use for the BCC setup meet the following requirements:
eDirectory Structure:
The eDirectory structure plays an important role in BCC functioning properly. If you do not follow the points below, you may run into unpredictable behavior.
Fig. eDirectory structure for demo BCC set-up
The eDirectory structure used for our BCC setup is shown above. It has three OUs, cluster1, cluster2, and cluster3, one for each cluster, and each OU is a separate eDirectory partition. The first partition, cluster1, holds the landing zone for cluster1 (cluster1LandingZone), the server objects of its two nodes (wgp-dt82, wgp-dt83), the cluster object (cluster1), and cluster1's BCC IDM driver set (cluster1Drivers). The same applies to cluster2's partition, cluster2, and cluster3's partition, cluster3.
IDM (Identity Manager ) and its requirements:
BCC 1.2 requires IDM 3.6 or later to run on one node in each of the clusters that belong to the BCC in order to properly synchronize and manage your BCC.
Make sure that the node where IDM will be installed holds a full eDirectory replica, with at least read/write access to all the eDirectory objects that will be synchronized between the clusters.
Component Locations:
The above diagram also shows the BCC component locations, that is, which software is installed on which server. For example, wgp-dt82 (32-bit) is labeled NCS, BCC, IDM, meaning that NCS, BCC, and IDM are installed on it; wgp-dt83 is labeled NCS, BCC, meaning that NCS and BCC are installed and no IDM is installed. The same applies to the other servers/nodes. From here on, let us refer to the nodes where IDM will be installed (wgp-dt82, wgp-dt84, wgp-dt89) as the "IDM nodes".
Server setup:
As mentioned above, this setup has six OES2 SP1 Linux servers (wgp-dt81, wgp-dt82, wgp-dt83, wgp-dt84, wgp-dt86, wgp-dt89). Five of them are grouped to form the three clusters cluster1, cluster2, and cluster3, and wgp-dt81 acts as the iSCSI target. Below is how our BCC setup looks:
cluster1: wgp-dt82, wgp-dt83
cluster2: wgp-dt84, wgp-dt86
cluster3: wgp-dt89
iSCSI target: wgp-dt81
Mirrored storage setup:
Use whatever method is available to implement the mirrored storage between the clusters. The BCC 1.2 product does not perform data mirroring. You must separately configure either SAN-based mirroring or host-based mirroring.
Choosing and configuring the mirror storage is left to the user as it is vendor specific.
However, for this demo setup we will not be doing mirroring as such. Instead, we create an iSCSI target that is shared/visible to all the clusters. Hence, any modification or deletion done on this shared device is seen and reflected in all the clusters of the BCC. This removes the need to mirror the storage among the clusters. We will use the server wgp-dt81 as our iSCSI target server.
This iSCSI target server has 4 raw (unformatted) partitions. Three partitions of 2 GB each are used for NCS cluster-specific data, and one partition of 30 GB is used as common storage, shared and visible to all the clusters. These partitions are exported as iSCSI targets with the iSCSI identifiers mentioned below for easy reference and identification. From here onwards, these partitions are referred to by their corresponding iSCSI identifiers. For example, the "cluster1sbd" partition means the partition created only for cluster1 and its related data, including the SBD partition.
Network details:
BCC is meant for clusters that are geographically separated, across a WAN. However, for this demo setup we will use servers in a single LAN and a single subnet, as shown in the above diagram.
Keeping all the notes mentioned above in mind, let us start with the installation. The first step of the BCC installation is to install Novell Cluster Services (NCS), so let us start with that.
1.1. Prepare the iSCSI target server: create partitions and export them as iSCSI targets.
This section will be different if you are using another method to implement the mirrored storage. You can follow this process as-is only if you use the same method I am using here to set up BCC.
Let us create the 4 partitions on the iSCSI target server, wgp-dt81, as mentioned in the Set Up Details above. Below are the steps to do this.
Steps:
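If you prefer the command line to the YaST Partitioner, the intended layout can be sketched as follows. The disk name /dev/sdb and the partition-to-identifier mapping are assumptions for this demo; adjust them to your hardware.

# Intended partition layout on the iSCSI target server wgp-dt81
# (create the partitions with fdisk or the YaST Partitioner and leave them raw/unformatted):
#   /dev/sdb1    2 GB   -> cluster1sbd   (cluster1-specific data, including SBD)
#   /dev/sdb2    2 GB   -> cluster2sbd   (cluster2-specific data, including SBD)
#   /dev/sdb3    2 GB   -> cluster3sbd   (cluster3-specific data, including SBD)
#   /dev/sdb4   30 GB   -> sharedDevice  (common storage, shared by all clusters)
# Verify the result:
fdisk -l /dev/sdb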
Once you are done with the partitions, export them as iSCSI targets with the identifier names explained in the Set Up Details, so that the other servers (called initiators in iSCSI terminology) can connect to them and use them as shared storage.
Below are the steps to do this.
Steps:
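If you use the iSCSI Enterprise Target that ships with SLES 10/OES 2 (the YaST iSCSI Target module writes the equivalent configuration for you), the exported targets end up in /etc/ietd.conf roughly like the sketch below. The IQN prefix and device paths are assumptions for this demo.

# /etc/ietd.conf (sketch): one Target block per exported partition
Target iqn.2009-01.com.example:cluster1sbd
    Lun 0 Path=/dev/sdb1,Type=fileio
Target iqn.2009-01.com.example:cluster2sbd
    Lun 0 Path=/dev/sdb2,Type=fileio
Target iqn.2009-01.com.example:cluster3sbd
    Lun 0 Path=/dev/sdb3,Type=fileio
Target iqn.2009-01.com.example:sharedDevice
    Lun 0 Path=/dev/sdb4,Type=fileio

# Restart the target service so the new targets are exported
rciscsitarget restart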
This completes the preparation of the iSCSI target. The iSCSI target server, wgp-dt81, is now ready to accept iSCSI connections from the iSCSI initiators, the servers (wgp-dt82, wgp-dt83, wgp-dt84, wgp-dt86, wgp-dt89) that will form the clusters.
1.2. Prepare the servers that will be part of the clusters to connect to the corresponding iSCSI targets
After the target preparation, all the servers that will be part of the clusters need to establish iSCSI connections to the iSCSI target. While making the iSCSI connections, make sure that each server connects to the right iSCSI targets, as mentioned below.
So, first let us go ahead with the initiator configuration for the servers wgp-dt82 and wgp-dt83, which will form cluster1. The same process can be repeated for the servers of the other clusters, making sure that they connect to the right targets as explained just above.
Let us start with wgp-dt82. Listed below are the steps to do this.
Steps:
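The same connections can also be made from the shell with open-iscsi instead of the YaST iSCSI Initiator module. The target IQNs and the target server IP address (164.99.103.81) below are assumptions for this demo; substitute your own values.

# Discover all targets exported by the iSCSI target server wgp-dt81
iscsiadm -m discovery -t st -p 164.99.103.81
# Log in only to the targets this node needs (cluster1 nodes use cluster1sbd and sharedDevice)
iscsiadm -m node -T iqn.2009-01.com.example:cluster1sbd -p 164.99.103.81 --login
iscsiadm -m node -T iqn.2009-01.com.example:sharedDevice -p 164.99.103.81 --login
# Make the connections persist across reboots
iscsiadm -m node -T iqn.2009-01.com.example:cluster1sbd -p 164.99.103.81 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2009-01.com.example:sharedDevice -p 164.99.103.81 --op update -n node.startup -v automatic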
This completes the iSCSI connections to the corresponding iSCSI targets for the wgp-dt82 server, which will be part of cluster1.
Repeat this step, step 1.2, for the server wgp-dt83, which will be the other member of the same cluster, cluster1.
Repeat step 1.2 for all the other servers that will be members of the other clusters, cluster2 and cluster3. While making the iSCSI connections, make sure that each server connects to the right iSCSI targets, as mentioned above.
This completes the initiator and target configuration. Now all the servers are ready for setting up the Novell Cluster Services (NCS) clusters. Let us go ahead with the NCS configuration.
1.3. Configure NCS using YaST2
1.3.1. Initialize the devices that correspond to the connected iSCSI targets and make them cluster sharable
Once the iSCSI targets are connected, they are visible as devices in NSSMU. They need to be initialized and made cluster sharable before they can be used by the cluster. Make sure that each device is initialized only once, from a single server connected to it, not from multiple servers.
Let us initialize the devices that correspond to the iSCSI targets cluster1sbd and sharedDevice, and make them cluster sharable, from wgp-dt82. This is reflected on all the servers connected to these targets, so it does not need to be repeated on the other servers.
Initialize the devices that correspond to the iSCSI targets cluster2sbd and cluster3sbd from wgp-dt84 and wgp-dt89 respectively.
Let us start with the devices that correspond to cluster1sbd and sharedDevice from wgp-dt82. Given below are the steps to do this.
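Before running the NSSMU steps below, it is worth confirming that the connected iSCSI LUNs are actually visible as local SCSI disks on wgp-dt82 (device names will vary on your system):

# Each connected iSCSI target should show up as an additional sd* disk
cat /proc/scsi/scsi
# Map each iSCSI target name to its local device node
ls -l /dev/disk/by-path/ | grep iscsi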
Steps:
At this point we are done with initializing and cluster-sharing the devices that correspond to the iSCSI targets cluster1sbd and sharedDevice. Now we are ready for the cluster setup and configuration. Let us go ahead with it.
1.3.2. Setting up the cluster: the NCS configuration
Let us start with wgp-dt82, which is going to be the first member of cluster1. It is assumed that the NCS packages are already installed but not yet configured, so let us do the configuration now. Given below are the steps, along with screenshots, to do this.
Steps:
Cool, we are done with the first node of cluster1. Let us make the second server, wgp-dt83, join this cluster as the second member. To do this, follow the steps below.
Cool, we are done with the setup and configuration of the first cluster, cluster1, and it is running fine.
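A quick way to confirm this from either node, using the standard NCS console commands (the service script name is the one used on OES 2 Linux):

rcnovell-ncs status   # the Novell Cluster Services service should be running
cluster view          # both wgp-dt82 and wgp-dt83 should be listed as members
cluster status        # at this point typically only the Master_IP_Address_Resource is online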
1.3.3. Follow the same steps, step 1.3.1 (Initialization) and step 1.3.2 (NCS configuration), to set up and configure the remaining clusters, cluster2 and cluster3.
At this point all our clusters, cluster1, cluster2, cluster3 are configured and running. So we are ready to proceed with the BCC Installation.
As mentioned in the setup notes on eDirectory, IDM, and BCC component locations, BCC, like NCS, needs to be installed on all the nodes of all the clusters that are part of the BCC. So let us now start installing it on one server, wgp-dt82. For the other servers, we can repeat the same process.
2.1. Add the BCC admin user(s), bccadmin (the name can be anything). This step is optional: if you wish to use a separate user to administer the BCC, go ahead with this step; otherwise skip it, since the eDirectory admin can be used to manage the BCC.
2.2. Create the BCC group, bccgroup, all lower case (hard-coded as of BCC 1.2)
Given below are the steps to do this.
Steps:
2.3. Now add the BCC admin user(s), bccadmin, as a member of the group bccgroup.
Given below are the steps to do this.
Steps:
2.4. LUM-enable the group bccgroup and include all the workstations of all the clusters
Given below are the steps to do this.
Steps:
Given below are the steps to do this.
Steps:
2.6. Repeat the same process (step 2.3 : Add the BCC admin user(s), …. ) for all other cluster objects, cluster2, cluster3
2.7. Add the BCC admin user(s), bccadmin, to the ncsgroup by editing the /etc/group file.
Below is how it can be done. Log in to the server as root, open the /etc/group file, and find either of the following lines:
ncsgroup:!:107:
or
ncsgroup:!:107:bccd
The file should contain one of the above lines, but not both.
Depending on which line you find, edit the line to read as follows:
ncsgroup:!:107:bccadmin,<other users separated by comma if any>
or
ncsgroup:!:107:bccd,bccadmin,<other users separated by comma if any>
Replace bccadmin with the BCC Administrator user you created.
Notice the group ID number of the ncsgroup. In this example, the number 107 is used. This number can be different for each cluster node.
Let us start with wgp-dt82, the first node of cluster1, and put bccadmin and admin in the ncsgroup. Below is how it can be done.
Save the /etc/group file.
Execute id <bcc admin user name> and verify that ncsgroup appears as a secondary group of the BCC Administrator user(s).
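On wgp-dt82, for example, the check might look like this (the group ID and the existing member list will differ from node to node):

grep ncsgroup /etc/group     # e.g. ncsgroup:!:107:bccd,bccadmin,admin
id bccadmin                  # ncsgroup should appear in the groups= list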
2.8. Repeat step 2.7 on all the other servers of all the clusters.
2.9. Download the BCC software and install the packages
Now let us download the BCC software on all the servers. Let us start with wgp-dt82.
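A minimal command-line sketch of the install, assuming the downloaded RPMs are in the current directory; the exact package names depend on the BCC build you downloaded, so verify them against the media:

# Install the BCC packages (names are indicative only)
rpm -Uvh novell-business-continuity-cluster-*.rpm
# Confirm what got installed
rpm -qa | grep -i business-continuity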
2.10. Configure BCC on all the nodes of the clusters
Now we can start the BCC configuration. Let us start with wgp-dt82 of cluster1.
2.11. Repeat steps 2.9 to 2.10 on all the nodes of all the clusters.
3.1. Install IDM on one node (a 32-bit server) in each cluster
We need to install IDM on all the IDM nodes (wgp-dt82, wgp-dt84, wgp-dt89).
To start with, let us install IDM on wgp-dt82, the IDM node of cluster1.
Steps are given below:
3.2. Repeat step 3.1 for the other two IDM nodes of the other two clusters cluster2 (wgp-dt84) and cluster3 (wgp-dt89)
3.3. Configure/Create BCC IDM drivers
Let us first configure BCC for two clusters (cluster1 and cluster2) and then join the third cluster, cluster3, to it. This is because BCC driver creation and configuration require some special care, especially when dealing with three or more clusters. For now, let us keep the third cluster cluster3's BCC IDM driver configuration aside. We will come back to it in step 6.
3.3.1. Create/Configure BCC driver for cluster1 to sync with cluster2
Log in to iManager on one of the IDM nodes where the IDM plug-ins are installed. Let us use wgp-dt82's iManager.
Driver name: cluster1toCluster2BCCdriver
Any name can be given, but it should be unique. I have used the driver name shown above.
Name of SSL Certificate: SSL CertificateDNS
This certificate can be seen as follows: click View Objects, then click the Organization object; in the right-side pane you can see the certificate object with this name.
DNS name of other IDM node: 164.99.103.84
This is the IP address of the IDM server with which this driver will synchronize. In our BCC setup it is the IDM node wgp-dt84 of cluster2.
Port number for this driver: 2002
If you have a business continuity cluster that consists of three or four clusters, you must specify unique port numbers for each driver template set. The default port number is 2002, and I have left it at that default.
Full Distinguished Name (DN) of the cluster this driver services: cluster1.cluster1.bcc
Specify it, or browse using the object selector button, and select the current cluster, cluster1, here.
Fully Distinguished Name (DN) of the landing zone container: cluster1LandingZone.cluster1.bcc
This is the container where the cluster pool and volume objects of the other cluster are placed when they are synchronized to this cluster. The NCP server objects for the virtual server of a BCC-enabled resource are also placed in the landing zone. I had already created a container called cluster1LandingZone; refer to the eDirectory structure.
I have selected this container as the landing zone for cluster1.
3.3.2. Create/Configure BCC driver for cluster2 to sync with cluster1
3.3.3. Configure the firewall to allow the BCC driver ports, if a firewall is enabled.
Make sure that the BCC driver ports are allowed through the firewall if a firewall is enabled on the IDM nodes. Let us start with wgp-dt82, the IDM node of cluster1. Follow the steps below to do this.
Steps:
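On SLES 10/OES 2 this can be done by adding the driver port (2002 in this setup) to the SuSEfirewall2 configuration; a minimal sketch, assuming SuSEfirewall2 is the firewall in use:

# /etc/sysconfig/SuSEfirewall2 -- allow the BCC IDM driver port
# (append 2002 to any ports already listed in this variable)
FW_SERVICES_EXT_TCP="2002"

# Reload the firewall so the change takes effect
SuSEfirewall2 start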
3.4. Upgrade the drivers to use the new, enhanced IDM architecture and start the drivers.
Once you have created the drivers for each cluster, you must upgrade them to the IDM 3.6 format, the enhanced architecture.
Follow the steps below to do this.
4.1. Enable BCC on each cluster
4.2. Adding Cluster Peer Credentials
In order for one cluster to connect to a second cluster, the first cluster must be able to authenticate to the second cluster. To make this possible, you must add the username and password of the user that the selected cluster will use to connect to the selected peer cluster.
As of BCC 1.2, this can be done only through the command line (a known bug in BCC 1.2).
Below is how we can do it.
At the terminal console prompt, enter “cluster connections”.
[You should see both the clusters, cluster1 and cluster2. If all the clusters are not present, then either the IDM drivers are not synchronized or BCC is not properly enabled on the clusters. If synchronization is in progress, wait for it to complete, then try cluster connections again.]
For each cluster in the list, type "cluster credentials <cluster name>" at the server console prompt, then enter the BCC admin username (bccadmin or admin) and password when prompted. I have used "admin" here. This is shown below.
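On wgp-dt82, for example, the sequence looks like this (cluster2 is the peer cluster name in this demo):

cluster connections            # cluster1 and cluster2 should both be listed
cluster credentials cluster2   # enter the BCC admin username and password when prompted
cluster connections            # re-check; the connection status should now be OK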
4.3. Verify the Cluster connections
Now you should see that the connection status is fine, as shown below.
Repeat steps 4.2 and 4.3 on all the nodes of all the clusters, and make sure that all the clusters are listed by the cluster connections command and that the connection status is OK.
4.4. BCC enable the cluster resources
Create one pool and a volume in it using NSSMU. Make sure that you create the pool on the shared device/partition (the 30 GB sharedDevice). Let us do this on the server wgp-dt82, a node of cluster1.
Now let us BCC enable the same pool.
Steps:
Now let us verify that BCC is working by migrating this BCC-enabled pool from cluster1 to cluster2.
Steps:
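After the migration completes, a quick check from the server console shows where the resource ended up. The resource name POOL1_SERVER used in the comments is an assumption for this demo.

# Run on a node of the target cluster (cluster2): the migrated pool resource,
# e.g. POOL1_SERVER, should now be shown as Running on one of cluster2's nodes.
cluster status
# On the source cluster (cluster1) the resource should no longer be running there.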
This completes the setup of the two-cluster BCC and the demo of BCC migration.
If you have three or more clusters in your business continuity cluster, you should set up synchronization drivers in a manner that prevents IDM loops. IDM loops can cause excessive network traffic and slow server communication and performance.
There should not be any direct synchronization between cluster2 and cluster3, to avoid an IDM loop. If one more cluster needs to be added, you can configure its driver to synchronize with any of the existing drivers, as long as no loop is formed.
6.1. Create one more BCC driver, cluster1tocluster3BCCdriver, to sync with the new cluster, cluster3, in cluster1's existing driver set, cluster1Drivers.
Now you will see this new driver, cluster1toCluster3BCCdriver in Driver Set Overview page.
6.2. Create one more driver set, cluster3Drivers, for the new cluster, cluster3, with a driver, cluster3tocluster1BCCdriver, in it to sync with cluster1
Repeat step 3.3.1. Make sure that the following values are entered; values other than the ones mentioned below are the same for all the driver configurations.
Driver name: cluster3tocluster1BCCdriver
Name of the SSL certificate: SSL CertificateDNS
DNS name of the other IDM node: 164.99.103.82
(the IP address of wgp-dt82, the IDM node of cluster1, with which this driver will sync)
Port number of this driver: 2003
Full Distinguished Name (DN) of the cluster this driver services: cluster3.cluster3.bcc
(the DN of this new cluster )
Fully Distinguished Name (DN) of the landing zone container: cluster3LandingZone.cluster3.bcc
(the landing zone for this new cluster)
6.3. Configure firewall to allow the ports for the new BCC driver
6.4. Now migrate and start the new drivers
cluster1tocluster3BCCdriver and cluster3tocluster1BCCdriver
6.5. Set up cluster credentials
6.5.1. Set up cluster credentials from all the nodes of the old clusters, cluster1 and cluster2, to cluster3
Shown for wgp-dt82, one node of cluster1.
Make sure that you get the connection status OK. At this point, the cluster connections are fine.
Repeat this for all the other nodes of the clusters cluster1 and cluster2, and make sure the cluster connections are fine, as above.
6.5.2. From all the nodes of the new cluster, cluster3, to all the clusters, cluster1, cluster2, and cluster3
Now, on all the nodes of the new cluster, cluster3, do the following tasks: set up the credentials using the cluster credentials command, and verify the cluster connections using the cluster connections command.
In this setup I have only one server, wgp-dt89 in cluster3. So I have done this only in this node as shown below.
6.6. Synchronizing Identity Manager Drivers
If you are adding a new cluster to an existing business continuity cluster, you must synchronize the BCC-specific IDM drivers after you have created them. If the BCC-specific Identity Manager drivers are not synchronized, clusters cannot be used for BCC migration.
Synchronizing the IDM drivers is not necessary unless you are adding a new cluster to an existing BCC.
To synchronize the BCC-specific Identity Manager drivers, follow these steps:
6.7. Add cluster3 to the assigned list of the existing pools.
Steps:
6.8. Verify the BCC setup by migrating pools from cluster1 to the new cluster, cluster3.
At this point we are done with the BCC setup of three clusters, and migration of BCC-enabled cluster resources is also working. From here on, any pool that is created and BCC-enabled should be able to migrate from any cluster to any other cluster.