
Setting Up a Business Continuity Cluster (BCC) in a Single eDirectory Tree Using OES 2 Linux Servers


Table of Contents



Aim of this AppNote

BCC overview

Set Up Details

Installation Steps:

     1. Install NCS

     2. Install BCC in all the nodes of all the clusters

     3. Install IDM and create/configure bcc drivers

     4. Configure the clusters for BCC

     5. Verify that your BCC is working by doing BCC migration

     6. Add third cluster into the BCC



Aim of this AppNote


This AppNote provides all the steps required to set up a BCC using a demo setup. It also demonstrates how to migrate a resource from one cluster to another within the BCC, and how to add a new cluster to an existing BCC setup.
However, configuring the mirrored storage between the clusters is not covered, as it is vendor specific and left to the user. Instead, we create an iSCSI target that is shared with (visible to) all the clusters, which serves our requirement.



BCC overview



Novell® Business Continuity Clustering (BCC) offers corporations the ability to maintain mission-critical (24x7x365) data and application services to their users while still being able to perform maintenance and upgrades on their systems.



Novell BCC is a cluster of Novell Cluster Services (NCS) clusters in which cluster maintenance and synchronization are automated, allowing an entire site to do fan-out failover to multiple other sites. BCC uses eDirectory and policy-based management of the resources and storage systems.



Novell BCC software provides the following advantages:




  • Integrates with SAN hardware devices to automate the failover process using standards based mechanisms such as SMI-S.

  • Utilizes Novell Identity Manager technology to automatically synchronize and transfer cluster related eDirectory objects from one cluster to another.

  • Provides the capability to fail over as few as one cluster resource, or as many as all cluster resources.



Setting up BCC involves the following steps:




  • Configuring NCS

  • Configuring Mirrored Storage

  • Installing the BCC 1.2 Beta 3 Software on Every Node in Each Cluster

  • Installing and configuring Identity Manager 3.6 on One Node of Each Cluster

  • Synchronizing the BCC IDM Drivers (for new cluster in BCC)

  • Configuring the Clusters for BCC

  • BCC-Enabling Cluster Resources



Each step, except Configuring Mirrored Storage, is described in detail below. For this demo I am not using mirrored storage as such; refer to the Set Up Details section below for more detail.



Make sure that you read the section “Set Up Details” before you start the installation.



Requirements for BCC 1.2 Beta Test Environments:


Make sure that the OES 2 servers you plan to use for the BCC setup meet the following requirements:



  • SUSE® Linux Enterprise Server (SLES) 10 SP2 (shipping version)

  • OES 2 SP1 Beta 2 Linux with the options necessary for Novell Cluster Services installed.



Set Up Details



eDirectory Structure:



The eDirectory structure plays an important role in BCC functioning properly. If you do not follow the points below, you may end up with unpredictable behavior.




Click to view.



Fig. eDirectory structure for demo BCC set-up





  • Make sure that each cluster resides in its own OU. Each OU should reside in a different eDirectory partition.

  • Best practice is to put all the server objects, the cluster object, the driver objects and the Landing Zone that belong to a single cluster into a single eDirectory partition, as shown above.



The eDirectory structure used for our BCC setup is shown above. It has three OUs, cluster1, cluster2 and cluster3, for the three clusters, and each of them is a separate eDirectory partition. The first partition, cluster1, holds everything that belongs to cluster1: the landing zone (cluster1LandingZone), the server objects (wgp-dt82, wgp-dt83) for the two nodes of cluster1, the cluster object (cluster1) and cluster1’s BCC IDM driver set (cluster1Drivers). The same applies to cluster2’s partition, cluster2, and cluster3’s partition, cluster3.

IDM (Identity Manager ) and its requirements:



BCC 1.2 requires IDM 3.6 or later to run on one node in each of the clusters that belong to the BCC in order to properly synchronize and manage your BCC.



Make sure that the node where IDM will be installed holds a full eDirectory replica with at least Read/Write access to all eDirectory objects that will be synchronized between clusters.




  • Make sure that at least one of the nodes in each cluster runs the 32-bit OES 2 SP1 Linux OS, for the IDM installation.

  • Make sure that the IDM node has eDirectory Read/Write access on the corresponding partition. In the above diagram, wgp-dt82 is 32-bit and has Read/Write access to the partition cluster1. The same applies to wgp-dt84 and wgp-dt89 for the cluster2 and cluster3 partitions respectively.



Components Locations:



The above diagram also shows the BCC component locations, i.e. which software is installed on which server. For example, wgp-dt82 (32-bit) - NCS, BCC, IDM: NCS, BCC and IDM are installed on this server; wgp-dt83 - NCS, BCC: NCS and BCC are installed on this server and no IDM is installed. Read the other servers/nodes the same way. From here on, let us refer to the nodes where IDM will be installed (wgp-dt82, wgp-dt84, wgp-dt89) as the “IDM nodes”.



Servers set up:



As mentioned above, this setup has six OES 2 SP1 Linux servers (wgp-dt81, wgp-dt82, wgp-dt83, wgp-dt84, wgp-dt86, wgp-dt89). Five of them are grouped to form the three clusters cluster1, cluster2 and cluster3, and wgp-dt81 serves as the iSCSI target. Below is how our BCC setup looks.




Click to view.



cluster1: wgp-dt82, wgp-dt83

cluster2: wgp-dt84, wgp-dt86

cluster3: wgp-dt89

iSCSI target: wgp-dt81



Mirror storage set up:



Use whatever method is available to implement the mirrored storage between the clusters. The BCC 1.2 product does not perform data mirroring. You must separately configure either SAN-based mirroring or host-based mirroring.



Choosing and configuring the mirror storage is left to the user as it is vendor specific.



However, for this demo setup we will not be doing mirroring as such. Instead, we create an iSCSI target that is shared with (visible to) all the clusters. Hence, any modification, addition or deletion done on this shared device is seen and reflected in all the clusters of the BCC. This removes the need to mirror the storage among the clusters. We will use the server wgp-dt81 as our iSCSI target server.



This iSCSI target server has 4 raw (unformatted) partitions. Three partitions of 2GB each are used for NCS cluster-specific data, and one partition of 30GB is used as common storage, shared and visible to all the clusters. These partitions are exported as iSCSI targets with the iSCSI identifiers mentioned below for easy reference and identification. From here on, these partitions will be referred to by the corresponding iSCSI identifier. For example, the “cluster1sbd” partition means the partition that is created only for cluster1 and its related data, including the SBD partition.




  1. cluster1sbd: This partition is meant to be used only for cluster1, and the SBD partition for cluster1 is created on it. Hence it will be cluster sharable and connected by all the nodes in cluster1.

  • cluster2sbd: This partition is meant to be used only for cluster2, and the SBD partition for cluster2 is created on it. Hence it will be cluster sharable and connected by all the nodes in cluster2.

  • cluster3sbd: This partition is meant to be used only for cluster3, and the SBD partition for cluster3 is created on it. Hence it will be cluster sharable and connected by all the nodes in cluster3.

  • sharedDevice: This is the common storage visible to and shared by all the clusters. This means that all the nodes of all the clusters will connect to it, and it is cluster sharable.



Network details:



BCC is meant for clusters that are geographically separated (across a WAN). However, for this demo setup we will use servers on a single LAN and a single subnet, as shown in the above diagram.



Installation Steps:



Keeping all the notes mentioned above in mind, let us start with the installation. The first step of a BCC installation is to install Novell Cluster Services (NCS), so let us start with that.



1. Install NCS



1.1. Prepare the iSCSI target server: create partitions and export them as iSCSI targets



This section will differ if you are using another method to implement the mirrored storage. Follow this process only if you use the same method I am using here to set up BCC.



Let us create the 4 partitions on the iSCSI target server, wgp-dt81, as mentioned in the Set Up Details above. Below are the steps to do this with the YaST Partitioner; a command-line sketch follows these steps.



Steps:




  1. Login, click Computer, then YaST2, search for Partitioner and click the Partitioner icon, or type yast2 disk in the terminal.

  • Click Yes on the pop-up warning message to get the Expert Partitioner wizard.

  • Click on the Create button.

  • Select the device where the partitions need to be created and click OK.

  • Select Primary Partition as the Partition Type and click OK.

  • Select Do not format and enter the size in the box named End (2GB entered here), then click OK.

  • The partition you have just created now appears on the next page. It is highlighted in the clip below.

  • To create the next partition, click the Create button and repeat the same process until you get the required number of partitions. After creating all four partitions, my screen looks like the one shown below.



Click to view.



Note that /dev/sda3, /dev/sda4 and /dev/sdb1 of 2GB each and /dev/sdb2 of 30GB are created.



  • Now click Apply to confirm the partition creation.

  • Click Apply on the pop up message to complete the task.

  • To exit the Expert Partitioner wizard click the Quit button.
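
If you prefer the command line to the YaST Partitioner, the same layout can be created with parted. The sketch below is only illustrative: it assumes the free space lives on /dev/sda and /dev/sdb as in my setup, and the start/end offsets are placeholders you must adjust to your own disks.

# Inspect the current layout first
parted /dev/sda unit GB print
parted /dev/sdb unit GB print

# If a disk has no partition table yet, create one (example for /dev/sdb)
# parted -s /dev/sdb mklabel msdos

# Three 2GB raw partitions and one 30GB raw partition (offsets are placeholders)
parted -s /dev/sda mkpart primary 14GB 16GB    # becomes /dev/sda3 (cluster1sbd)
parted -s /dev/sda mkpart primary 16GB 18GB    # becomes /dev/sda4 (cluster2sbd)
parted -s /dev/sdb mkpart primary 0GB 2GB      # becomes /dev/sdb1 (cluster3sbd)
parted -s /dev/sdb mkpart primary 2GB 32GB     # becomes /dev/sdb2 (sharedDevice)

# Re-read the partition tables so the new devices show up
partprobe

Do not format these partitions; they are used raw, exactly as in the YaST steps above.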



Once you are done with the partitions, export them as iSCSI targets with the required identifier names as explained in Set Up Details, so that the other servers (called initiators in iSCSI terminology) can connect to them and use them as shared storage. (A command-line view of the resulting target configuration is sketched after the steps below.)



Below are the steps to do this.



Steps:




  1. Type yast2 iscsi-server in a terminal on the same iSCSI target server, wgp-dt81.

  • Click Continue on the pop-up message on the Initializing iSCSI Target Configuration page. This takes you to the iSCSI Target Overview page.

  • Select “When Booting” under Service Start and click the “Targets” tab on the iSCSI Target Overview page.

  • If a target name is already there, select it, click the “Delete” button and click “Continue” on the pop-up message to confirm the deletion.

  • Click the Add button on the same iSCSI Target Overview page.

  • Modify the Identifier field with an appropriate name for easy identification. Set the Identifier to “cluster1sbd” as explained above and then click Add.

  • Click the Browse button, select the 2GB partition (in my case it is sda3), click Open and then OK. You will then get the page below.


Click to view.


  • Click the Next button. This takes you to the Modify iSCSI Target page.

  • Click Next again if authentication is not used; if it is, the authentication parameters need to be configured here. Our BCC setup does not require authentication, so leave it disabled (the default) and click Next.

  • Click the Add button again and repeat the same process until all the partitions are listed as iSCSI targets with the unique identifiers you have given. After adding all the partitions, our page looks like this.


Click to view.


  • Now click Finish.

  • Click Yes on the pop-up message “Restart the iscsitarget service?” on the Saving iSCSI Target Configuration page.



This completes the preparation of the iSCSI target. The iSCSI target server, wgp-dt81, is now ready to accept iSCSI connections from the iSCSI initiators, i.e. the servers (wgp-dt82, wgp-dt83, wgp-dt84, wgp-dt86, wgp-dt89) that will form the clusters.
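
For reference, behind the YaST module SLES 10 uses the iscsitarget (IET) package, so the exported targets end up as Target sections in /etc/ietd.conf. The snippet below is only a sketch of what that configuration looks like: the iqn names are assumptions (YaST generates its own identifiers ending in the names we typed), and the device paths are the partitions created above.

# /etc/ietd.conf (sketch)
Target iqn.2008-09.com.example:cluster1sbd
        Lun 0 Path=/dev/sda3,Type=fileio
Target iqn.2008-09.com.example:cluster2sbd
        Lun 0 Path=/dev/sda4,Type=fileio
Target iqn.2008-09.com.example:cluster3sbd
        Lun 0 Path=/dev/sdb1,Type=fileio
Target iqn.2008-09.com.example:sharedDevice
        Lun 0 Path=/dev/sdb2,Type=fileio

If you edit this file by hand instead of using YaST, restart the target service afterwards (for example with rciscsitarget restart) so the new targets are exported.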



1.2. Prepare the servers that will be part of the clusters to connect to the corresponding iSCSI targets



After the target preparation, all the servers that will be part of the clusters need to establish iSCSI connections to the iSCSI target. While making the iSCSI connections, make sure that all the servers connect to the right iSCSI targets, as listed below.




  • The servers wgp-dt82 and wgp-dt83, which will be members of cluster1, should connect to two targets: 1. the target with identifier cluster1sbd and 2. the target with identifier sharedDevice.

  • Similarly, the servers wgp-dt84 and wgp-dt86, which will be members of cluster2, should connect to two targets: 1. the target with identifier cluster2sbd and 2. the target with identifier sharedDevice. The same applies to the server wgp-dt89, which will be a member of cluster3.



So, let us first go ahead with the initiator configuration for the servers wgp-dt82 and wgp-dt83, which will form cluster1. The same process can be repeated for the servers of the other clusters, making sure that they connect to the right targets as explained just above.



Let's start with wgp-dt82. Listed below are the steps to do this; an equivalent command-line sketch follows them.



Steps:




  1. Type yast2 iscsi-client in a terminal on the server wgp-dt82. This takes you to the Initializing iSCSI Initiator Configuration page.

  • Click Continue on the pop-up message to continue the installation of the open-iscsi package. This takes you to the iSCSI Initiator Overview page.

  • Select “When Booting” under Service Start and click the “Connected Targets” tab on the iSCSI Initiator Overview page.

  • Click the Add button to bring up the iSCSI Initiator Discovery page.

  • Type the IP address of the iSCSI target wgp-dt81 (164.99.103.81) in the IP Address field and click Next. (If you cannot see the exported iSCSI targets on the iSCSI Initiator Discovery page after clicking Next, check the important note below.)



Click to view.



Important Note: If, even after the iSCSI target configuration, you do not see the above page at all, the firewall settings of the iSCSI target server could be the culprit. A quick check is to disable the firewall on the iSCSI target server. If it works with the firewall disabled, re-enable the firewall and make sure that the “iSCSI Target” service is allowed through it.

You can do this as follows: log in to the target server as root, bring up the firewall configuration wizard by typing “yast2 firewall”, click Allowed Services, open the Service to Allow drop-down menu and select iSCSI Target, then click Add, click Next and click Accept to finish the configuration.


  • Select the iSCSI target with identifier cluster1sbd, click Connect and then Next. Now verify that the Connected state is True.

  • Now let us connect to the second iSCSI target, sharedDevice. Select the target with identifier sharedDevice, click Connect and then Next, and verify that its Connected state is also True. At this point our iSCSI Initiator Discovery page looks as shown below.



Click to view.



  • At this point wgp-dt82, the first server of cluster1, has connected to all the required iSCSI targets; we do not need to connect to any more. So let us continue with the remaining iSCSI configuration: click Next. This takes you back to the “Connected Targets” tab of the iSCSI Initiator Overview page.

  • Select the targets one by one and click the Toggle Start-Up button to change the Start-Up mode to automatic.

  • Click Finish to exit the iSCSI Initiator wizard and complete the initiator setup for wgp-dt82.

  • Now verify that all the connected disks are listed by the “lsscsi” command in a server terminal.


Click to view.




This completes the iSCSI connections to the corresponding iSCSI targets for the wgp-dt82 server, which will be part of cluster1.
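
The same connections can also be made from the command line with open-iscsi's iscsiadm tool. This is only a sketch; the iqn names below are placeholders, so take the exact target names from the discovery output.

# Discover the targets exported by wgp-dt81 (164.99.103.81)
iscsiadm -m discovery -t sendtargets -p 164.99.103.81

# Log in to the two targets this cluster1 node needs (replace the iqn names with the discovered ones)
iscsiadm -m node -T iqn.2008-09.com.example:cluster1sbd -p 164.99.103.81 --login
iscsiadm -m node -T iqn.2008-09.com.example:sharedDevice -p 164.99.103.81 --login

# Make both sessions start automatically at boot (equivalent of Toggle Start-Up)
iscsiadm -m node -T iqn.2008-09.com.example:cluster1sbd -p 164.99.103.81 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2008-09.com.example:sharedDevice -p 164.99.103.81 --op update -n node.startup -v automatic

# Verify that the new SCSI disks are visible
lsscsi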



Repeat this step, step 1.2, for the server wgp-dt83, which will be the other member of the same cluster, cluster1.



Repeat step 1.2 for all the other servers that will be members of the other clusters, cluster2 and cluster3. While making the iSCSI connections, make sure that each server connects to the right iSCSI targets as mentioned above.



This completes the initiator and target configuration. All the servers are now ready for setting up the Novell Cluster Services (NCS) clusters. Let us go ahead with the NCS configuration.



1.3. Configure NCS using Yast2


1.3.1. Initialize the devices that correspond to the connected iSCSI targets and make them cluster sharable



Once the iSCSI targets are connected, they are visible as devices in nssmu. They need to be initialized and made cluster sharable before they can be used for a cluster. Make sure that each device is initialized only once, from a single server connected to it, not from multiple servers.



Let’s initialize the devices that correspond to the iSCSI targets with identifiers cluster1sbd and sharedDevice, and make them cluster sharable, from wgp-dt82. This is reflected on all the servers connected to these targets, so it does not need to be repeated on the other servers.



The devices that correspond to the iSCSI targets with identifiers cluster2sbd and cluster3sbd are initialized in the same way from wgp-dt84 and wgp-dt89 respectively.



Let us start with the devices that correspond to cluster1sbd and sharedDevice, from wgp-dt82. Given below are the steps to do this.



Steps:




  1. Login to wgp-dt82 as root and invoke the NSS Management Utility by typing nssmu.

  • Select Devices from the main menu and press the Enter key.

  • Select the device that corresponds to the iSCSI target with identifier cluster1sbd (2GB) from the list of devices using the up/down arrows, press F3 to initialize the device, then type Y when the message pops up to confirm the initialization. Then press F6 to make the device cluster sharable.

  • Select the second device, which corresponds to the iSCSI target with identifier sharedDevice (30GB), press F3, then OK on the initialization confirmation, and then press F6.

  • To exit the NSS utility, press Esc multiple times.



At this point we are done with initializing and cluster-enabling the devices that correspond to the iSCSI targets cluster1sbd and sharedDevice. Now we are ready for the cluster setup and configuration, so let us go ahead with it.



1.3.2. Setting up the cluster: the NCS configuration



Let us start with wgp-dt82, which is going to be the first member of cluster1. It is assumed that the NCS packages are already installed but not yet configured. Let us do the configuration now. Given below are the steps, along with screenshots.



Steps:




  1. Login to wgp-dt82 and type “yast2 ncs” on the terminal to launch NCS configuration wizard.


Click to view.




Click to view.



  • Press Continue on the pop up message.


Click to view.


  • Enter the Admin password and click OK to proceed.


Click to view.


  • Select New Cluster, check both Directory Server Address entries, enter the FDN of the cluster (make sure to enter the correct context, to conform to the eDirectory partition requirements mentioned in the eDirectory structure section), the IP address of the cluster, and the storage device with shared media, sdc, the device that corresponds to the iSCSI target with identifier cluster1sbd (this is the device we noted during 1.3.1, Initialize the devices ...). Then click Next.


Click to view.


  • Click on Finish button to finish the configuration and exit the wizard


  • Verify that cluster is running in this server/node wgp-dt82, the first node of the cluster cluster1:



Click to view.


Cool, we are done with the first node of cluster1. Let us make the second server, wgp-dt83, join this cluster as the second member. To do this, follow the steps below.




  1. Launch the NCS configuration by typing yast2 ncs in the terminal of the second server, wgp-dt83. This takes you to the Initializing NCS configuration page.

  • Click Continue on the pop-up message on the Initializing NCS configuration page to continue the configuration. This takes you to the Novell Cluster Services Configuration page.

  • Enter the admin password and click OK to continue. This takes you to the page below.


Click to view.


  • Select Existing Clusters, check both Directory Server Address entries, enter the FDN of cluster1 and then click Next to proceed.

  • Click Finish to save the configuration and settings and exit the NCS configuration wizard.

  • Now verify that the cluster is running on this server and that this server is the second node of cluster1.


Click to view.




Cool, we are done with the setup and configuration of the first cluster, cluster1, and it is running fine.
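
If you also want to check this from the command line, the following is a quick sketch (the init script name is what I expect on OES 2; adjust it if it differs on your build):

# Is the NCS service loaded on this node?
rcnovell-ncs status

# Show the cluster name and the nodes that have joined
cluster view

# Show the cluster resources and where they are running
cluster status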



1.3.3. Follow the same steps, 1.3.1 (initialize the devices ...) and 1.3.2 (NCS configuration ...), to set up and configure the remaining clusters, cluster2 and cluster3.



At this point all our clusters, cluster1, cluster2, cluster3 are configured and running. So we are ready to proceed with the BCC Installation.



2. Install BCC in all the nodes of all the clusters



As mentioned in the setup notes on eDirectory, IDM and BCC component locations, BCC, like NCS, needs to be installed on all the nodes of the clusters that are part of the BCC. So let us now start installing it on one server, wgp-dt82. For the other servers, we can repeat the same process.



2.1. Add the BCC admin user(s), bccadmin (can be any name). This is optional: if you wish to use a separate user to administer the BCC, go ahead with this step; otherwise skip it, as the eDirectory admin can also be used to manage the BCC.




  1. In iManager click on Users and then Create Users.


Click to view.


  • Fill in the required values and click on OK to create the user.

  • Click on OK to complete the task



2.2. Create the BCC group bccgroup, all lower case (hard-coded as of BCC 1.2)



Given below are the steps to do this.


Steps:




  1. In iManager, click on Groups then Create Group.


Click to view.


  • Fill in the values in the required fields. Make sure to select the proper context, wherever applicable.

  • Click on OK to create the group. This will take you to the completion page.

  • Click OK to exit the wizard.



2.3. Now add the BCC admin user(s), bccadmin, as members of the group bccgroup.



Given below are the steps to do this.



Steps:




  1. In iManager click on Groups then Modify Groups.

  • Click on object selector button.

  • From the object selector page, browse for the group and select the group by clicking on the group name, bccgroup. This will close the Object selector pop-up window.

  • Then click OK to modify the object.

  • Click on the Members tab then click on object selector button to bring up the object selector window.

  • Select the user, bccadmin from the Object selector browser and click on OK to complete the selection. This will close the object selector pop-up window.

  • Click on Apply to save the changes and then OK to complete the task.


Click to view.




2.4. LUM-enable the group bccgroup and include all the workstations (nodes) of all the clusters


Given below are the steps to do this.



Steps:




  1. In iManager, click on Linux User Management then Enable Groups for Linux.

  • Click on object selector button to bring up the object selector browser on Step 1 of 2: Select groups page.

  • Select the group, bccgroup and click on OK to complete the selection. This will close the object selector browser.

  • Click on Next>> button on Step 1 of 2: Select groups page.

  • Click on Next>> button on Step 1a of 2: Confirm Selected Groups. This will take you to Step 2 of 2: Select Workstations page.

  • Click on the Object Selector button to bring up the Object Selector browser for selecting the servers.

  • Browse and select all the servers (nodes) of all the clusters and click OK to complete the selection. This closes the Object Selector browser and takes you back to the Step 2 of 2: Select Workstations page.


Click to view.



  • Click on Next>>. This will take you to Summary page.

  • Click on Finish on the Summary page to complete the task.

  • Click on OK on Complete: Success page to exit the page.



2.5. Add the BCC admin user(s), bccadmin, as a trustee of all the cluster objects (this is not required for the eDirectory tree admin)

Given below are the steps to do this.



Steps:




  1. In iManager, click on Rights then Modify Trustees.

  • Click on object selector button

  • Browse and select the cluster object ,cluster1 by clicking on the object name.


Click to view.


  • Click on OK

  • Click on Add Trustee button to bring up the object selector browser.

  • Select the user bccadmin from the object selector browser and click OK to complete the selection.

  • Now click on Assigned Rights link

  • Modify the Assigned Rights as per the requirement and click on the Done button. Let us give full access for bccadmin.

  • Then click on the Apply button to save the changes

  • Click OK on the Complete: Modify Trustee successful page to exit the page.



2.6. Repeat the same process (step 2.5: Add the BCC admin user(s) as trustees) for the other cluster objects, cluster2 and cluster3



2.7. Add the BCC admin user(s), bccadmin, to the ncsgroup by editing the /etc/group file.



Below is how it can be done. Log in to the server as root, open the /etc/group file and find either of the following lines:



ncsgroup:!:107:

or

ncsgroup:!:107:bccd



The file should contain one of the above lines, but not both.



Depending on which line you find, edit the line to read as follows:


ncsgroup:!:107:bccadmin,<other users separated by comma if any>

or

ncsgroup:!:107:bccd,bccadmin,<other users separated by comma if any>



Replace bccadmin with the BCC Administrator user you created.


Notice the group ID number of the ncsgroup. In this example, the number 107 is used. This number can be different for each cluster node.



Let us start with wgp-dt82, the 1st node of cluster1. Let us put bccadmin and admin in the ncsgroup. Below is how it can be done.




Click to view.



Save the /etc/group file.



Execute id <bcc admin user name> and verify that ncsgroup appears as a secondary group of the BCC Administrator user(s).




Click to view.
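
A quick command-line check of the result (a sketch; run it on each node you edited):

# Confirm the ncsgroup line now contains the BCC admin user(s)
grep '^ncsgroup:' /etc/group

# ncsgroup should be listed among each user's groups
id bccadmin
id admin

# Note: if the BCC admin user resolves on the node through LUM, a command such as
#   gpasswd -a bccadmin ncsgroup
# may also work instead of editing /etc/group by hand, but that is an assumption;
# the manual edit shown above is what I used.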



2.8. Repeat step 2.7 in all the other servers of all clusters.



2.9. Download the BCC software and install the packages



Now let us download the BCC software in all the servers. Let us start with wgp-dt82.




  1. Login to wgp-dt82 as root user and open a terminal and type yast2 add-on.

  • Select the method to download the package (rpms) on Add-on Product Media page. We used HTTP so select HTTP and click on Next. This takes you to the next page.



Click to view.



  • Type the server name and location of the corresponding packages. This brings up the License Agreement page.

  • Select Yes I Agree to the License Agreement and click Next. This will take you to the next page.


Click to view.


  • Click on the Filter drop-down list and select Patterns from the menu.


Click to view.


  • Now Novell Business Continuity Cluster will be shown under the Additional Software section. Check that checkbox to select all the rpms (the novell-business-continuity-cluster.rpm, yast2-novell-bcc.rpm and novell-business-continuity-cluster-idm.rpm packages).

    Note: novell-business-continuity-cluster-idm.rpm is mandatory for the node where IDM will be installed and optional for the other, non-IDM, nodes. So I select all the rpms for all nodes to keep things simple.






Click to view.



  • Click on Accept button to install the packages.

  • Click No on the “Install or remove more packages?” pop-up message to complete and exit the package installation (a quick package check follows).
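
To confirm from the command line that the packages actually landed on the node, query rpm for the three package names (drop the .rpm suffix when querying; this is just a sketch):

rpm -q novell-business-continuity-cluster yast2-novell-bcc novell-business-continuity-cluster-idm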



2.10. Configure BCC in all the nodes of the clusters



Now we can start BCC configuration. Let us start with wgp-dt82 of cluster1.




  1. Login to the server, wgp-dt82 as root and type yast2 novell-bcc in the server terminal to launch BCC configuration.


Click to view.




Click to view.



  • Click Continue if prompted to configure LDAP.



Click to view.



  • Specify the eDirectory admin password and click OK.


Click to view.



  • Select/check both Directory Server Address entries and click Next. This takes you to the Novell Business Continuity Cluster (BCC) Configuration Summary page.

    Note: The already existing cluster will have been entered for you in the Existing Cluster DN field, so we do not need to modify anything. However, make sure that the “Start Business Continuity Cluster Service Now” option is checked so that the BCC software starts once the installation is completed.




  • Click Next to install BCC and then click Finish to save and complete the BCC configuration and exit the wizard.

  • Verify that the BCC software is now running on the server (see the command-line check below).


Click to view.
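
A command-line spot check is also possible. The service script name below is an assumption for OES 2 SP1; if it does not exist on your build, simply check for the BCC daemon process instead.

# Check whether the BCC daemon (bccd) is running
ps -ef | grep -i bccd

# Init-script status check (script name is an assumption; adjust if needed)
rcnovell-bcc status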





2.11. Repeat steps 2.9 to 2.10 in all the nodes of all the clusters.



3. Install IDM and create/configure bcc drivers



3.1. Install IDM on one node (32-bit server) in each cluster



We need to install IDM on all the IDM nodes (wgp-dt82,wgp-dt84,wgp-dt89).



To start with, let us install IDM on wgp-dt82, the IDM node of cluster1.



Steps are given below:




  1. Login to wgp-dt82, open a terminal, loop-mount the downloaded ISO and start the provided installation script, install.bin, as shown below (a command-line sketch of this step follows the list).

    Note: In the screen below, the downloaded IDM 3.6 ISO is “Identity_Manager_3_6_Linux.iso”.





Click to view.




Click to view.



  • Select the language and Click OK. This will take you to License Agreement page:

  • Select I Accept the terms of the License agreement and click Next on License Agreement page. This will take you to the Select Components page.

  • On the Select Components page just click Next to go with the default options; the defaults are enough.

  • Click OK on the Identity Manager Activation Notice message to install with the 90-day trial period (it can be activated later). This takes you to the Authentication page.

  • On Authentication page specify the eDirectory admin DN and password and click on Next to get Pre-Installation Summary page.


Click to view.


  • Click Install on the Pre-Installation Summary page and wait until it completes and the Install Complete page appears. It takes some time.

  • Click Done on the Install Complete page to complete the installation and exit the wizard. Note that this page prompts you to restart the application server.

  • Restart the application server (Tomcat) as prompted on the Install Complete page. To do this, login as root to the server where IDM is installed and restart Tomcat as shown below.



Click to view.


  • Now verify that the IDM plug-ins are installed in iManager on the same server. The plug-ins are shown in the red box in the clip below for reference.



Click to view.
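
The loop mount and installer launch from step 1 look roughly like this on the command line. The ISO path, mount point and the Tomcat service name are assumptions for this sketch; adjust them to your system.

# Mount the downloaded IDM 3.6 ISO
mkdir -p /mnt/idm
mount -o loop Identity_Manager_3_6_Linux.iso /mnt/idm

# Locate the installer on the mounted ISO, then execute install.bin
# from the directory the following command reports
find /mnt/idm -name install.bin

# After the installer finishes, restart Tomcat so the iManager plug-ins load
# (service name is an assumption; it is commonly novell-tomcat5 on OES 2)
rcnovell-tomcat5 restart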





3.2. Repeat step 3.1 for the other two IDM nodes of the other two clusters cluster2 (wgp-dt84) and cluster3 (wgp-dt89)



3.3. Configure/Create BCC IDM drivers



Let us first configure BCC for two clusters (cluster1 and cluster2) and then join the third cluster, cluster3, to it. This is because BCC driver creation and configuration require some special care when we deal with three or more clusters. So, for now, let us set aside the BCC IDM driver configuration for the third cluster, cluster3; we will come back to it in step 6.



3.3.1. Create/Configure BCC driver for cluster1 to sync with cluster2



To do this, login to iManager on one of the IDM nodes where the IDM plug-ins are installed. Let us use wgp-dt82's iManager.




  1. In iManager, click Identity Manager > Identity Manager Overview.


Click to view.


  • click Driver Sets > New.



Click to view.



  • Specify the driver set name (cluster1Drivers) and click on the object selector button to browse and select the context for this driver set (make sure to select the correct context, as mentioned in the “Set Up Details” section at the beginning of this document).



Click to view.



  • Uncheck the “Create a new partition on this driver set” option, click OK on the pop-up message, and then click the OK button to complete the driver set creation. This takes you to the Driver Set Overview page.



Click to view.



  • On the Driver Set Overview page, click Drivers.



Click to view.



  • Click on Add Driver from the pop-up menu.



Click to view.



  • Verify that the driver set you just created is selected in the existing driver set text box (select it if it is not selected automatically), then click Next.



Click to view.



  • Click on the object selector button to specify the DN of the server in this cluster that has IDM 3.6 installed on it. It is wgp-dt82 for this set up so select this server.



Click to view.



  • Click on Next.



Click to view.



  • Click on the Show drop down menu and select All Configurations.



Click to view.



  • Select the BCCClusterResourceSynchronization.xml file in the Configurations drop-down menu.



Click to view.



  • Click Next.

  • On the next page of Import Configuration page provide the following values:



Driver name: cluster1toCluster2BCCdriver

The name can be anything but should be unique; I have used cluster1toCluster2BCCdriver.



Name of SSL Certificate: SSL CertificateDNS

This certificate can be seen as follows: click View Objects and click the Organization object; in the right-side pane you can see the certificate object with this name.



DNS name of other IDM node: 164.99.103.84

This is the IP address of the IDM server with which this driver will synchronize, i.e. the IDM node of cluster2, wgp-dt84, in our BCC setup.


Port number for this driver: 2002

If you have a business continuity cluster that consists of three or four clusters, you must specify unique port numbers for each driver template set. The default port number is 2002; I have left the port at this default value.



Full Distinguished Name (DN) of the cluster this driver services: cluster1.cluster1.bcc

Specify it, or just browse using the object selector button and select the current cluster, cluster1, here.



Fully Distinguished Name (DN) of the landing zone container: cluster1LandingZone.cluster1.bcc

This is the container where the cluster pool and volume objects of the other cluster are placed when they are synchronized to this cluster. The NCP server objects for the virtual server of a BCC-enabled resource are also placed in the landing zone. I had already created a container called cluster1LandingZone (refer to the eDirectory structure) and have selected it as the landing zone for cluster1.




Click to view.




Click to view.



  • Click on Next.



Click to view.



  • Once the Next button is clicked, it will start importing the configuration as shown above. It takes a few minutes. So wait, do not do anything till the next page (shown below) comes up.



Click to view.



  • Click on Define Security Equivalences to bring up Security Equals wizard.



Click to view.



  • Click Add.



Click to view.



  • Now browse and select the desired User object(s) then click OK. Here I have selected users bccadmin and admin, the eDirectory admin.



Click to view.




  • Click Apply and then OK and come back to the Import Configuration page.




Click to view.




  • Click Next.



Click to view.



  • Click Finish to complete the configuration and exit this IDM configuration Wizard and get the next page-Driver Set Overview:



Click to view.





3.3.2. Create/Configure BCC driver for cluster2 to sync with cluster1




  1. In iManager, click Identity Manager > Identity Manager Overview.

  • Click on New below Driver Sets.

  • In the Create Driver Set wizard, specify the driver set name as cluster2Drivers, specify the context, and deselect (disable) the Create a new partition on this driver set option, then click OK. This takes you to the Driver Set Overview page.

  • On the Driver Set Overview page, click Drivers under Overview.

  • Click Add Driver from the pop-up menu. This takes you to the Import Configuration page.

  • Verify that the driver set you just created, i.e. cluster2Drivers, is specified in the existing driver set text box, then click Next.

  • Specify the DN of the server in this cluster (cluster2) that has IDM 3.6 installed on it, wgp-dt84, then click Next.

  • Open the Show drop-down menu and select All Configurations, then select the BCCClusterResourceSynchronization.xml file in the Configurations drop-down menu and click Next.

  • On the next page of Import Configuration page provide the following values:

    Driver name: cluster2toCluster1BCCdriver

    Name of SSL Certificate: SSL CertificateDNS

    DNS name of other IDM node: 164.99.103.82, the IP address of wgp-dt82, the cluster1 node

    (this is the cluster1 server where IDM is installed and with which the cluster2 driver should sync)

    Port number for this driver: 2002, the default value

    Full Distinguished Name (DN) of the cluster this driver services: cluster2.cluster2.bcc

    Fully Distinguished Name (DN) of the landing zone container: cluster2LandingZone.cluster2.bcc


Click to view.





Click to view.



  • Click on Define Security Equivalences.

  • Click Add and browse and select the user objects bccadmin and admin, then click OK.

  • Click Apply and then OK on Security Equals wizard.

  • Click Next and then Finish on Import Configuration page. Now you will see this driver in the Driver Set Overview page.




Click to view.



3.3.3. Configure the firewall to allow the BCC driver ports, if a firewall is enabled.



Make sure that the BCC driver ports are allowed through the firewall if a firewall is enabled on the IDM nodes. Let us start with the IDM node of cluster1, wgp-dt82. Follow the steps below; a non-interactive sketch follows them.



Steps:




  1. Login to wgp-dt82 as root and type “yast2 firewall” in the terminal to launch the Firewall Configuration: Start-Up page.

  • Click the Allowed Services link in the left column of the page.

  • Click Advanced... at the bottom right to bring up the Additional Allowed Ports page.

  • Under TCP Ports, add the driver port(s), 2002, and click OK.

  • Click Next and then Accept.
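
The same port can be opened non-interactively by editing the SuSEfirewall2 configuration; this is a sketch, so verify the variable name in /etc/sysconfig/SuSEfirewall2 on your server.

# /etc/sysconfig/SuSEfirewall2 - allow the BCC driver port(s) on the external zone
FW_SERVICES_EXT_TCP="2002"      # add 2003 here later when the third cluster's driver is created (step 6)

# Reload the firewall so the change takes effect
rcSuSEfirewall2 restart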



3.4. Upgrade the drivers to use new enhanced IDM architecture and start the drivers.



Once you have created drivers for each cluster, you must upgrade them to the IDM 3.6 format, the enhanced architecture.


Follow the below steps to do this.




  1. In iManager, click Identity Manager > Identity Manager Overview.



Click to view.



  • Click on the driver set link to bring up the Driver Set Overview. First I clicked on cluster1Drivers link



Click to view.



  • Click on the red Cluster Sync icon and you should be prompted to upgrade the driver to new enhanced architecture.



Click to view.



  • Click OK to upgrade the driver to use new enhanced IDM architecture.



Click to view.



  • Now let us start the driver. To do this click on the upper right corner of the Cluster Sync icon



Click to view.



  • Click on Start driver from the pop-up menu.



Click to view.




Now the color of the upper right corner of the Cluster Sync icon should change to green, which means that this driver has started and is running.

Repeat step 3.4 for the cluster2 BCC driver, cluster2toCluster1BCCdriver, by clicking the driver set name “cluster2Drivers” on the Identity Manager Overview page.

4. Configure the clusters for BCC



4.1. Enable BCC on each clusters




  1. In iManager, click on Clusters, then click the Cluster Options link.

  • Specify the cluster name, cluster1.cluster1.bcc or just click on object selector button and browse and select cluster1 object.



Click to view.



  • Click the Properties button and click on the Business Continuity tab.





Click to view.



  • Check (enable) the Enable Business Continuity Features check box.



Click to view.



  • Click OK to confirm.



Click to view.



  • Click Apply and then OK to save the changes and complete the task.



Repeat step 4.1 for the second cluster, cluster2.

4.2. Adding Cluster Peer Credentials



In order for one cluster to connect to a second cluster, the first cluster must be able to authenticate to the second cluster. To make this possible, you must add the username and password of the user that the selected cluster will use to connect to the selected peer cluster.



As of BCC 1.2, this can be done only through the command line (a known issue in BCC 1.2).



Below is how we can do it.



At the terminal console prompt, enter “cluster connections”.



[You should see both the clusters,cluster1 and cluster2. If all the clusters are not present, then either the IDM drivers are not synchronized or BCC is not properly enabled on the clusters. If synchronization is in progress, wait for it to complete, then try cluster connections again.]



For each cluster in the list, type “cluster credentials <cluster name> “ at the server console prompt, then enter the BCC admin username bccadmin or admin and password when prompted. I have used “admin” here. This is shown below.




Click to view.
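
Put together, the console session on a cluster1 node looks roughly like this (cluster names as used in this setup):

# List the peer clusters this node knows about
cluster connections

# Store the credentials used to authenticate to the peer cluster
cluster credentials cluster2
# (enter the BCC admin user name, admin or bccadmin, and its password when prompted)

# Run the listing again and confirm the connection status is now OK
cluster connections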



4.3. Verify the Cluster connections


Now you should see that the connection status is fine, as shown below.




Click to view.



Repeat steps 4.2 and 4.3 on all the nodes of all the clusters, and make sure that all the clusters are listed by the cluster connections command and that the connection status is OK.



Note: At this point, all other nodes would be showing as “invalid credentials” as we have not set the credentials yet.


4.4. BCC enable the cluster resources



Create one pool and a volume in it using nssmu. Make sure that you create the pool from the shared device/partition (30GB). Let us do this on the server wgp-dt82, a node of cluster1.




  1. Login to wgp-dt82 as root and open a terminal and type nssmu to launch NSS Management Utility

  • Select Pools from the Main menu and press Enter key.

  • Press Insert key to create a new pool.

  • Enter the Pool name,POOL1 and press Enter key.

  • Select the device from which the specified pool will be created. Make sure you select the shared device (sharedDevice).

  • Specify the size of the pool and press Enter.

  • Assign the IP address of the pool and select Apply and then press Enter. This completes the pool creation on the shared device. Now to create volume in this pool follow next step.

  • Now press Esc key to come back to main menu of NSS Management Utility

  • Now select the Volumes from the main menu and press Enter.

  • Press Insert key

  • Enter the name of the volume, pool1vol1.

  • On the Encrypt Volume? message, type Y or N as per your choice; I chose N. Select the pool POOL1 you have just created and press Enter. This completes the pool and volume creation through nssmu (screenshot follows).

  • Press Esc and then Esc to exit the nssmu



Now let us BCC enable the same pool.



Steps:




  1. Login to iManager and, in the left column, click Clusters, then click the Cluster Manager link and specify the cluster name cluster1.cluster1.bcc, or browse and select the cluster object cluster1. [Note: Make sure to select the cluster where the pool was created. Here I had created the pool POOL1 on wgp-dt82, which is a node of cluster1, so I have selected cluster1.]


Click to view.



  • Click on the pool name, POOL1_SERVER here. This brings up the Cluster Pool Properties page.

  • Click on the Business Continuity tab and check (enable) the Enable Business Continuity Features check box (screenshot follows). [Note: Make sure that the appropriate clusters are listed in the Assigned list. Here you should see cluster1 and cluster2.]



Click to view.



  • Click OK on the Pop-up message.

  • Click OK again to finish the task.



5. Verify that your BCC is working by doing BCC migration



Steps:




  1. In iManager, click Clusters, then click the BCC Manager link and specify the cluster name, cluster1.cluster1.bcc, or browse and select the same cluster.



Click to view.



  • Verify that the pool, POOL1_SERVER is seen under BCC Enabled Resources section.


Click to view.



  • Select the POOL1_SERVER and then click BCC Migrate.



Click to view.



  • Select the cluster where you want to migrate the selected resource, then click OK. I have selected cluster2. [Note: If you select Any Configured Peer as the destination cluster, the Business Continuity Clustering software chooses a destination cluster for you. The destination cluster that is chosen is the first cluster that is up in the peer clusters list for this resource.]

  • Wait till its state becomes Red and Secondary in the current cluster, cluster1 (screenshot below).



Click to view.



  • Now verify that the same pool is now running in the target cluster, cluster2. To check this, select or type the cluster cluster2.cluster2.bcc in the Cluster field (screenshot follows). If its state is green and running in this target cluster, then your BCC is working.



Click to view.




This completes the setup of the two-cluster BCC and the demo of BCC migration.



6. Add third cluster into the BCC



If you have three or more clusters in your business continuity cluster, you should set up synchronization drivers in a manner that prevents IDM loops. IDM loops can cause excessive network traffic and slow server communication and performance.



In this setup, cluster1's driver set, cluster1Drivers, will accommodate two BCC drivers:


  1. cluster1tocluster2BCCdriver which runs on port number 2002 to sync with cluster2 BCC driver, cluster2tocluster1BCCdriver which runs on port number 2002 as shown in the diagram below.

  • cluster1tocluster3BCCdriver which runs on port number 2003 to sync with cluster3 BCC driver, cluster3tocluster1BCCdriver which runs on port number 2003 as shown in the diagram below.




Click to view.



There should not be any direct synchronization between cluster2 and cluster3, in order to avoid an IDM loop. If one more cluster needs to be added, you can configure its driver to sync with any of the existing drivers, as long as no loop is formed.



6.1. Create one more BCC driver, cluster1tocluster3BCCdriver to sync with new cluster, cluster3, in cluster1’s existing driver set, cluster1Drivers.




  1. Login to iManager of one of the IDM node and click Identity Manager > Identity Manager Overview.

  • Click on the driver set link, cluster1Drivers below Driver Sets tab.

  • On the Driver Set Overview page, click Drivers under Overview.

  • Click Add Driver from the pop-up menu. This takes you to Import Configuration page.

  • Verify that the driver set you just selected i.e. cluster1Drivers is specified in an existing driver set text box, then click Next.

  • Open the Show drop down menu and select All Configurations and select the BCCClusterResourceSynchronization.xml file in the Configurations drop-down menu, then click Next. On the next page of Import Configuration page provide the following values (screenshots follow):



    Driver name: cluster1toCluster3BCCdriver.

    Name of SSL Certificate: SSL CertificateDNS

    DNS name of other IDM node: 164.99.103.89, the IP address of wgp-dt89, the IDM node of the new cluster, cluster3.

    Port number for this driver: 2003

    Full Distinguished Name (DN) of the cluster this driver services: cluster1.cluster1.bcc

    Fully Distinguished Name (DN) of the landing zone container: cluster1LandingZone.cluster1.bcc



Click to view.





Click to view.



  • Click on Define Security Equivalences.

  • Click Add and browse and select the user objects bccadmin and admin, then click OK.

  • Click Apply and then OK on Security Equals wizard.

  • Click Next and then Finish on Import Configuration page



Now you will see this new driver, cluster1toCluster3BCCdriver in Driver Set Overview page.




Click to view.




6.2. Create one more driver set,cluster3Drivers for new cluster,cluster3 with a driver, cluster3tocluster1BCCdriver in it to sync with cluster1



Repeat step 3.3.1. Make sure that the following values are entered; values other than the ones mentioned below are the same for all the driver configurations.




  1. Give the name of the new driver set as cluster3Drivers.

  • In the “Welcome to the Import Configuration Wizard” page give “cluster3Drivers.cluster3.bcc” in “In an existing driver set “ field.

  • In the “Import Configuration” page enter “wgp-dt89.cluster3.bcc” in the “Select a server to define its association:” as this is the IDM installed cluster3 node.

  • In the continuation of “Import Configuration” page, enter the following fields:

    Driver name: cluster3tocluster1BCCdriver

    Name of the SSL certificate: SSL CertificateDNS

    DNS name of the other IDM node: 164.99.103.82

    (the IP address of wgp-dt82, the IDM installed node of cluster1,
    with which this driver will sync with )

    Port number of this driver: 2003

    Full Distinguished Name (DN) of the cluster this driver services: cluster3.cluster3.bcc
    (the DN of this new cluster )

    Fully Distinguished Name (DN) of the landing zone container: cluster3LandingZone.cluster3.bcc

    (the landing zone for this new cluster)





6.3. Configure firewall to allow the ports for the new BCC driver



Follow the same procedure mentioned in step 3.3.3 with the new port number(s) 2003.

6.4. Now migrate and start the new drivers



cluster1tocluster3BCCdriver and cluster3tocluster1BCCdriver



Repeat step 3.4 for both the new drivers, cluster1tocluster3BCCdriver and cluster3tocluster1BCCdriver.

6.5. Setup cluster credentials



6.5.1. Setup cluster credentials from all the nodes of the old clusters, cluster1 and cluster2 to cluster3



Do the following tasks from all the nodes of the clusters cluster1 and cluster2 (shown in the clip: setting up the credentials using cluster credentials, and verifying the cluster connections using the ‘cluster connections’ command).

This is shown for wgp-dt82, one node of cluster1.




Click to view.



Make sure that you get the connection status OK. At this point, cluster connections are fine.



Repeat this for all other nodes of the clusters,cluster1 and cluster2 and make sure cluster connections are fine as above.



6.5.2. From all the nodes of the new cluster,cluster3 to all clusters, cluster1, cluster2, cluster3



Now in all the nodes of new cluster, cluster3 do the following tasks - setting up the credentials using cluster credentials and verification of the cluster connections using ‘cluster connections’ command.



In this setup I have only one server, wgp-dt89 in cluster3. So I have done this only in this node as shown below.




Click to view.



6.6. Synchronizing Identity Manager Drivers



If you are adding a new cluster to an existing business continuity cluster, you must synchronize the BCC-specific IDM drivers after you have created the BCC-specific IDM drivers. If the BCC-specific Identity Manager drivers are not synchronized, clusters cannot be used for BCC migration.



Synchronizing the IDM drivers is not necessary unless you are adding a new cluster to an existing BCC.



To synchronize the BCC-specific Identity Manager drivers follow the following steps:




  1. In iManager, click Identity Manager, then click the Identity Manager Overview.


Click to view.



  • Click on the driver sets link for the new cluster, cluster3Drivers under Driver Sets tab.


Click to view.



  • Click the red Cluster Sync icon for the driver you want to synchronize.



Click to view.



  • Click Migrate, seen on the rightmost panel (Driver Overview).



Click to view.



  • Click Migrate from Identity Vault in the drop-down menu.



Click to view.



  • Click Add



Click to view.



  • Browse and select the Cluster object for the new cluster ,cluster3 you are adding to the BCC, then click OK.

    Note: Selecting the Cluster object for driver synchronization causes the BCC-specific Identity Manager drivers to synchronize.





Click to view.


  • Click Start to start synchronization.



Click to view.



  • Click on Close to complete the task.



6.7. Add cluster3 to the Assigned list of the existing pools.


Steps:




  1. Login to iManager on any server in the BCC, then click Clusters > Cluster Options.

  • Specify, or browse and select, any one of the clusters. I have selected cluster1.cluster1.bcc.

  • Click on the name link of the existing pool of interest. This brings up the Cluster Pool Properties page.

  • Click on the Business Continuity tab and verify that the new cluster, cluster3, is shown as an unassigned cluster under the Resource Preferred Clusters section (screenshot follows).



Click to view.



  • Now select cluster3 in the Unassigned list and click the left-arrow button to move it to the Assigned list (screenshot below).



Click to view.



  • Then click Apply and then OK to complete the task.

  • Verify that the new cluster, cluster3 is shown in the Available Peers list in Business Continuity Cluster Manager page (iManager > cluster > BCC Manager)



6.8. Verify the BCC setup by migrating the pool from cluster2 to the new cluster, cluster3.




  1. Login to iManager on one of the IDM nodes and click Clusters > BCC Manager. This brings up the Business Continuity Cluster Manager page.

  • In the Business Continuity Cluster Manager page, specify any cluster in the Cluster field. I have selected cluster2.cluster2.bcc, where POOL1 is currently running (screenshot follows).



Click to view.



  • Under BCC Enabled Resources section select the existing pool POOL1_SERVER and click BCC Migrate. This brings up BCC Migrate page

  • Select cluster3 under Cluster Destination section and click OK to migrate POOL1_SERVER from cluster2 to cluster3.

  • Verify that the state of the pool POOL1_SERVER changed to Secondary in cluster1 and cluster2 and to Running in the destination cluster, cluster3 (screenshot follows for cluster3).



Click to view.






At this point we are done with the BCC setup of three clusters, and migration of BCC-enabled cluster resources is also working. From here on, any pool that is created and BCC-enabled should be able to migrate from any cluster to any other cluster.

