
Deploying NetIQ Access Manager in AWS for High Availability and Fault Tolerance


With the NetIQ Access Manager 4.4 SP1 release, NAM is officially supported for deployment in the leading public clouds, AWS and Azure.

Customers can utilize the wide range of infrastructure options available in the cloud to deploy NAM for high availability and fault tolerance.

In this cool solution, I will explain how to deploy NAM in AWS using AWS Regions, Availability Zones, a VPC, and subnets for high availability and fault tolerance.

Some AWS terminology used in this cool solution:

AWS Region: A geographical location for AWS services. A specific Region can be selected based on factors such as cost, latency, and regulatory requirements.

AWS Availability Zone: Each AWS Region provides two or more completely independent Availability Zones connected through low-latency links.

AWS Virtual Private Cloud (VPC): A virtual network in the AWS cloud that is logically isolated from other virtual networks in the AWS cloud. A VPC spans a single AWS Region.

AWS Subnet: Subnets are logical divisions of a VPC created for different purposes. Each subnet resides within a single Availability Zone.

Following are the high-level steps for deploying NAM in AWS for high availability and fault tolerance:

    1. Select the Region for the NAM deployment.

    2. Create the Access Manager VPC with a specific CIDR block.

    3. Design and create the required subnets for the Access Manager components.

    4. Deploy the NAM components in the respective subnets.



The above steps are illustrated in detail in the following sections.

[1] Select the Region for the NAM deployment:

Log in to the AWS console. Choose the Region list to the right of your account information on the navigation bar.



For the purpose of this solution, we have selected the US East (Ohio) region. All the AWS services will be deployed in this region.
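If you prefer to confirm the choice programmatically, the following minimal boto3 sketch lists the Availability Zones available in the selected Region (us-east-2 is the API name for US East (Ohio)). This is only an illustration and assumes AWS SDK credentials are already configured.

```python
import boto3

# Assumes AWS credentials are already configured (e.g. via `aws configure`).
# us-east-2 is the API name of the US East (Ohio) Region.
ec2 = boto3.client("ec2", region_name="us-east-2")

response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)
for az in response["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```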

[2] Create Access Manager VPC with a specific CIDR block:

    1. Log in to the AWS console.

    2. In the services search, find and click the VPC service. Click Your VPCs in the side panel. The first time, you will see only the default VPC in the list of VPCs.

    3. Click Create VPC.

    4. In the Create VPC dialog box, provide the Name tag and the IPv4 CIDR block for the new VPC as shown below, and click Yes, Create.






With this, you will have the new VPC in the list of VPCs as shown below.
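If you prefer to script this step instead of using the console, the boto3 sketch below is a minimal equivalent of the console steps above. The 10.0.0.0/16 CIDR block is only an example value; substitute the block you entered in the Create VPC dialog.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Example CIDR block; use the IPv4 CIDR block you chose for the VPC.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Tag the VPC so it appears as "nam-vpc" in the list of VPCs.
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "nam-vpc"}])

# Wait until the VPC is available before creating subnets in it.
ec2.get_waiter("vpc_available").wait(VpcIds=[vpc_id])
print("Created", vpc_id)
```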



[3] Design and create the required subnets for the Access Manager components:

We can leverage the different Availability Zones available in the AWS Region for the NAM deployment by dividing the VPC into different subnets.

As a security best practice, it is recommended to place the NAM components in two high-level subnet tiers:

    • nam-public-subnet: An internet-routable subnet whose devices can also be accessed from the internet with a public IP address. The Access Manager Identity Servers and Access Manager Access Gateways can be deployed in this subnet.

    • nam-private-subnet: A subnet whose instances reach the internet only through a NAT gateway, so access to the subnet from the internet is restricted. The Access Manager Administration Console, LDAP user stores, and back-end web servers need to be deployed in this subnet.



Note: A subnet in AWS spans only a single Availability Zone, so we have to create independent subnets per NAM device type and per Availability Zone.

For the example in this solution, we need to create a total of six subnets: three nam-public-subnets and three nam-private-subnets, one of each for the three Availability Zones of the Ohio Region.
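Before moving on to subnet creation, note that the restricted internet access for the private tier described above is provided by a NAT gateway and a dedicated route table. The boto3 sketch below shows one possible way to wire this up once the subnets exist; the vpc_id and subnet IDs are placeholders, and this is an illustration rather than the documented NAM procedure.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Placeholders: substitute the IDs of your nam-vpc and of one
# public/private subnet pair created in the next section.
vpc_id = "vpc-xxxxxxxx"
public_subnet_id = "subnet-aaaaaaaa"   # a nam-public-subnet
private_subnet_id = "subnet-bbbbbbbb"  # a nam-private-subnet

# Allocate an Elastic IP and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=public_subnet_id,
                             AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the private subnet's outbound traffic through the NAT gateway,
# so instances there can reach the internet without being reachable from it.
rtb = ec2.create_route_table(VpcId=vpc_id)
rtb_id = rtb["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=private_subnet_id)
```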

Steps to create a subnet in AWS:

    • Open the AWS VPC console

 

    • In the navigation pane, choose Subnets -> Create subnet

 

    • Provide the Name tag, and select the VPC (nam-vpc), Availability Zone, and IPv4 CIDR block as shown in the figure below.







    • Repeat the subnet creation for all the identified subnets in all the Availability Zones. A scripted alternative is shown after this list.

 

    • With this, we will have the subnet layout shown below.
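Instead of repeating the console dialog six times, the subnets can also be created with a short script. The boto3 sketch below loops over a hypothetical CIDR plan; the /24 blocks and the vpc_id are example placeholders, so match them to the values used in your own nam-vpc.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

vpc_id = "vpc-xxxxxxxx"  # the nam-vpc ID from the previous section (placeholder)

# Hypothetical CIDR plan: one public and one private subnet per Availability Zone.
subnet_plan = [
    ("nam-public-subnet-2a",  "us-east-2a", "10.0.1.0/24"),
    ("nam-public-subnet-2b",  "us-east-2b", "10.0.2.0/24"),
    ("nam-public-subnet-2c",  "us-east-2c", "10.0.3.0/24"),
    ("nam-private-subnet-2a", "us-east-2a", "10.0.11.0/24"),
    ("nam-private-subnet-2b", "us-east-2b", "10.0.12.0/24"),
    ("nam-private-subnet-2c", "us-east-2c", "10.0.13.0/24"),
]

for name, az, cidr in subnet_plan:
    resp = ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az, CidrBlock=cidr)
    subnet_id = resp["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": name}])
    print(name, az, subnet_id)
```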






[4] Deploy the NAM components in the respective subnets:

Now the Access Manager components can be deployed in the different Availability Zones and the respective subnets as shown in the following diagram. For detailed steps on installing the individual Access Manager components in AWS, please refer to the Access Manager installation guide. A minimal launch sketch is included after the legend below.



In the above diagram:

    • IDP1, IDP2, IDP3 - represent the different Identity Server instances.

    • MAG1, MAG2, MAG3 - represent the different Access Gateway instances.

    • AC1, AC2, AC3 - represent the different Administration Console instances.

    • LDAP1, LDAP2, LDAP3 - represent the different LDAP user store instances.

    • WEB1, WEB2, WEB3 - represent the different web server instances.
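As a rough illustration of the deployment step itself, the boto3 sketch below launches a single EC2 instance into a specific subnet and tags it as IDP1. The AMI ID, instance type, and subnet ID are placeholders; refer to the Access Manager installation guide for the supported images and sizing.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Placeholders: substitute the image prepared for the NAM component,
# an instance size suitable for it, and the target subnet ID.
ami_id = "ami-xxxxxxxx"
public_subnet_2a = "subnet-xxxxxxxx"  # nam-public-subnet in us-east-2a

resp = ec2.run_instances(
    ImageId=ami_id,
    InstanceType="t2.large",   # example size only
    MinCount=1,
    MaxCount=1,
    SubnetId=public_subnet_2a,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "IDP1"}],
    }],
)
print("Launched", resp["Instances"][0]["InstanceId"])
```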



High Availability and Fault tolerance scenarios:

Scenario 1: Failure of an individual Access Manager or other associated component.



Any individual NAM component failure (IDP1, AC2, MAG2, and LDAP2 in the above example) is handled by the other active instances in the same or other Availability Zones.

Scenario 2: Failure of an entire Availability Zone.



In case of an entire Availability Zone failure (us-east-2b in the above example), the load is handled by the active nodes in the other Availability Zones.

Conclusion: While deploying NetIQ Access Manager in a public cloud, we can leverage the available infrastructure components of the cloud (for example, the VPC, Availability Zones, and subnets in the case of AWS). With this, we can achieve high availability and fault tolerance for the Access Manager services. This solution demonstrates one such example on AWS.
