
NetIQ Access Manager: How to create a preconfigured AMI of NAM with plain docker and a bridge network?


Overview

I had a requirement to create an Amazon Machine Image (AMI) for NetIQ Access Manager with preinstalled and preconfigured components.

This AMI could be consumed as-is by the sales/presales teams, and the education team could use it to demonstrate pre-configured use cases.

The AMI should serve as a golden image that can be used to spin off a new VM with a preinstalled, pre-configured, and ready-to-demonstrate NetIQ Access Manager.

As part of the requirement, we need to create a template AMI that contains not only Access Manager and its components (Admin Console, Identity Provider, Access Gateway) but also sample demo applications, including a SAML client and an OAuth client.

The template must guarantee that when a new environment is created from it, the configuration, IPs, hostnames, and URLs of the deployed components do not change.

This was a mandatory requirement from the education team, because their training courses contain instructions specific to hostnames, URLs, and IPs. Any change in the hostnames, URLs, or IPs would require a different set of instructions for each student/participant, which is not practical.

 

Brainstorming and Analysis

Since AWS does not have an easy option to deploy an appliance and configure it, this option was ruled out.

The other option was to use the container version of Access Manager on AWS. However, Access Manager is supported only on the Kubernetes container orchestrator; it does not support a plain docker installation. It also needs a minimum of two virtual machines, one as a master node and the other as a worker node. This was ruled out as well, since it would introduce a dependency on VM private IPs, which are specific to a VM within the same VPC (virtual private cloud).

The use of minikube was also ruled out: although it can run the master and worker roles on a single machine, the configuration would still be bound to the host VM's IP.

The practical idea was to create a docker bridge network with a specific private IP range. Such a bridge network is local to the host VM. Installing Access Manager on this bridge network binds the Access Manager components to the bridge network IPs. If I then create an AMI from this installation and spin off a new VM, the new VM gets a new host IP, but the Access Manager components remain bound to the bridge network inside the VM and retain their IPs, hostnames, and URLs. Every user of this AMI therefore gets an identical setup: the host IP is specific to each VM, but everything within the VM (the components, IPs, and configuration) is the same for all VMs spun off from that AMI. More on bridge networks is here: https://docs.docker.com/network/bridge/

On searching the Micro Focus community, I came across an article on deploying Access Manager using plain docker (https://community.microfocus.com/cyberres/netiq-access-management/accessmanager/w/access_manager_tips/40522/setup-access-manager-lab-using-plain-docker). However, that deployment uses the host network driver, which binds the docker containers to the host VM IP. But it gave me a start.

More info on host networking can be found here (https://docs.docker.com/network/host/).  

Prerequisites

I created a base VM using SLES 15 SP3.  Then, I installed standalone docker 20.10.17-ce and docker-compose 1.25.3.
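For reference, here is a minimal sketch of that base setup on SLES (the package name and the release URL are assumptions based on standard SLES and docker-compose conventions; verify them against your repositories and the docker-compose GitHub releases page). Keeping the docker-compose binary in the working directory is also why later steps invoke it as ./docker-compose:

# Install and start docker from the SLES repositories
sudo zypper install docker
sudo systemctl enable --now docker

# Download docker-compose 1.25.3 as a standalone binary into the current folder
curl -L https://github.com/docker/compose/releases/download/1.25.3/docker-compose-Linux-x86_64 -o docker-compose
chmod +x docker-compose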

Why docker-compose? I will answer it later.

Get AM docker images

Disclaimer: If you want to play with AM, you need to own an Access Manager license. You can also request an evaluation license, but I could not find that option on the current Micro Focus website.

There are two ways of getting the AM docker images. One is using the docker pull command; the other is to manually download the Access Manager docker images from the Micro Focus SLD (file AM_502_Containers.tar.gz) and then load them into docker using the command

docker load --input AM_502_Containers.tar.gz

I’ve decided to use the first approach. 
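For completeness, a sketch of the pull approach. The registry path is a placeholder (take the real one from the Access Manager documentation); the tags match the images referenced in docker-compose.yml later in this article:

# <registry> is a placeholder for the actual Access Manager image registry
docker pull <registry>/am-ac:5.0.2.0-309
docker pull <registry>/am-idp:5.0.2.0-309
docker pull <registry>/am-ag:5.0.2.0-309
docker pull <registry>/am-edir:9.2.5.0

# Retag to the short names used in docker-compose.yml (repeat for each image)
docker tag <registry>/am-ac:5.0.2.0-309 am-ac:5.0.2.0-309

# Verify the images are available locally
docker images | grep am-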

Step 1: Create a bridge network

docker network create \
  --driver=bridge \
  --subnet=172.17.5.0/24 \
  --ip-range=172.17.5.0/24 \
  --gateway=172.17.5.1 \
  idmbridge

In the above command, note that the subnet starts at IP address 172.17.5.0 and ends at 172.17.5.255; you can change the IP range as per your needs. The name of the network is idmbridge, and it uses the bridge driver.
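To confirm the network was created with the expected settings, you can inspect it:

docker network ls | grep idmbridge
docker network inspect idmbridge    # shows the subnet, IP range, and gateway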

 

Step 2: Define the environment variables (.env file)

adminuser="admin"
adminpwd="n0v3ll"
hostip=172.31.32.246
edirstorage="/data/amdata/am/data-edir"
acstorage="/data/amdata/am/data-ac"
idpstorage="/data/amdata/am/data-idp"
agstorage="/data/amdata/am/data-agw" 
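Once the docker-compose.yml from Step 4 below is in place, you can verify that these variables are picked up and substituted correctly by rendering the resolved configuration:

./docker-compose config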

Step 3: Create the NAM data folders (createamenv.sh)

The AM administrator username and password are passed to each container by creating files in the <storage>/config/secret folder.

Note: Files with administrator username and password are automatically removed after container configuration.

#!/bin/bash
export adminuser="admin"
export adminpwd="n0v3ll"
export hostip=172.31.32.246
export edirstorage="/data/amdata/am/data-edir" # eDirectory container storage folder
export acstorage="/data/amdata/am/data-ac" # Admin Console container storage folder
export idpstorage="/data/amdata/am/data-idp" # Identity Server container storage folder
export agstorage="/data/amdata/am/data-agw" # Access Gateway container storage folder
export timezonestorage="/data/amdata/am/timezone" # Timezone storage folder (used by all containers)

echo "Creating folders"
echo ${timezonestorage}
mkdir -p ${timezonestorage}
cp /etc/localtime ${timezonestorage}/
echo ${edirstorage}
mkdir -p ${edirstorage}
echo "creating secret files ...."
echo ${adminpwd} > ${edirstorage}/admin_password
for secretpath in ${acstorage}/config/secret ${idpstorage}/config/secret ${agstorage}/config/secret; do
  echo ${secretpath}
  mkdir -p ${secretpath}
  echo ${adminuser} > ${secretpath}/admin_name
  echo ${adminpwd} > ${secretpath}/admin_password
done
chmod -R 7777 /data/amdata
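Before starting the containers (which consume and then delete the secret files), a quick sanity check that the folders and files were created as expected:

find /data/amdata -type f
# Expected: admin_password under data-edir, admin_name and admin_password
# under each component's config/secret folder, plus the copied localtime file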

Step 4: Create a “docker-compose.yml” file

# Relies on version 2.1 to handle env variable defaults: https://docs.docker.com/compose/compose-file/#variable-substitution
version: "2.1"
services:
  amag:
    image: am-ag:5.0.2.0-309
    depends_on:
      - edir
      - adminconsole
      - amidp
    hostname: namag1.cyberxdemo.com
    container_name: namag1
    volumes:
      - ${timezonestorage}/localtime:/etc/localtime
      - ${agstorage}/config/jcc:/opt/novell/devman/jcc/conf/runtime
      - ${agstorage}/config/jcc_certs:/opt/novell/devman/jcc/certs
      - ${agstorage}/config/esp:/opt/novell/nesp/lib/webapp/WEB-INF/conf/runtime
      - ${agstorage}/config/agm:/opt/novell/nam/mag/webapps/agm/WEB-INF/conf/runtime
      - ${agstorage}/config/apache_cacerts:/etc/opt/novell/apache2/conf/cacerts
      - ${agstorage}/config/apache_certs:/etc/opt/novell/apache2/conf/certs
      - ${agstorage}/config/apache_clientcerts:/etc/opt/novell/apache2/conf/clientcerts
      - ${agstorage}/config/cache:/var/cache/novell-apache2
      - ${agstorage}/config/syslog:/opt/novell/syslog
      - ${agstorage}/config/apache_current:/opt/novell/nam/mag/webapps/agm/WEB-INF/config/current
      - ${agstorage}/config/apache_conf:/opt/novell/nam/mag/webapps/agm/WEB-INF/config/apache2
      - ${agstorage}/config/secret:/opt/novell/nam/docker/secret
      - ${agstorage}/config/default_files:/opt/novell/nam/default_configfiles_productbackup
      - ${agstorage}/logs/custom:/opt/novell/nam/docker/log_volume
      - ${agstorage}/logs/tomcat:/var/opt/novell/nam/logs/mag/tomcat
      - ${agstorage}/logs/nesp:/var/opt/novell/nam/logs/mag/nesp/nidplogs
      - ${agstorage}/logs/amlogging:/var/opt/novell/amlogging/logs
      - ${agstorage}/logs/jcc:/var/opt/novell/nam/logs/jcc
      - ${agstorage}/logs/apache2:/var/log/novell-apache2
      - ${agstorage}/logs/activemq:/var/log/activemq
      - ${agstorage}/logs/proxylogs:/var/log/novell/reverse
      - ${agstorage}/logs/configuration:/tmp/novell_access_manager
      - ${agstorage}/logs/syslog:/var/opt/novell/syslog
      - ${agstorage}/custom/other_customization:/opt/novell/nam/docker/custom_volume
      - ${agstorage}/custom/lists:/opt/novell/nam/docker/lists/runtime
      - /etc/hosts:/etc/hosts
    environment:
      admin_name: ${adminuser}
      ac_ip: 172.17.5.101
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - KILL
      - FOWNER
      - DAC_OVERRIDE
      - SETGID
      - SETUID
      - AUDIT_WRITE
      - NET_BIND_SERVICE
    networks:
      default:
        ipv4_address: 172.17.5.103 
  amidp:
    image: am-idp:5.0.2.0-309
    depends_on:
      - edir
      - adminconsole
    hostname: namidp1.cyberxdemo.com
    container_name: namidp1
    volumes:
      - ${timezonestorage}/localtime:/etc/localtime
      - ${idpstorage}/config/jcc:/opt/novell/devman/jcc/conf/runtime
      - ${idpstorage}/config/certs:/opt/novell/devman/jcc/certs
      - ${idpstorage}/config/nidp:/opt/novell/nids/lib/webapp/WEB-INF/conf/runtime
      - ${idpstorage}/config/syslog:/opt/novell/syslog
      - ${idpstorage}/config/plugins:/opt/novell/nam/idp/plugins
      - ${idpstorage}/config/secret:/opt/novell/nam/docker/secret
      - ${idpstorage}/config/default_files:/opt/novell/nam/default_configfiles_productbackup
      - ${idpstorage}/logs/tomcat:/var/opt/novell/nam/logs/idp/tomcat
      - ${idpstorage}/logs/nidp:/var/opt/novell/nam/logs/idp/nidplogs
      - ${idpstorage}/logs/jcc:/opt/novell/devman/jcc/logs
      - ${idpstorage}/logs/custom:/opt/novell/nam/docker/log_volume
      - ${idpstorage}/logs/configuration:/tmp/novell_access_manager
      - ${idpstorage}/logs/syslog:/var/opt/novell/syslog
      - ${idpstorage}/custom/other_customization:/opt/novell/nam/docker/custom_volume
      - ${idpstorage}/custom/lists:/opt/novell/nam/docker/lists/runtime
      - /etc/hosts:/etc/hosts
    environment:
      admin_name: ${adminuser}
      ac_ip: 172.17.5.101
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - KILL
      - FOWNER
      - DAC_OVERRIDE
      - SETGID
      - SETUID
      - AUDIT_WRITE
      - NET_BIND_SERVICE
    networks:
      default:
        ipv4_address: 172.17.5.102
  edir:
    image: am-edir:9.2.5.0
    hostname: namac1.cyberxdemo.com
    container_name: namac1
    ports:
      - "524"
      - "8028"
      - "8030"
      - "389"
      - "636"
    volumes:
      - ${timezonestorage}/localtime:/etc/localtime
      - ${edirstorage}:/config/eDirectory
      - /etc/hosts:/etc/hosts
    environment:
      admin_name: ${adminuser}
      ac_ip: 172.17.5.101
      NDSD_DISABLE_CRL_CONFIG: 1
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - KILL
      - FOWNER
      - DAC_OVERRIDE
      - SETGID
      - SETUID
      - AUDIT_WRITE
      - NET_BIND_SERVICE
    networks:
      default:
        ipv4_address: 172.17.5.101
    command: new -t gnl-am-tree -n o=novell -S namedir1 -B 172.17.5.101@524 -o 8028 -O 8030 -L 389 -l 636 -a cn=admin.o=novell -w file:/config/eDirectory/admin_password --configure-eba-now no
  adminconsole:
    image: am-ac:5.0.2.0-309
    depends_on:
      - edir
    volumes:
      - /etc/hosts:/etc/hosts
      - ${timezonestorage}/localtime:/etc/localtime
      - ${edirstorage}:/var/opt/novell/eDirectory
      - ${acstorage}/logs/tomcat:/var/opt/novell/nam/logs/adminconsole/tomcat
      - ${acstorage}/logs/volera:/var/opt/novell/nam/logs/adminconsole/volera
      - ${acstorage}/logs/configuration:/tmp/novell_access_manager
      - ${acstorage}/logs/syslog:/var/opt/novell/syslog
      - ${acstorage}/config/certs:/var/opt/novell/novlwww
      - ${acstorage}/config/iManager:/var/opt/novell/iManager/nps/WEB-INF/config
      - ${acstorage}/config/secret:/opt/novell/nam/docker/secret
      - ${acstorage}/config/data:/opt/novell/nam/adminconsole/data
      - ${acstorage}/config/default_files:/opt/novell/nam/default_configfiles_productbackup
      - ${acstorage}/custom/other_customization:/opt/novell/nam/docker/custom_volume
    environment:
      ac_ip: 172.17.5.101
    network_mode: service:edir
networks:
  default:
    name: idmbridge
    external: true 

Explanation of docker-compose.yml

We are creating four services: amag (Access Gateway), amidp (Identity Provider), adminconsole (Admin Console), and edir (eDirectory). The edir service has no dependencies; adminconsole depends on edir; amidp depends on edir and adminconsole; and finally amag depends on edir, adminconsole, and amidp.

The environment uses the idmbridge bridge network. The edir and adminconsole containers share the same IP, 172.17.5.101. The Admin Console looks for eDirectory on its own IP, so both components must be bound to the same address. This cannot be achieved using “docker run”, so I was forced to use docker-compose.
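The piece of docker-compose.yml that makes this possible is the network_mode directive on the adminconsole service, which attaches it to the edir container's network namespace (and therefore to its IP) instead of giving it an address of its own:

  adminconsole:
    ...
    network_mode: service:edir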

We have given three IPs and three hostnames to the Access Manager components.  The volumes sections map the data folder structure into the docker containers.
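Note that every service also bind-mounts the host's /etc/hosts, so the hostname-to-IP mappings must exist on the host VM. A sketch of the expected entries, based on the hostnames and bridge IPs used above:

172.17.5.101   namac1.cyberxdemo.com    namac1
172.17.5.102   namidp1.cyberxdemo.com   namidp1
172.17.5.103   namag1.cyberxdemo.com    namag1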

The rest of docker-compose.yml is self-explanatory.

 

Step 5: Start the NAM environment

chmod +x *.sh
./createamenv.sh
./docker-compose up -d

Please note that createamenv.sh should be executed just once.
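To confirm that all containers came up and got the expected bridge addresses:

./docker-compose ps
docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' namac1 namidp1 namag1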

Step 6: Stop the NAM environment

./docker-compose down
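Because all configuration lives in the bind-mounted /data/amdata folders, and the external idmbridge network survives a down, the environment can be brought back later unchanged:

./docker-compose up -d    # same IPs, hostnames, and configuration as before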

 Troubleshooting

To see the installation progress, run:

eDirectory

docker logs -f namac1

 Installation is done when you see something like:

Configuring NMAS service... Done
Configuring SecretStore... Done
Configuring HTTP Server with default SSL CertificateDNS certificate... Done
Configuring LDAP Server with default SSL CertificateDNS certificate... Done
The instance at /config/eDirectory/inst/conf/nds.conf is successfully configured.
Creating version file...done
done
Press ctrl+p ctrl+q to continue. This would detach you from the container.
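Tip: instead of following each container separately, you can tail all services at once:

./docker-compose logs -f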

 Admin Console container

To see the installation progress, run:

docker logs -f aminstall_adminconsole_1

 Installation is done when you see something like:

For information regarding this installation check the log file directory at /tmp/novell_access_manager.
Installation is complete.
Press ctrl+p ctrl+q to continue. This would detach you from the container.
Now you can log into admin console using URL: https://namac1.cyberxdemo.com:2443/nps

 Identity Server container

To see the installation progress, run:

docker logs -f namidp1

 Installation is done when you see something like:

For information regarding this installation check the log file directory at /tmp/novell_access_manager.
To configure the installed service, log into the Administration Console at https://namac1.cyberxdemo.com:8443/nps using the user ID "am-admin".
Installation is complete.

Press ctrl+p ctrl+q to continue. This would detach you from the container.

As you can see, the Admin Console URL port mentioned in the output is wrong (it should be 2443, not 8443), but I assume this is just a hardcoded message in the installation scripts.

Now log into the Admin Console and wait until you see the Identity Server imported. Be patient, because it might take some time.

 Access Gateway container

To see the installation progress, run:

docker logs -f namag1

 Installation is done when you see something like:

 For information regarding this installation check the log file directory at /tmp/novell_access_manager.
To configure the installed service, log into the Administration Console at https://namac1.cyberxdemo.com:8443/nps using the user ID "am-admin".
Installation is complete.

Press ctrl+p ctrl+q to continue. This would detach you from the container.

As you can see, the Admin Console port is wrong here, too. Now log into the Admin Console and wait until you see the Access Gateway imported. Again, it might take some time.

Conclusion

After this setup, I configured the Identity Provider with an eDirectory user store running in another container on the same idmbridge network.  I also configured the Access Gateway.
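For illustration, attaching such an additional container to the same bridge with a fixed address is straightforward; a sketch, in which the container name, image name, and IP are assumptions:

docker run -d --name demoedir \
  --network idmbridge \
  --ip 172.17.5.110 \
  <your-edirectory-image>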

As a demo application, I integrated Access Manager with Salesforce using a SAML2 integration.

Once this setup was done, I stopped all services and created the AMI. Using this AMI, I spun up a few more VMs. Each VM has a working Access Manager, and the Salesforce integration works fine in all of them.

But please keep in mind that this setup is intended for lab/test use, not production. And of course, it is not supported by Micro Focus.

Labels:

Access Manager