This quick setup guide helps in setting up a Kubernetes cluster on an Ubuntu system, which can later be used to deploy the NetIQ Access Manager Docker images (Beta).
In this example we will create a two-node Kubernetes cluster.
These will be self-managed nodes, and any updates to the virtual machines or to Kubernetes will need to be managed by the administrator creating the nodes.
Let's start by creating two Ubuntu virtual machines; let's name one kube-master and the other kubenode1.
We can have more than two virtual machines too, and the additional virtual machines will be added as worker nodes.
In the approach below, we will use the kubeadm feature of Kubernetes to set up the cluster and the Flannel networking feature.
Create the two Ubuntu VMs, set up the necessary networking/IP configuration, set a unique hostname on each (/etc/hostname), and then update /etc/hosts on both boxes.
Disable swap by executing "swapoff -a" on both boxes; also comment out any swap entry in /etc/fstab so that swap stays disabled after a reboot.
Reboot the boxes.
This example uses "Ubuntu 18.04.1 LTS".
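For illustration, the preparation on each box might look like the following sketch; 10.71.131.170 is the master address used later in this guide, while 10.71.131.171 is a made-up worker address, so substitute your own values.
On the master (run the same steps on the worker with kubenode1 as the hostname):
sudo hostnamectl set-hostname kube-master
echo "10.71.131.170 kube-master" | sudo tee -a /etc/hosts
echo "10.71.131.171 kubenode1" | sudo tee -a /etc/hosts
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
The last command comments out the swap entry in /etc/fstab so swap stays off after the reboot.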
High-level Tasks:
1) Install binaries on all the nodes
2) Configure Kubernetes on the master node
3) Join worker nodes to the Kubernetes master
4) Deploy NetIQ Access Manager Docker images using Helm
Note: The steps below need to be executed on all the nodes.
Step 1:
Install Docker Engine:
apt-get update
apt-get install -y docker.io
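Optionally, you can confirm that the Docker engine installed correctly and is running before moving on:
docker --version
systemctl status docker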
Step 2:
Install Kubernetes binaries:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
(Optional) sudo apt-mark hold kubelet kubeadm kubectl
Enable docker service:
systemctl enable docker.service
Now, the required binaries are installed.
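As a quick sanity check that the binaries are in place (the exact version numbers will vary):
kubeadm version
kubectl version --client
kubelet --version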
NetIQ Access Manager images are deployed using the Helm tool.
Helm is an easy way to manage Kubernetes YAML files and acts as a wrapper around them.
Here we are installing Helm version 3.0 on the master node by executing the below command.
Install Helm v 3.0:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
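Once the script finishes, you can check that Helm 3 is available on the path (the reported version will vary):
helm version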
Execute the below commands on the node that needs to be designated as the master.
***************************************
swapoff -a
(kubelet, the Kubernetes node agent, does not work when swap is enabled;
for more information, refer to:
https://github.com/kubernetes/kubernetes/issues/53533)
kubeadm init --pod-network-cidr=10.244.0.0/16
*** (Record the output of the above command.
The output displays a join command that needs to be executed later to add the worker nodes to the Kubernetes cluster.)
sysctl net.bridge.bridge-nf-call-iptables=1
export KUBECONFIG=/etc/kubernetes/admin.conf
We chose the Flannel network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
***************************************
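If you prefer to run kubectl as a regular (non-root) user on the master instead of exporting KUBECONFIG, the kubeadm init output also suggests the standard copy of the admin kubeconfig, along these lines:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config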
Now that the Master is configured, let's verify if things are up on the Master Node.
Verification:
kubectl get pods --all-namespaces
The terminal output would appear as below:
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-9kcdp              1/1     Running   0          4m26s
kube-system   coredns-5644d7b6d9-pt2g9              1/1     Running   0          4m26s
kube-system   etcd-kube-master                      1/1     Running   0          3m46s
kube-system   kube-apiserver-kube-master            1/1     Running   0          3m26s
kube-system   kube-controller-manager-kube-master   1/1     Running   0          3m46s
kube-system   kube-flannel-ds-amd64-h9dgq           1/1     Running   0          33s
kube-system   kube-proxy-2vjdt                      1/1     Running   0          4m26s
kube-system   kube-scheduler-kube-master            1/1     Running   0          3m45s
***************************************
Execute the below command on all nodes that need to be designated as worker nodes.
(The token will differ; the below is just an example.)
***************************************
kubeadm join 10.71.131.170:6443 --token 0y9jwq.zkixei59b6r7rbh8 --discovery-token-ca-cert-hash sha256:7b93e15d454089885fc3ec12a832579dda9a84171e944ff2b745075da38ebc71
***************************************
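If the join command from the kubeadm init output was not recorded, a fresh one can be generated on the master at any time (the token and hash will differ from the example above):
kubeadm token create --print-join-command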
The cluster is configured and it's up and running.
Let's verify if things are as expected.
Execute the below on the master node to verify the cluster status.
kubectl get nodes
Output should be similar to below:
NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   7m58s   v1.16.3
kubenode1     Ready    <none>   2m27s   v1.16.3
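Optionally, you can also confirm that the per-node system pods (kube-proxy and the Flannel daemonset) came up on the new worker:
kubectl get pods --all-namespaces -o wide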
Now, follow the NetIQ Access Manager Docker deployment documentation to deploy the NAM Docker images to the Kubernetes cluster.
Refer to the beta release announcement:
https://community.microfocus.com/t5/Beta-Release-of-NetIQ-Access/Announcement-Beta-Release-of-deploying-Access-Manager-in-Docker/m-p/2832015#M1
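For orientation only, a Helm 3 deployment generally follows the pattern below; the actual chart name, values file, and namespace for Access Manager come from the beta documentation above, so every angle-bracketed value here is a placeholder and not the real NAM chart:
kubectl create namespace <namespace>
helm install <release-name> <chart-or-chart-path> -f <values-file>.yaml -n <namespace>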