Environment
ArcSight Platform 24.2
OMT version 24.1-20
Situation
There is a requirement to stop Kubernetes services and reboot one or more cluster nodes. As a best practice, you should ensure that no NFS mounts remain before carrying out this task.
After running "kube-stop.sh" and then checking with "mount | grep nfs", NFS mounts are still present.
See the example below:
[root@controlplane1 ~]# kube-stop.sh
! WARNING: One or more master nodes are down. This action will halt all container runtime services (Containerd, Kubernetes, Apphub and/or suites) on this node. This may lead to a cluster failure. Do you really want to continue? [Y|N]Y
Draining node before stopping services .................................... Done
Stopping service kubelet .................................................. Stopped
Killing all containers .................................................... Done
Stopping service containerd ............................................... Stopped
INFO kubernetes infrastructure services are stopped and cluster node is marked inactive.
INFO Remember to run kube-start.sh if you restart this node.
[root@controlplane1 ~]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfs.example.net:/opt/arcsight-nfs/itom-logging-vol on /opt/arcsight/kubernetes/data/kubelet/pods/acc7a7bb-0404-4540-967d-ea7eb11a91e3/volumes/kubernetes.io~nfs/itom-logging type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.94.73.236,local_lock=none,addr=10.94.73.227)
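To confirm whether any pod NFS mounts survived the shutdown, the "mount" output can be filtered for nfs/nfs4 entries under the kubelet data directory. The sketch below is one possible approach, not an official ArcSight procedure; the helper name and the "/kubelet/pods/" path pattern are assumptions based on the example output above.

```shell
# Hypothetical helper: read "mount"-style lines on stdin and print the
# mount points of nfs/nfs4 entries left under the kubelet pods directory.
# Field layout of mount output: $1=source, $2="on", $3=mountpoint,
# $4="type", $5=filesystem type, $6=options.
list_kubelet_nfs_mounts() {
  awk '$5 ~ /^nfs/ && $3 ~ /\/kubelet\/pods\// { print $3 }'
}
```

If stale mounts are listed, they could then be released with something like "mount | list_kubelet_nfs_mounts | xargs -r -n1 umount" (adding "-l" for a lazy unmount if a plain unmount reports the target as busy) before rebooting the node.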