Bare Metal Recovery / Disaster Recovery (Failed to start omni.socket) - Data Protector 11.0.1

Dear , 

Hope you are doing well! 

We are having a problem with one of our systems while doing Disaster Recovery (it fails with the error "Failed to start omni.socket"). I have observed that the system uses the XFS file system, and while backing it up we get warnings like:

[Warning] From: abc.local "abc.local [/]"  Time: 4/24/2024 5:38:00 PM
     /run
     Directory is a mount point to a different filesystem.
     Backed up as empty directory without extended attributes and ACLs.

We have tried everything mentioned below:

1- omnirc file: enabled OB2_ALLOW_REMOTE_MOUNT_POINT=1 (see the sketch below for how we set it), but when we used it, the backup started including the whole cluster storage volume, roughly 128 TB, while the actual volume is only 100 GB (this server is a VM to which we assigned 100 GB).

Data Protector 11.0.1 (tested with both the P1 and P2 patches) - FAILED
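This is how we enabled it (a minimal sketch, assuming the default omnirc location on a Linux Data Protector client, /opt/omni/.omnirc, created from /opt/omni/.omnirc.TMPL if it does not exist yet; the variable name is the one from our test):

    # append the variable to the client's omnirc file
    echo 'OB2_ALLOW_REMOTE_MOUNT_POINT=1' >> /opt/omni/.omnirc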

We don't understand how and why this warning is always there. If the problem is with the backup, what is the solution? Or is it something in the Phase 1 recovery, where it says FAILED TO START OMNI.SOCKET, and how can that be resolved?

Please help! 

Filesystem                     Type     1K-blocks     Used Available Use% Mounted on
devtmpfs                       devtmpfs   5940644        0   5940644   0% /dev
tmpfs                          tmpfs      5970620       64   5970556   1% /dev/shm
tmpfs                          tmpfs      5970620     1096   5969524   1% /run
tmpfs                          tmpfs      5970620        0   5970620   0% /sys/fs/cgroup
/dev/mapper/rhel               xfs       99388008 34689628  64698380  35% /
/dev/sda2                      xfs        1038336   365936    672400  36% /boot
/dev/sda1                      vfat        204580     9748    194832   5% /boot/efi
tmpfs                          tmpfs      1194124       16   1194108   1% /run/user/42
tmpfs                          tmpfs      1194124        0   1194124   0% /run/user/0


# /etc/fstab
# Created by anaconda on Thu Jan 27 15:49:59 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel_root /                       xfs     defaults        0 0
UUID=22311905-f85a-4f15-9078-8da15e5bdb9d /boot                   xfs     defaults        0 0
UUID=8D6A-84BA          /boot/efi               vfat    umask=0077,shortname=winnt 0 0
/dev/mapper/rhel_ swap                    swap    defaults        0 0

 

  • 0  

    The first question is: why is /run seen as a separate file system? If you browse this client in the Backup context in the GUI then do you see /run as a separate object or is this directory only seen under the root (/) object?

    Next, the "FAILED TO START OMNI.SOCKET" looks like a completely different issue, but may be related after all. I see a systemd directory under /run when I check on a test system. So maybe you should concentrate on the /run problem first of all. Once that's fully understood, it should be clear whether the DR problem is something else or just a consequence.
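    If you want to double-check on the client itself, standard util-linux tools will show how /run is mounted (a quick sketch; exact output will vary):

        findmnt /run
        # typical output:
        # TARGET SOURCE FSTYPE OPTIONS
        # /run   tmpfs  tmpfs  rw,nosuid,nodev,mode=755
        mountpoint /run    # prints "/run is a mountpoint" for a separate mount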

  • 0 in reply to   

    Hi, 

    Thanks for the response! 
    I can see /run under the root (/) directory.

    In the filesystem backup I can see 4 objects:

    1- /  (under this object I can see /run)
    2- /boot
    3- /boot/efi
    4- /CONFIGURATION


    We are running a Hyper-V environment and we have SAN cluster storage (on the failover cluster we have assigned a 1.2 TB shared volume from the 3PAR SAN cluster storage).

    In Hyper-V we created the VM and assigned its HDD by selecting the SAN directory (c:/clusterstorage/volume/server.vhdx); the volume size is 100 GB.

  • 0   in reply to   

    While you should be able to get this working, aren't there much easier ways of doing DR at the Hyper-V level which would avoid going through the pain of setting up and testing EADR? Just a suggestion, not saying that this won't work!

  • 0 in reply to   

    Sorry, I didn't understand your last statement! Would you please clarify it a bit more, so that maybe I can follow your point of view?

    While you should be able to get this working, aren't there much easier ways of doing DR at the Hyper-V level which would avoid going through the pain of setting up and testing EADR?

  • 0   in reply to   

    I would need to take a closer look as I'm not too familiar with Hyper-V. But since this is a VM, I'm sure it would be easier to just restore a snapshot of it rather than backing up this system as a physical client and performing DR.

  • 0 in reply to   

    Thanks for the response. Actually, I did the same thing on Windows and it worked for me: we have the same setup there and the Windows DR process was successful. Our motive is to see how we can perform this DR on a physical Linux server (the same settings are present there too). At the VM level we have options, but with a physical Linux server recovery we would be stuck in the future. The file system and directory layout on our physical Linux servers is the same, so I am sure the same problem will occur on a physical server as well.

    That's why I am looking for a solution and guidance on where and what is wrong and how we can get rid of it. I have been working very hard to find a solution, but nothing has worked so far.

  • 0   in reply to   

    OK, I understand and this makes sense. The idea is to test this so it can be applied to a physical server. Got it.

    I suggested to the engineer assigned to your case to get the ISO, so we can take a closer look and try to reproduce this. I think you have sent it already. I'll let the support guys do their work. Feel free to share any questions, doubts or the solution here again. Thanks.

  • 0 in reply to   

    Yes, you are right! One of your representatives asked for the ISO; I have already shared it with them and am waiting for their response!
    But if any new suggestion or finding related to my issue comes up, please share it with me. I am looking forward to your great support and response!

  • Verified Answer

    +1 in reply to 

    Found the solution to this problem: one of the Micro Focus representatives shared it, and CPE was also involved.

    The problem was with the xinetd service. What I did was uninstall xinetd by performing this step:

    yum erase xinetd -> the command with which xinetd is removed
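    For anyone hitting the same issue, this is roughly the sequence I followed (a minimal sketch, assuming a systemd-based RHEL client; the omni.socket unit name is the one from the error message, so adjust it if your installation registers the service differently):

        # check whether xinetd still owns the Data Protector inet service
        ls /etc/xinetd.d/ | grep -i omni

        # remove xinetd so it no longer conflicts with the systemd socket unit
        yum erase xinetd

        # reload systemd and bring the socket unit up
        systemctl daemon-reload
        systemctl enable --now omni.socket
        systemctl status omni.socket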

    Thanks for your help!!!! :)