Resizing XFS GroupWise Partition

I have a GroupWise 24.3 system and the GroupWise partition is nearly full. The partition is XFS inside a SLES 12 VM; the hypervisor host is Hyper-V. Would this work to expand the XFS partition?

1) Backup VM

2) Shutdown VM

3) Use Hyper-V tools to expand virtual disk by 100 GB

4) Boot VM

5) Once booted, shut down all GroupWise services

6) Unmount the GroupWise partition:  umount /opt

7) Use parted to grow the partition:  parted /dev/sdc resizepart 1 100%   (parted 3.x removed the old "resize" command)

8) Remount the partition:  mount /dev/sdc1 /opt

9) Grow the filesystem to fill the partition:  xfs_growfs /opt   (xfs_growfs works on the mounted filesystem)

10) Operation now complete. Start GroupWise services.
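
The umount/resize/grow part of this, as a single shell session for planning (the rcgrpwise service name is my assumption -- use whatever your install registers; note that parted 3.x replaced "resize" with "resizepart", and that xfs_growfs runs against the mounted filesystem):

```shell
rcgrpwise stop                      # stop GroupWise (service name assumed)

umount /opt                         # unmount the GroupWise volume

# Grow partition 1 into the newly added space.
# parted 3.x dropped the old "resize" command; "resizepart" is current.
parted /dev/sdc resizepart 1 100%

# Remount first -- xfs_growfs only works on a mounted filesystem --
# then grow XFS to fill the enlarged partition.
mount /dev/sdc1 /opt
xfs_growfs /opt

df -h /opt                          # sanity-check the new size
rcgrpwise start                     # bring GroupWise back up
```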

Thoughts?  Am I missing any steps?

  •

    after step 4:

    4. Attach the GParted ISO image in the VM config

    5. Boot from the ISO

    6. Resize the partition with GParted

    7. Shut down

    8. Remove the ISO from the VM config

    9. Start the VM; GroupWise and the VM run with the enlarged disk

    Approx. 10 minutes of downtime.

    George

    “You can't teach a person anything, you can only help them to discover it within themselves.” Galileo Galilei

  •

    I'm not following "GParted ISO image..."

  •

    GParted is a free partition editor (https://gparted.org/) which comes as a bootable ISO image.

    I use it a lot too, and it is quite easy to use. Just boot the VM from the GParted ISO image and do whatever you need to do with the partitions.

    Kind regards,

    Sebastijan


  •

    I am not sure whether umount /opt will work, because some other services will be using it too.

    Seeing this discussion, I agree with using GParted. I have used it in many different situations as well.

    However, here is the reason I am stepping in: you should consider migrating to SLES 15, because SLES 12 support has ended. So maybe the migration and the larger disk can be combined.



  •

    @Diethmar and Sebastijan, thanks to you both for the additions on GParted. GParted performs disk checks before and after the expansion and also warns if there are defects in the mounted file systems.

    In the few lines of my post there are still a few things in my head that I should have written down: xfs_growfs should not be used on its own; you should always run a filesystem check first, and do the same again after the expansion. That is how allocation errors or logical/physical errors in the XFS partition are detected. Nothing is worse than a defect for a file-based database system, which GroupWise basically is.
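
    A sketch of that check-grow-check cycle, with the device and mount point from the original post (note the asymmetry: xfs_repair needs the filesystem unmounted, while xfs_growfs needs it mounted):

```shell
# Read-only check before growing: -n means "no modify", report only.
# xfs_repair requires the filesystem to be unmounted.
umount /opt
xfs_repair -n /dev/sdc1

# Grow: xfs_growfs, by contrast, works on the mounted filesystem.
mount /dev/sdc1 /opt
xfs_growfs /opt

# Check again after the expansion.
umount /opt
xfs_repair -n /dev/sdc1
mount /dev/sdc1 /opt
```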

    Then to the SLES version: the migration of GroupWise from A to B is a bit too complex if you follow the KM articles and the documentation. The dance with the internal certificates is not witchcraft for experts, but those who do it for the first time may pick up some bumps. That is why I understand that a system is sometimes driven in its last groove until it really does not work anymore.

    George


  •

    I love migrations to new servers.

    I do not use the documentation but my own experience. I do not use dbcopy but rsync, because rsync is a lot faster. I prepare a new server with a temporary address and copy all the data with rsync (incrementally, e.g. every night). I also copy /opt/novell/groupwise, /etc/opt/novell ... and some more directories.

    In particular, /opt/novell/groupwise contains certificates which can be reused as they are if you do not change the server name or IP address.

    If you change the IP address or server name, then you have to struggle with the certificates.
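
    A rough sketch of that rsync scheme (the hostname "newserver" is a placeholder, and the flag choice is mine, not an official procedure):

```shell
# Incremental copy to the new server (temporary address), re-run nightly:
# -a preserves permissions/ownership/timestamps, -H keeps hard links,
# --delete mirrors deletions so the target stays consistent.
rsync -aH --delete /opt/novell/groupwise/ newserver:/opt/novell/groupwise/
rsync -aH --delete /etc/opt/novell/ newserver:/etc/opt/novell/

# For the cutover, stop GroupWise and run the same commands once more:
# the final delta is small and the data is quiescent.
```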



  •

    Hello Diethmar, you just know how to drive the GroupWise car properly ;-)

    If a GW system is not too big and the servers are VMs: with dd in | dd out and a few parameters, I can move any filesystem from A to B. A dd of a 100 GB XFS partition for GroupWise is done in 10 to 15 minutes if the I/O is right. I also take the opportunity to change the design of the target VM a little, e.g. a separate partition and controller path for the root fs, and the same again for the new GroupWise partition.

    Then what you describe.

    There are a few little tricks that I use regarding IP, FQDN, MAC address, etc. Otherwise it is as you describe: rsync or dbcopy.

    If I have the possibility to use a professional replication product, a GroupWise migration on the fly including an operating-system change is possible (too much to describe here); I have moved GW systems with 50 TB of data and more from A to B without downtime. Well, not quite: switching from A to B is a small downtime.

    In the case here, because SLES 12 SP5 is to be retained, I would do something similar: mount a new partition, take the VM down, copy with dd. In the VM config, swap the order of the old and new partitions; after the boot, depending on how fstab and mtab look, GroupWise either starts immediately or fstab has to be adjusted and a restart executed. The advantage here is that the old GW data remains immediately mountable if something should have gone wrong.
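
    The dd step can be tried safely on scratch files first; on real hardware you would point if= and of= at the device nodes with the filesystems unmounted (the temp-file stand-ins below are mine, not from the post):

```shell
src=$(mktemp)   # stand-in for the old partition, e.g. /dev/sdc1
dst=$(mktemp)   # stand-in for the new, larger partition, e.g. /dev/sdd1

# Fake some "filesystem" content on the source (4 MiB of random data).
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null

# The actual copy: a big block size keeps throughput up, and
# conv=sparse avoids writing all-zero blocks on the target.
dd if="$src" of="$dst" bs=64M conv=sparse 2>/dev/null

# Always verify the copy is bit-identical before swapping the VM's disks.
cmp -s "$src" "$dst" && echo "copy verified"
```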


  •

    Ah... got it.

  •

    I have a plan to migrate to SLES 15... but it probably won't happen for a few months yet; I'm waiting on some new hardware.

  •

    This is the first I've seen about filesystem checks. It makes sense to me... is there documentation on this somewhere? I like to plan out all my steps ahead of time before I actually perform them. Thanks