Consultants Corner: Quick Tip: Tuning Tips for SLES11 on VMWare running GroupWise

Many times I am asked for tuning tips/tricks for running GroupWise on SLES11 in a VMWare environment. But as I think about the wording, there is really little tuning I do for GroupWise on any operating system. More likely, I will tune the operating system for VMWare. Mind you, it seems every Linux jock has their own tips/tricks, so while I am presenting some I have found to work well with no hit to GroupWise performance, it's by no means the end-all for SLES. OK, let's get to it.

First up: installation of SLES11. Because I am installing SLES11 in a VMWare VM, I do it only one time. Then I make that tuned/updated SLES11 VM a VMWare template. Once it's a template, I can deploy from template as many SLES11 boxes as I need. This technically cannot be done for OES11 because of the eDirectory tie-in. I say technically because OES11 has a new feature set that allows for a similar option, and SLES11 is the base for OES11. For OES11, what I do is install SLES11 in a VM template, deploy from template, then install OES11 as needed.

Personal Note: While I know OES11 has many options that can make administering GroupWise easier, such as NSS, NCP and LDAP, I am of the camp that prefers to skip the overhead of OES11 for a GroupWise operating system. I prefer, recommend actually, SLES11. Windows, well, it has overhead as well, plus I am not sold on NTFS as the best file system for GroupWise. And if you are Linux-shy, no need to be. After a couple hours of training, and a small list of commands, you can manage Linux for GroupWise for 95% of your needs.


SLES11 Installation in VMWare VM:





  1. 2 CPUs

     Why 2? I want to lay down the multi-processor code should there be a future need. Once the VM is built, you can back the CPU count down to 1.

  • 4GB of RAM

     Often this is overkill as well, but it's a good starting point. Monitor the individual VMs for usage.

  • 25GB Disk

     This is for the Operating System only. Not for GroupWise data, and not for logs. Some customers have desired more here, up to 40GB. Again, 25GB seems about right.

  • Paravirtualized SCSI Controller

     Many pass this by. It is supported for SLES11, and it provides optimal performance as well as VMWare control.




    • GroupWise Disk

      The GroupWise data disk is always a 2nd disk in a VM, unless it's a test system. I want GroupWise Post Offices distributed onto separate VMDKs, each on its own VMWare Datastore (NOT shared with anything else), on its own LUN.

      • Why? Best performance. The more you place on a LUN, in general, the lower the performance potential. GroupWise Post Offices are heavy on disk I/O. And no, there is no IOPS recommendation out there for GroupWise. Just make sure this data disk can provide 1000MB per second or faster. Using hdparm or TSATEST will give you those disk I/O numbers (see the hdparm sketch after this list).

      • Raw: Often I have heard of people wanting to use a raw disk for GroupWise rather than a VMDK on a Datastore. I say, NO! If you do that, you lose some of the benefits of VMWare, and Storage vMotion is limited.

      • VMFS: The other rumor I have heard is that VMFS does not perform well for GroupWise. To this I say: Bull! I have been running GroupWise on VMWare for several years and have never seen a performance problem with VMFS.



    • 1Gb NIC (or faster, of course, if you can get it)

      • VMXNET3: You want this option for optimal speed for the VM, and for VMWare control of the VM.
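    As a quick sanity check on the GroupWise data disk, hdparm gives a rough sequential read number. This is only a sketch; it assumes the data disk shows up as /dev/sdb, so adjust the device name to match your VM.

      # rough throughput check for the GroupWise data disk
      # assumption: the 2nd (data) disk is /dev/sdb -- confirm with 'fdisk -l'
      hdparm -t /dev/sdb      # buffered (sequential) read timing
      hdparm -tT /dev/sdb     # adds a cached read figure for comparison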




    Partitions for the SLES11 Operating System:




    1. Create new SWAP, approximately 2GB in size.

    • Create new /boot, EXT2 file system, 1GB in size.

      Why? There is no need for journaling on /boot, and 1GB, while big, is enough that you need not worry about future updates.

    • Create new /, EXT3 file system, using the remaining space.

      Optional: select the NOATIME option for it in FSTAB (see the example fstab sketch after this list).

    • Optional: /var, EXT3 file system, 200GB in size.

      This does not usually apply to GroupWise, but it does for GroupWise Mobility, so I added it here.
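    For reference, here is roughly what /etc/fstab might look like with this layout. It is only a sketch; the device names and the noatime entry are assumptions, so match them to what the YaST partitioner actually creates.

      # example /etc/fstab for the layout above (device names are assumptions)
      /dev/sda1   swap    swap   defaults                  0 0
      /dev/sda2   /boot   ext2   acl,user_xattr            1 2
      /dev/sda3   /       ext3   acl,user_xattr,noatime    1 1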




    A base set of software should be installed for SLES. This includes:



    1. Server Base System

    • 32-Bit Runtime Environment (assumes 64Bit SLES 11 installation)

    • Help and Support Documentation

    • Minimal System (Appliances)

    • GNOME Desktop Environment

    • X Window System

    • De-Select Print Server

      1. De-Select cups and Samba under the Details


    • C/C++ Compiler Tools

      1. Check the entire box; you could do less, such as just Kernel Source and GCC.


    • Optional: RDP

      1. Search for the "RDP" packages to install. This allows you to use an RDP client to remote to the server.


    • Optional: Samba, which is open source.

      1. This would be used to manage GroupWise via ConsoleOne from a Windows workstation against the Linux EXT3 'volume' where a GroupWise domain exists. It is not needed for GroupWise 2014.


    • Optional: Java, Apache2 and Tomcat6

      1. Install these if this server will be a GroupWise web server for WebAccess.


    • IPv6 Off

    • Firewall Off

      1. I do not set firewalls on my GroupWise servers, at least not at the start. You can come back later and set the firewall and open ports as needed. There is no need to do it during the initial install.
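    If you prefer the command line to YaST for the firewall step, the sketch below shows the usual way to stop and disable the SLES11 firewall. It assumes the standard SuSEfirewall2 service names; verify them on your build with 'chkconfig --list'.

      # stop the firewall now and keep it off across reboots
      # (assumes the standard SLES11 SuSEfirewall2 services)
      rcSuSEfirewall2 stop
      chkconfig SuSEfirewall2_init off
      chkconfig SuSEfirewall2_setup off
      chkconfig --list | grep -i firewall    # confirm both services are off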




    The rest of the installation involves your organization's custom settings: IP address, DNS servers, etc. I can tell you that not using DHCP is recommended; give the server a static address. The rest depends. Here are some other tuning items for after the installation.

    Tuning SLES11 Server:



    1. Set Boot Loader: use the NOOP I/O scheduler (elevator=noop).

      First, verify which scheduler is active:

        cat /sys/block/sda/queue/scheduler

      If the output shows [cfq] in brackets, NOOP is not in use.

      If NOOP is not in use, open YaST, System, Boot Loader. In Section Management, select Edit on the default SUSE Linux Enterprise Server entry, and on the Optional Kernel Command Line Parameter line enter one space and then add:

        elevator=noop

      Then reboot the server.

      **You do not need to repeat this for other disks on the server.
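    If you would rather make the same change without YaST, the sketch below shows the idea on a stock SLES11 box. It assumes GRUB legacy (the SLES11 default) with /boot/grub/menu.lst and a first disk named sda; check the paths on your own template before editing.

      # check the active scheduler; the one in [brackets] is current
      cat /sys/block/sda/queue/scheduler

      # switch the running system to noop right away (not persistent)
      echo noop > /sys/block/sda/queue/scheduler

      # make it persistent: append elevator=noop to the default kernel line
      cp /boot/grub/menu.lst /boot/grub/menu.lst.bak
      vi /boot/grub/menu.lst    # add ' elevator=noop' to the end of the kernel line
      reboot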







    • MAIN.CF

      1. Edit /etc/postfix/main.cf and set:

        inet_interfaces = 127.0.0.1

      We do not want Postfix using production IP addresses.
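    To confirm the change took effect and apply it without a reboot, something like the following works on SLES11. The rcpostfix shortcut is the usual SLES init-script alias; if it is missing on your build, /etc/init.d/postfix reload does the same thing.

      # show the value Postfix is actually using
      postconf inet_interfaces

      # apply the new setting
      rcpostfix reload            # or: /etc/init.d/postfix reload

      # confirm Postfix now listens only on localhost port 25
      netstat -ltnp | grep :25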


    • Configure NTP

      1. If you have internal NTP, great. Otherwise the GroupWise VMs will have to access external NTP servers.
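    The configuration itself is just a couple of server lines in /etc/ntp.conf plus enabling the daemon. The server names below are placeholders; point them at your internal time source or a public pool of your choosing.

      # /etc/ntp.conf -- the server names are placeholders, use your own sources
      server ntp1.example.com iburst
      server ntp2.example.com iburst

      # then enable and start the daemon from a terminal:
      #   chkconfig ntp on
      #   rcntp start
      #   ntpq -p        # verify the server is syncing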


    • Edit HOSTS

      1. Even though I said no to IPv6, I like to comment out the IPv6 lines in the HOSTS file.
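    In practice that just means putting a # in front of the IPv6 entries in /etc/hosts. The exact lines vary by install; the ones below are typical of a default SLES11 file.

      # /etc/hosts -- typical SLES11 IPv6 entries, commented out
      # ::1             localhost ipv6-localhost ipv6-loopback
      # fe00::0         ipv6-localnet
      # ff00::0         ipv6-mcastprefix
      # ff02::1         ipv6-allnodes
      # ff02::2         ipv6-allrouters
      # ff02::3         ipv6-allhosts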


    • Tune GroupWise Disk

      1. Edit FSTAB and add noatime and nodiratime to the GroupWise disk, after the existing options for each partition.
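    Here is what that fstab entry might look like for the second (GroupWise data) disk. The device, mount point and existing options are assumptions for illustration; the trailing noatime,nodiratime is the change being described.

      # /etc/fstab -- example entry for the GroupWise data disk (device and mount point assumed)
      /dev/sdb1   /grpwise   ext3   acl,user_xattr,noatime,nodiratime   1 2

      # remount to pick up the new options without a reboot, then verify:
      #   mount -o remount /grpwise
      #   mount | grep grpwise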





    • Optional: Installing Apache2, Tomcat6 and Java

      1. On the SLES 11 box, Select Computer, YaST.

    • Then Select Software, Software Management

    • Select the Search tab, enter JAVA and Select Search button

    • Then Select "java-1_6_0-ibm"

    • Then in Search tab, enter Apache2, and Select Search button

    • Then Select "apache2"

    • Then in Search tab, enter Tomcat, and Select Search button

    • Then Select "tomcat6"

    • Next Select Accept

    • Close out of YaST
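    If you prefer a terminal to clicking through YaST, zypper can install the same three packages in one shot. The package names are the ones listed above; availability depends on your registered SLES11 repositories.

      # same packages as the YaST steps above, from the command line
      zypper install java-1_6_0-ibm apache2 tomcat6

      # quick check that they landed
      rpm -q java-1_6_0-ibm apache2 tomcat6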



    • Optional: Configuring for GWHA

      1. In terminal, run 'useradd gwha' to create a local user for GWHA to run as

    • In terminal, run 'passwd gwha' to set the user's password to the one used above.

    • This adds the GWHA user to any GroupWise system deployed from this VM/Template.
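    Run together, those two steps look like this; the password prompt is interactive, so there is nothing to script beyond the two commands.

      # create the local gwha user and set its password (prompts interactively)
      useradd gwha
      passwd gwha

      # confirm the account exists
      grep ^gwha /etc/passwd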





    That is my quick tip on SLES11 tuning for a VMWare environment. As you will note, I did not install a lot of extra 'stuff' and I tried to keep it tight. Is there more I could have done? Yes, and likely some of you will have tips as well. What I can tell you is that I have used this same set of information for nearly 4 years of SLES in VMWare for GroupWise. Stability and performance have been exceptional.

    Marketing Moment: If you are new to Linux and need to move from NetWare, please drop me a line, I can make it painless and you will get the training you need to be successful.


    Got comments or article ideas? Need GroupWise help? Drop me a line at: Gregg@HinchmanConsulting.com. “The Force is strong in this one.”

    Labels: How To-Best Practice

    Comments:
    • Hey Gregg,

      nice article. Please add deselecting AppArmor during software selection; that is what I always do. And why not use LVM for partitioning the system disks? I always prefer to add a separate /var partition. In all our SLES setups we use the following layout most of the time:
      20 GB vmdk
      /boot 0.2G ext3 (or ext2; it hardly matters, since it is only 0.2G and does not change during system runtime)
      volume group "system" with the rest (19.8G)
      lv root 10G / ext3
      lv swap 2G swap
      lv var 7.8G /var ext3
      optionally we extend the vmdk to 22G and add a separate volume for /tmp
      lv tmp 2G tmpfs

      Regards,
      Philipp
    • Good call on AppArmor, it's part of my standard "removals" as well.

      As for LVM, I can't speak for Gregg, but I've never had good luck with LVM on any system, be it a VM or a physical server. It generally tends to add an unnecessary layer of complexity. In a VM the hardware is presented directly, which makes it very easy to set up individual drives, and on a physical host it's really not that different. I've always found that when you use LVM you end up having to do a significant amount of work when something goes wrong (i.e., drive death or adding additional space).