One of the most common situations you face when working with an eDirectory deployment whose objects must be constantly updated is this: write speed is paramount so that LDAP, LDAPS, and replica services are not affected. This document therefore covers the type of disk and array that should be used in order not to impact those services.
The following document is based on an installation with the following products:
To learn more about eDirectory, please refer to the following document.
To learn more about the file system types used for eDirectory:
This document does not explain how to use the Azure platform or its implications; to learn more about it, please review the documentation provided by:
Description of the situation
During critical periods of information updates, the LDAP, LDAPS, and replica services have been found to be affected or suspended because disk utilization reaches 100%. This causes authentications to stall for several seconds or to fail by timeout, to the displeasure of end users.
To resolve this issue, the amount of IOPS (Input/Output Operations Per Second) is reviewed using the tools recommended by Microsoft:
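Microsoft's benchmarking guidance for Azure disks commonly relies on fio; the following is a sketch of a single-job sequential write measurement (the file name and sizes are assumptions, not taken from the original tests):

# one sequential-write job with direct I/O, reporting IOPS and throughput
fio --name=seqwrite --rw=write --bs=64k --size=512m --numjobs=1 --ioengine=libaio --direct=1 --filename=/mnt/fiotest.bin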
However, these tests show that the disks do deliver the speeds listed in the following table:
Disk Type | Size | IOPS (burst) | Speed (burst)
P10 | 128 GiB | 500 (3,500) | 100 MB/second (170 MB/second)
P30 | 1 TiB | 5,000 (30,000) | 200 MB/second (1,000 MB/second)
This is because Azure disks are optimized for multiple concurrent write threads, but not for a single write thread; eDirectory's DIB writes, by contrast, are effectively single-threaded.
To make the test reflect that write pattern as closely as possible, we recommend using the following command:
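A minimal sketch, mirroring the synchronous, single-threaded write used in the tests later in this document:

# single write thread, 64 KB blocks, 512 MiB total, flushed to disk before dd exits
dd if=/dev/zero of=/mnt/iotest.log bs=64k count=8k conv=fdatasync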
To learn more about testing disk types on cloud platforms:
Proposed solution
The proposed solution is to use five P10 disks in a striped array, with the Azure write and read caches enabled.
This solution has already been tested and documented in the following article:
Creating the fix
The following information is taken from the previous link.
Having obtained the additional disks, we proceed with partitioning and the creation of the PV and VG. These first steps follow the traditional procedure, paying special attention to the size of the Physical Extent (PE), which is 4 MB by default. To create the partitions I used parted, though fdisk, which I consider safer to use, may be used instead; if using parted, I recommend extreme caution:
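A sketch of the parted step, assuming a new disk appears as /dev/sdX (a placeholder; adjust it to your environment):

# label the disk, create a single partition spanning it, and flag it for LVM
parted /dev/sdX mklabel gpt
parted /dev/sdX mkpart primary 0% 100%
parted /dev/sdX set 1 lvm on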
NOTE: Run this as many times as there are disks to include in the new array.
Next, the new partitions need to be configured as PVs:
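Presumably along these lines (a sketch; the device names are placeholders explained below):

pvcreate /dev/sdX1 /dev/sdW1 /dev/sdZ1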
Where: X, W, and Z correspond to the different disks to be included in the VG.
Now it is time to create the VG; to do so, just run:
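A sketch, reusing the placeholder partitions from the previous step:

vgcreate <vg-name> /dev/sdX1 /dev/sdW1 /dev/sdZ1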
Where: <vg-name> is whatever name is deemed appropriate for the volume group.
Once this point is reached, we proceed to create the LV with stripe mapping; this is the key step of the whole arrangement. Run:
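A sketch of the command (the LV name and the allocation of all free space are assumptions; the -i and -I options are explained below):

# stripe across the 5 PVs with a 4 MB stripe size, using all free extents in the VG
lvcreate -i 5 -I 4M -l 100%FREE -n <lv-name> <vg-name>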
Where:
-i 5: Indicates the number of stripes (PVs) across which the information will be distributed; this depends on the number of PVs that are part of the VG.
-I 4M: Defines the size of each stripe/chunk of information, which will be written sequentially to each PV in turn. It is important to mention that the chunk size cannot exceed the size of the PE (4 MB by default); if not set, it defaults to 64 KB per chunk. In this case, the maximum allowed by the PE configuration was used, since the DIB is quite large.
Finally, verify that the LV was created with the stripe mapping by running:
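One way to check is the mapping output of lvdisplay (a sketch, using the placeholder names from earlier):

# the segment type "striped" and a stripe count of 5 confirm the layout
lvdisplay -m /dev/<vg-name>/<lv-name>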
At this point, we have successfully created an LV in Stripe mode!!
XFS creation
Another recommendation from the eDirectory documentation concerns the type of file system. In previous versions it was ReiserFS; however, that file system is increasingly deprecated in favor of more efficient alternatives. One of them is XFS, which is supported by eDirectory 9.x as well as by SLES 15.
This file system is optimized for large files, has parallelization properties, and is a journaling file system. It also supports online resizing, which makes it ideal for use with LVM; it is a high-performance file system with its own set of utilities.
The only point to consider here is the block size used by the eDirectory FLAIM engine. By default it uses 4 KB blocks; however, when configuring an instance for the first time it can be customized to 8 KB. The block size used by FLAIM and the one configured on the file system must be the same.
The good news is that the default block size in XFS is also 4 KB, so if you did not change this parameter when configuring your eDirectory instance, no additional options are required when creating the file system.
Just run it directly:
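A sketch, again with the placeholder names; XFS's default 4 KB block size already matches FLAIM's default, so no extra options are needed:

mkfs.xfs /dev/<vg-name>/<lv-name>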
At this point we already have our LV in Stripe mode with XFS ready for use!!!
Migration to the new logical volume
For this step we must take extreme precautions, as an incorrect execution can seriously and irreparably damage the DIB.
To migrate the DIB from one LV to the other, I used the rsync command, which performs the checks necessary to ensure file integrity; other utilities can be used as well, the boldest choice being plain cp.
Procedure:
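A sketch of the copy itself, assuming the new LV is temporarily mounted at /mnt/newlv (an assumed path) and that eDirectory has been stopped so the DIB is quiescent:

# stop eDirectory so the DIB files are not changing during the copy
# (the service name may vary by installation)
systemctl stop ndsd
# copy the DIB contents; note the trailing slash on the source (see the NOTE below)
rsync -avh /PATH/TO/EDIR/DIB/ /mnt/newlv/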
NOTE: It is important to place a slash at the end of the source directory name, as this causes rsync to copy the contents of the directory into the new location; if the slash is omitted, the directory itself, along with its contents, will be copied to the destination.
mount /dev/<vg-name>/<lv-name> /PATH/TO/EDIR/DIB
NOTE 1: Do not delete the original DIB until you have confirmed that the service is running stably and error-free.
NOTE 2: Keep in mind that the more time passes, the less useful the original DIB becomes, because it falls further behind the rest of the replicas in the ring.
Update the mount point reference for the eDirectory file system in /etc/fstab, taking care to place the correct reference and file system type.
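A sketch of the resulting /etc/fstab entry, using the placeholders from earlier:

/dev/<vg-name>/<lv-name>  /PATH/TO/EDIR/DIB  xfs  defaults  0 0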
Tests performed
The following tests were performed:
Control test
Write test with the dd command:
/home/novell # dd if=/dev/zero of=/mnt/iotest.log bs=64k count=8k conv=fdatasync
8192+0 records in
8192+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 5.685 s, 94.4 MB/s
Object write test against the directory
Scenario: 50,000 objects, 1 modified attribute, 1 authentication per second, 1 P10 disk, Azure cache read on and write on

Duration of freezes in seconds | Total execution time (mm:ss)
28, 12, * | 8m31s
26, 13, * | 8m24s
24, 15, * | 8m26s
Striped test
Write test with the dd command:
/home/novell # dd if=/dev/zero of=/nds/iotest.log bs=64k count=8k conv=fdatasync
8192+0 records in
8192+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.429201 s, 1.3 GB/s
Scenario: 50,000 objects, 1 modified attribute, 1 authentication per second, 5-disk P10 array, Azure cache read on and write on

Duration of freezes in seconds | Total execution time (mm:ss)
1.546s | 3m21.752s
1.604s | 2m40.691s
1.425s | 2m55.051s
Conclusion
Using the striped array in conjunction with the caches turned on shows an undeniable improvement: freezes drop from tens of seconds to well under two seconds, and total execution time falls by roughly two thirds. This improvement should always be accompanied by backups via dsbk or Azure images.
It is always important to validate that Azure caches are turned on.
Micro Focus Trademark Information
MICRO FOCUS and the Micro Focus logo, among others, are trademarks or registered trademarks of Micro Focus (IP) Limited or its subsidiaries in the United Kingdom, United States and other countries. All other marks are the property of their respective owners.
Company name: Micro Focus International plc
Place of registration: England and Wales
Registered number: 5134647
Registered address: The Lawn, 22-30 Old Bath Road, Berkshire, RG14 1Q