Today’s Traditional Storage Problems:
Traditional thick provisioning introduces a lot of storage waste: our customers may have 30% to 50% of their allocated space sitting unused. For every 10 TB purchased, 3 to 5 TB are left idle because of inefficient provisioning, which means customers spend more on disks and systems to keep the business running, since they cannot access this stranded storage. Thin provisioning overcomes these problems and improves storage utilization for our customers.
Provisioning Types:
Thick Provisioning:
With thick provisioning, the full declared size of a virtual disk is allocated on the datastore up front, regardless of how much data is actually written to it.
Deployment Scenario 1:
[root@OESVMGUEST:~] nlvm list devices more
Name Size Used Free Format Shared RAID Sync Maj:Min SectorSize(logical/physical) Provisioning
sdg 10.00GB 9.99GB 0KB GPT No No 8:96 512b/512b Thick
[root@OESVMGUEST:~] sg_get_lba_status /dev/sdg
descriptor LBA: 0x0000000000000000 blocks: 20971520 mapped (or unknown)
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_7-flat.vmdk
10.0G oes_7-flat.vmdk (Entire 10G space is pre-allocated before use)
[root@OESVMGUEST:~] df -hT
/dev/pool/THICK_POOL nsspool 10G 73M 10G 1% /opt/novell/nss/mnt/.pools/THICK_POOL
THICK_VOL nssvol 10G 592K 10G 1% /media/nss/THICK_VOL
[root@OESVMGUEST:~] sg_get_lba_status /dev/sdg
descriptor LBA: 0x0000000000000000 blocks: 20971520 mapped (or unknown)
[root@OESVMGUEST:~] df -hT
/dev/pool/THICK_POOL nsspool 10G 1.1G 9.0G 11% /opt/novell/nss/mnt/.pools/THICK_POOL
THICK_VOL nssvol 10G 978M 9.0G 10% /media/nss/THICK_VOL
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_7-flat.vmdk
10.0G oes_7-flat.vmdk
It can be observed that even though only a 1 GB file has been copied into the NSS volume (created on a pool on the thick device), the datastore still shows 10 GB of physical disk space in use.
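A quick way to see the gap between provisioned and allocated space from the ESXi side (a minimal sketch, not part of the original scenario) is to compare the logical file size reported by ls with the blocks actually allocated as reported by du:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] ls -lh oes_7-flat.vmdk (reports the full 10 GB logical size)
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_7-flat.vmdk (reports blocks actually allocated; for a thick disk the two match)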
[root@OESVMGUEST:~] fstrim -v /opt/novell/nss/mnt/.pools/THICK_POOL
fstrim: /opt/novell/nss/mnt/.pools/THICK_POOL: the discard operation is not supported
Since fstrim cannot trim the unused blocks on a pool created on a thick device, that device space is wasted. In practice, a lot of the allocated disk space is never used, and this is where thin provisioning kicks in.
Thin Provisioning:
With thin provisioning, physical space is allocated on demand, only as data is actually written to the virtual disk.
Deployment Scenario 2:
[root@OESVMGUEST:~] nlvm list devices more
Name Size Used Free Format Shared RAID Sync Maj:Min SectorSize(logical/physical) Provisioning
sdh 10.00GB 9.99GB 0KB GPT No No 8:112 512b/512b Thin
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
0 oes_8-flat.vmdk
After the disk is initialized on the OES VM guest:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
2.0M oes_8-flat.vmdk (2 MB is allocated as data is written to the thin disk)
After an NSS pool and volume are created on the thin device:
[root@OESVMGUEST:~] df -hT
/dev/pool/THIN_POOL nsspool 10G 73M 10G 1% /opt/novell/nss/mnt/.pools/THIN_POOL
THIN_VOL nssvol 10G 592K 10G 1% /media/nss/THIN_VOL
ESX Datastore size:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
8.0M oes_8-flat.vmdk (8 MB is allocated as data is written to the thin disk)
After copying a 1 GB file into the NSS volume:
[root@OESVMGUEST:~] df -hT
/dev/pool/THIN_POOL nsspool 10G 1.1G 9.0G 11% /opt/novell/nss/mnt/.pools/THIN_POOL
THIN_VOL nssvol 10G 978M 9.0G 10% /media/nss/THIN_VOL
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
984.0M oes_8-flat.vmdk
It can be observed that after copying a 1 GB file into the NSS volume (created on a pool on the thin device), the datastore occupies only 984 MB of physical disk space. The rest of the space can still be used to create new pools.
[root@OESVMGUEST:~] rm -rf /media/nss/THIN_VOL/file1
[root@OESVMGUEST:~] fstrim -v /opt/novell/nss/mnt/.pools/THIN_POOL
/opt/novell/nss/mnt/.pools/THIN_POOL: 976.8 MiB (1024278528 bytes) trimmed
ESX Datastore size:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
8.0M oes_8-flat.vmdk (the free space is released back to the storage system)
[root@OESVMGUEST:~] sg_get_lba_status /dev/sdh -m 1000
descriptor LBA: 0x0000000000000000 blocks: 4096 mapped (or unknown)
descriptor LBA: 0x0000000000001000 blocks: 30720 deallocated
descriptor LBA: 0x0000000000008800 blocks: 395264 mapped (or unknown)
descriptor LBA: 0x0000000000069000 blocks: 583680 deallocated
descriptor LBA: 0x00000000000f7800 blocks: 1607680 mapped (or unknown)
descriptor LBA: 0x0000000000280000 blocks: 7862272 deallocated
descriptor LBA: 0x00000000009ff800 blocks: 2048 mapped (or unknown)
descriptor LBA: 0x0000000000a00000 blocks: 10479616 deallocated
descriptor LBA: 0x00000000013fe800 blocks: 6144 mapped (or unknown)
descriptor LBA: 0x0000000001400000 blocks: 0 mapped (or unknown)
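As a rough check (a minimal awk sketch, assuming the 512-byte logical sector size reported by nlvm above), you can total the deallocated extents from this output to estimate how much space the trim returned:
[root@OESVMGUEST:~] sg_get_lba_status /dev/sdh -m 1000 | awk '/deallocated/ {blocks += $5} END {printf "%.1f GiB deallocated\n", blocks*512/2^30}'
On the output above, this sums to roughly 9 GiB of deallocated space.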
Since fstrim can trim the unused blocks on an NSS pool created on a thin device, deleting file1 and running fstrim on the pool returned the occupied ~1 GB (976.8 MiB) of storage to the datastore, shrinking the VMDK back to 8.0 MB as shown in the step above. The freed space can be used to create a fresh pool on a thin device. Thin provisioning thus offers a new approach to storage provisioning, one that can eliminate wasted capacity.
OES NSS Support for Discarding Unused Blocks:
Beginning with OES 2018 SP2, OES supports thin provisioning on a storage device, which allows you to discard (trim) blocks that are not in use by the pool. The fstrim command trims the unused blocks in an NSS pool; only purgeable blocks are discarded. fstrim is supported only on thin-provisioned SCSI devices with VMware ESXi on a linear target.
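Before running fstrim, you can confirm that the kernel sees the device as discard-capable (a minimal sketch; sdh is the thin device from the scenarios above):
[root@OESVMGUEST:~] lsblk --discard /dev/sdh (non-zero DISC-GRAN and DISC-MAX values indicate discard support)
[root@OESVMGUEST:~] cat /sys/block/sdh/queue/discard_max_bytes (0 means the device does not support discard)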
1. Using fstrim command to trim unused blocks:
Using the fstrim command on NSS pools created on thin- and thick-provisioned devices:
# fstrim -v /opt/novell/nss/mnt/.pools/THICKPOOL
fstrim: /opt/novell/nss/mnt/.pools/THICKPOOL: the discard operation is not supported
# fstrim -v /opt/novell/nss/mnt/.pools/THINPOOL
/opt/novell/nss/mnt/.pools/THINPOOL: 98.2 MiB (103006208 bytes) trimmed
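Because fstrim reclaims space only at the moment it runs, you may want to run it on a schedule. A minimal cron sketch (the script path, schedule, and log file are illustrative, not from the product documentation): save the following as /etc/cron.weekly/trim-thinpool and make it executable:
#!/bin/bash
# Trim unused, purgeable blocks on the thin-provisioned NSS pool and log the result
/usr/sbin/fstrim -v /opt/novell/nss/mnt/.pools/THINPOOL >> /var/log/fstrim-thinpool.log 2>&1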
2. NLVM fstrim Plug-in:
Trims the unused blocks in the NSS pool named THINPOOL and then displays the trimmed size in bytes
# nlvm fstrim pool thinpool (by default, nlvm trims unused blocks synchronously)
THINPOOL: 98.00MB (102760448 bytes) trimmed
Displays the fstrim state, last executed time, and last trimmed size on the specified pool
# nlvm fstrim status thinpool
Status: Not running
Last execution time: Mon Nov 25 21:29:09 2019
Last trimmed size: 98.00MB (102760448 bytes)
3. NSSMU fstrim Plug-in:
A new fstrim option (T) has been added in NSSMU to trim the unused blocks on the selected NSS pool. Executing this option runs the fstrim process in the background. For more information on fstrim, see /var/log/messages.
Go to the pool information page to view the fstrim state, last executed time, and last trimmed size for the selected pool. The fstrim status is reported only for thin-provisioned NSS pools.
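Since the trim triggered from NSSMU (or iManager, below) runs in the background, a simple way to follow its progress (a sketch, assuming standard syslog logging to /var/log/messages as mentioned above) is:
[root@OESVMGUEST:~] tail -f /var/log/messages | grep -i fstrim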
4. iManager fstrim Plug-in:
A new fstrim option has been added in the iManager Storage Pool plug-in to trim the unused blocks on the selected NSS pool. Executing this option runs the fstrim process in the background. For more information on fstrim, see /var/log/messages.
Go to the pool information page to view the fstrim state, last executed time, and last trimmed size for the selected pool. The fstrim status is reported only for thin-provisioned NSS pools.
5. Blk_discard_on_delete switch in nlvm.conf file:
A new option, “Blk_discard_on_delete on”, has been added in the /etc/opt/novell/nss/nlvm.conf file to discard the blocks on a thin-provisioned storage disk when you delete a pool created on that disk.
# Blk_discard_on_delete will discard blocks on a thin provisioned disk
# when a partition on it is deleted.
# Blk_discard_on_delete is off. Uncomment the command to turn it on.
# Blk_discard_on_delete on
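To turn the option on, uncomment the last line shown above; a minimal sketch that does this with sed after taking a backup:
[root@OESVMGUEST:~] cp /etc/opt/novell/nss/nlvm.conf /etc/opt/novell/nss/nlvm.conf.bak
[root@OESVMGUEST:~] sed -i 's/^# *Blk_discard_on_delete on/Blk_discard_on_delete on/' /etc/opt/novell/nss/nlvm.conf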
Deployment Scenario using Blk_discard_on_delete:
[root@OESVMGUEST:~] nlvm list devices more
Name Size Used Free Format Shared RAID Sync Maj:Min SectorSize(logical/physical) Provisioning
sdh 10.00GB 9.99GB 0KB GPT No No 8:112 512b/512b Thin
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
0 oes_8-flat.vmdk
After the disk is initialized:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
2.0M oes_8-flat.vmdk
After an NSS pool and volume are created on the thin device:
[root@OESVMGUEST:~] df -hT
/dev/pool/THIN_POOL nsspool 10G 73M 10G 1% /opt/novell/nss/mnt/.pools/THIN_POOL
THIN_VOL nssvol 10G 592K 10G 1% /media/nss/THIN_VOL
ESX datastore size after the NSS pool is created on the thin device:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
8.0M oes_8-flat.vmdk
After copying a 1 GB file into the NSS volume:
[root@OESVMGUEST:~] df -hT
/dev/pool/THIN_POOL nsspool 10G 1.1G 9.0G 11% /opt/novell/nss/mnt/.pools/THIN_POOL
THIN_VOL nssvol 10G 978M 9.0G 10% /media/nss/THIN_VOL
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
984.0M oes_8-flat.vmdk
With the option enabled in /etc/opt/novell/nss/nlvm.conf:
Blk_discard_on_delete on
After the NSS pool created on the thin device is deleted, the ESX datastore usage returns to its original size, as described in the steps above:
[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk
2.0M oes_8-flat.vmdk (blocks are discarded when the pool is deleted, and the space is returned to the datastore)
How NSS Thin Provisioning Helps Customers
Implementing thin provisioning at customer sites can recover stranded capacity by improving storage utilization rates.
I hope this gives a basic understanding of thick and thin provisioning, of the NSS thin provisioning deployment scenarios that address customers' stranded storage issues, and of the advantages thin provisioning has over thick provisioning: by implementing the OES NSS thin provisioning feature, customers can make their storage far more efficient.
Thanks!