
What is Thin Provisioning? How does it help a customer?


Today’s Traditional Storage Problems:

Traditional thick provisioning introduces a lot of storage waste: our customers may have 30% to 50% of their allocated space sitting unused. For every 10 TB purchased, 3 to 5 TB are left idle because of inefficient provisioning. As a result, customers spend more on additional disks and systems to keep the business running, because they cannot reclaim their stranded storage. Thin provisioning overcomes these problems and improves storage utilization for our customers.

Provisioning Types:

[Image: Thick Provisioning vs Thin Provisioning]

Thick Provisioning:

  • A thick-provisioned disk is one for which the entire disk space is pre-allocated before use.
  • Even if no data is written to the disk, the occupied physical storage cannot be used for anything else.
  • This means that a 10GB NSS pool created on a thick-provisioned disk with 1GB of files copied to it will still occupy 10GB of physical disk space. (A sketch of creating such a disk on the ESXi host follows this list.)
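
For reference, one way such a thick virtual disk could be created on the ESXi host is with vmkfstools. This is only a hypothetical sketch: the datastore path and file name below mirror this article's examples, the disks in the scenarios may just as well have been added through the vSphere client, and the new disk still has to be attached to the VM afterwards.

[root@ESXHOST:/vmfs/volumes/datastore2/oes] vmkfstools -c 10G -d eagerzeroedthick oes_7.vmdk          (pre-allocates the entire 10G on the datastore up front)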

Deployment Scenario 1:

  • Assign one 10GB thick-provisioned disk to the OES VM guest.
  • Verify that the added device is thick provisioned:

[root@OESVMGUEST: nlvm list devices more

Name   Size      Used      Free   Format  Shared  RAID  Sync  Maj:Min  SectorSize(logical/physical)  Provisioning

sdg    10.00GB   9.99GB    0KB    GPT     No      No          8:96     512b/512b                     Thick

  • The following shows the mapped blocks after the 10GB thick device is initialized:

[root@OESVMGUEST : sg_get_lba_status /dev/sdi

descriptor LBA: 0x0000000000000000  blocks: 20971520  mapped (or unknown)

  • ESX Datastore size initially:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_7-flat.vmdk

10.0G   oes_7-flat.vmdk                           (Entire 10G space is pre-allocated before use)

  • Creating an NSS pool of size 10GB on the thick device:

[root@OESVMGUEST :

/dev/pool/THICK_POOL     nsspool    10G   73M   10G   1% /opt/novell/nss/mnt/.pools/THICK_POOL

THICK_VOL                             nssvol     10G  592K   10G   1% /media/nss/THICK_VOL

[root@OESVMGUEST : sg_get_lba_status /dev/sdi

descriptor LBA: 0x0000000000000000  blocks: 20971520  mapped (or unknown)

  • Copying 1GB of files into the NSS volume THICK_VOL:

[root@OESVMGUEST :

/dev/pool/THICK_POOL     nsspool    10G  1.1G  9.0G  11% /opt/novell/nss/mnt/.pools/THICK_POOL

THICK_VOL                             nssvol     10G  978M  9.0G  10% /media/nss/THICK_VOL

  • ESX Datastore size:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_7-flat.vmdk

10.0G   oes_7-flat.vmdk

It can be observed that even though only 1GB of files was copied into the NSS volume created on the thick device, the datastore still shows 10GB of physical disk space used.

  • Executing fstrim on the NSS pool created on the thick device:

[root@OESVMGUEST:  fstrim -v /opt/novell/nss/mnt/.pools/THICK_POOL

fstrim: /opt/novell/nss/mnt/.pools/THICK_POOL: the discard operation is not supported

Since fstrim cannot trim the unused blocks of a pool created on a thick device, that device space remains wasted. In practice, a large share of the allocated space is never actually used, and this is where thin provisioning comes in.

Thin Provisioning:

  • A thin-provisioned disk or LUN is one for which storage is not reserved in advance.
  • Instead, storage is allocated dynamically as data is written to the disk.
  • Free space is released back to the storage system when data on the disk or LUN is deleted.
  • This means that a new 10GB NSS pool created on a thin-provisioned disk with 1GB of files copied to it will occupy only 1GB of physical disk space; the remaining 9GB stays free for creating new NSS pools. (A sketch of creating a thin disk on the ESXi host follows this list.)
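
For comparison with the thick example above, a thin virtual disk could be created on the ESXi host with vmkfstools as well. Again, this is only a sketch with an example path and file name, and the disk must still be attached to the VM.

[root@ESXHOST:/vmfs/volumes/datastore2/oes] vmkfstools -c 10G -d thin oes_8.vmdk          (datastore blocks are allocated only as the guest writes data)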

Deployment Scenario 2:

  • Assign one 10GB thin-provisioned disk to the OES VM guest.
  • Verify that the added device is thin provisioned (an additional guest-side check is sketched after the output below):

[root@OESVMGUEST: nlvm list devices more

Name   Size      Used      Free   Format  Shared  RAID  Sync  Maj:Min  SectorSize(logical/physical)  Provisioning

sdh    10.00GB   9.99GB    0KB    GPT     No      No          8:112    512b/512b                     Thin
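
Besides nlvm, a quick guest-side sanity check (illustrative, not part of the original scenario) is to look at the device's discard attributes, since discard/unmap support is what fstrim relies on:

[root@OESVMGUEST: lsblk -D /dev/sdh          (non-zero DISC-GRAN and DISC-MAX values mean the device accepts discard requests)

[root@OESVMGUEST: cat /sys/block/sdh/queue/discard_max_bytes          (a value greater than 0 likewise indicates discard support)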

  • ESX Datastore size initially (execute this command on the ESX host that hosts the OES VM guest):

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

0             oes_8-flat.vmdk

After the disk is initialized on the OES VM guest:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

2.0M      oes_8-flat.vmdk                  (2M is allocated as data is written to thin disk)
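
A quick, illustrative way to see the difference between the provisioned size and the space actually allocated on the datastore is to compare ls and du on the flat file (exact output will vary):

[root@ESXHOST:/vmfs/volumes/datastore2/oes] ls -lh oes_8-flat.vmdk          (reports the full 10G provisioned size)

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk          (reports only the blocks actually allocated so far)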

  • Creating an NSS pool of size 10GB on the thin device:

[root@OESVMGUEST:

/dev/pool/THIN_POOL      nsspool    10G   73M   10G   1% /opt/novell/nss/mnt/.pools/THIN_POOL

THIN_VOL                              nssvol     10G  592K   10G   1% /media/nss/THIN_VOL

ESX Datastore size:

 [root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

8.0M      oes_8-flat.vmdk                                      (8.0M is allocated as data is written to thin disk)

  • Copying 1GB of files into the NSS volume THIN_VOL:

[root@OESVMGUEST:

/dev/pool/THIN_POOL      nsspool    10G  1.1G  9.0G  11% /opt/novell/nss/mnt/.pools/THIN_POOL

THIN_VOL                              nssvol     10G  978M  9.0G  10% /media/nss/THIN_VOL

  • ESX Datastore size increases to ~1GB after the 1GB of files is copied:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

984.0M            oes_8-flat.vmdk

It can be observed that after copying 1GB of files into the NSS volume created on the thin device, the datastore occupies only 984M of physical disk space. The rest of the space can still be used to create new pools.

  • Delete the file file1 from the NSS volume:

[root@OESVMGUEST: rm -rf /media/nss/THIN_VOL/file1

  • Executing fstrim on the NSS pool created on the thin device:

[root@OESVMGUEST: fstrim -v /opt/novell/nss/mnt/.pools/THIN_POOL

/opt/novell/nss/mnt/.pools/THIN_POOL: 976.8 MiB (1024278528 bytes) trimmed

ESX Datastore size:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

8.0M            oes_8-flat.vmdk                       (Free space is released back to storage system)

  • The following shows how blocks are mapped and unmapped dynamically as files are copied into the NSS volume and then trimmed with fstrim after those files are deleted:

[root@OESVMGUEST: sg_get_lba_status /dev/sdh -m 1000

descriptor LBA: 0x0000000000000000  blocks: 4096  mapped (or unknown)

descriptor LBA: 0x0000000000001000  blocks: 30720  deallocated

descriptor LBA: 0x0000000000008800  blocks: 395264  mapped (or unknown)

descriptor LBA: 0x0000000000069000  blocks: 583680  deallocated

descriptor LBA: 0x00000000000f7800  blocks: 1607680  mapped (or unknown)

descriptor LBA: 0x0000000000280000  blocks: 7862272  deallocated

descriptor LBA: 0x00000000009ff800  blocks: 2048  mapped (or unknown)

descriptor LBA: 0x0000000000a00000  blocks: 10479616  deallocated

descriptor LBA: 0x00000000013fe800  blocks: 6144  mapped (or unknown)

descriptor LBA: 0x0000000001400000  blocks: 0  mapped (or unknown)

Since fstrim can trim the unused blocks of an NSS pool created on a thin device, deleting file1 and running fstrim on the pool returned the occupied ~1GB (976MB) of storage to the datastore, and the datastore usage dropped back to 8.0M as shown in the step above. The reclaimed space can be used to create a fresh pool on the thin device. Thin provisioning thus offers a new approach to storage provisioning, one that eliminates wasted capacity.
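
Because freed blocks are returned to the datastore only when fstrim runs, it can be worth trimming on a schedule. A minimal sketch, assuming a weekly trim is acceptable and reusing the pool path from the scenario above, is a root crontab entry such as:

0 2 * * 0   /usr/sbin/fstrim -v /opt/novell/nss/mnt/.pools/THIN_POOL          (runs every Sunday at 2:00 AM; adjust the schedule and path to your environment)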

OES NSS Support for Discarding Unused Blocks:

Beginning with OES 2018 SP2, OES supports thin provisioning on a storage device, which allows you to discard (or trim) blocks that are not in use by the pool. The fstrim command trims the unused blocks in an NSS pool; only purgeable blocks are discarded. fstrim is supported only on thin-provisioned SCSI devices with VMware ESXi on a linear target.

1. Using the fstrim command to trim unused blocks:

Running fstrim on NSS pools created on thick-provisioned and thin-provisioned devices:

# fstrim -v /opt/novell/nss/mnt/.pools/THICKPOOL

fstrim: /opt/novell/nss/mnt/.pools/THICKPOOL: the discard operation is not supported

# fstrim -v /opt/novell/nss/mnt/.pools/THINPOOL

/opt/novell/nss/mnt/.pools/THINPOOL: 98.2 MiB (103006208 bytes) trimmed
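
If a server hosts several pools, a small loop (a sketch, assuming the default pool mount location shown above) trims every pool and simply reports the ones on thick devices where discard is not supported:

for pool in /opt/novell/nss/mnt/.pools/*; do
    fstrim -v "$pool" || echo "Skipped $pool (discard not supported)"
done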

2. NLVM fstrim Plug-in:

Trims the unused blocks in the NSS pool named THINPOOL and then displays the trimmed size in bytes:

# nlvm fstrim pool thinpool         (by default, nlvm trims unused blocks synchronously)

THINPOOL: 98.00MB (102760448 bytes) trimmed

Displays the fstrim state, last executed time, and last trimmed size on the specified pool:

# nlvm fstrim status thinpool

Status: Not running

Last execution time: Mon Nov 25 21:29:09 2019

Last trimmed size: 98.00MB (102760448 bytes)

3. NSSMU fstrim Plug-in:

A new fstrim option (T) is added in NSSMU to trim the unused blocks on the selected NSS pool. Executing this option runs the fstrim process in the background. For more information on fstrim, see /var/log/messages.

Go to the pool information page to view the fstrim state, last executed time, and last trimmed size on the selected pool. The fstrim status plug-in is supported only for thin-provisioned NSS pools.
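
To check from the command line what the background fstrim run logged, a simple illustrative way is:

# grep -i fstrim /var/log/messages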

4. iManager fstrim Plug-in: 

A new fstrim option is added in the iManager Storage Pool plug-in to trim the unused blocks on the selected NSS pool. Executing this option runs the fstrim process in the background. For more information on fstrim, see /var/log/messages.

Go to the pool information page to view the fstrim state, last executed time, and last trimmed size on the selected pool. The fstrim status plug-in is supported only for thin-provisioned NSS pools.

5. Blk_discard_on_delete switch in nlvm.conf file:

A new option, "Blk_discard_on_delete on", is added in the /etc/opt/novell/nss/nlvm.conf file to discard the blocks on a thin-provisioned storage disk when you delete a pool created on that thin-provisioned disk.

# Blk_discard_on_delete will discard blocks on a thin provisioned disk

# when a partition on it is deleted.

# Blk_discard_on_delete is off. Uncomment the command to turn it on.

# Blk_discard_on_delete  on
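
One way to enable the option non-interactively is sketched below; editing the file in a text editor works just as well, so verify the resulting line in nlvm.conf afterwards.

# sed -i 's/^#[[:space:]]*Blk_discard_on_delete[[:space:]]*on[[:space:]]*$/Blk_discard_on_delete  on/' /etc/opt/novell/nss/nlvm.conf          (uncomments only the "Blk_discard_on_delete  on" line)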

 Deployment Scenario using Blk_discard_on_delete:

  • Assign one 10GB thin-provisioned disk to the OES VM guest:

[root@OESVMGUEST : nlvm list devices more

Name   Size      Used      Free   Format  Shared  RAID  Sync  Maj:Min  SectorSize(logical/physical)  Provisioning

sdh    10.00GB   9.99GB    0KB    GPT     No      No          8:112    512b/512b                     Thin

  • ESX Datastore size initially:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

0                  oes_8-flat.vmdk

After the disk is initialized:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

2.0M            oes_8-flat.vmdk

  • Creating an NSS pool of size 10GB on the thin device:

[root@OESVMGUEST :

/dev/pool/THIN_POOL      nsspool    10G   73M   10G   1% /opt/novell/nss/mnt/.pools/THIN_POOL

THIN_VOL                              nssvol     10G  592K   10G   1% /media/nss/THIN_VOL

After the NSS pool is created on the thin device:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

8.0M            oes_8-flat.vmdk

  • Copying 1GB of files into the NSS volume THIN_VOL:

[root@OESVMGUEST :

/dev/pool/THIN_POOL      nsspool    10G  1.1G  9.0G  11% /opt/novell/nss/mnt/.pools/THIN_POOL

THIN_VOL                              nssvol     10G  978M  9.0G  10% /media/nss/THIN_VOL

  • ESX Datastore size increases to ~1GB after the 1GB of files is copied:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

984.0M            oes_8-flat.vmdk

  • Uncomment "Blk_discard_on_delete on" in the /etc/opt/novell/nss/nlvm.conf file:

 [root@OESVMGUEST :

   Blk_discard_on_delete       on

  • Deleting NSS pools (using NSSMU) that are no longer needed trims the unused blocks automatically.

After the NSS pool created on the thin device is deleted, the ESX datastore usage returns to its original size, as described in the steps above:

[root@ESXHOST:/vmfs/volumes/datastore2/oes] du -h oes_8-flat.vmdk

2.0M    oes_8-flat.vmdk

 

How NSS Thin Provisioning helps Customers…

Implementing thin provisioning at customer sites reclaims stranded capacity by improving storage utilization rates:

  • Traditional thick provisioning results in poor storage utilization and wastes money, since customers end up buying more storage to keep the system and the business running, as described in Deployment Scenario 1.
  • Thin provisioning allows storage to be provisioned but not consumed until data is written, which helps customers save storage and use the unconsumed space to create other NSS pools, as described in Deployment Scenario 2.
  • Thin provisioning improves storage utilization for our customers.
  • It addresses situations where a customer states they need a lot of disk space for their VM but ends up using only a portion of it.

I hope this gives a basic understanding of thick and thin provisioning, of the NSS thin-provisioning deployment scenarios that address customer stranded-storage issues, of the advantages thin provisioning has over thick provisioning, and of how customers can use their storage more efficiently after implementing the OES NSS thin-provisioning feature.

Thanks!

Labels:

How To-Best Practice
New Release-Feature
Support Tips/Knowledge Docs
Support Tip
Comment List
  • Thank you very much for sharing this information on the new fstrim feature. Am I correct that this will work on VMware storage only currently with OES2018 SP2? Are there plans to extend this feature to more storage providers in the future? Any updates on OES2018 SP3, for example?

  • Hi KonecnyA, kboyle and girishks,

    Yes,  please do consider it as "deleted but not purged".

    "If a file is deleted but not purged, the file blocks are considered to be in use by NSS, hence they are not discarded".

     ,  yes if salvage is enabled on volume, thin provisioning may not provide benefits until files are purged or if auto-purge happens on reaching the pool watermark.

    Thanks,

    Acshai.

  • "did we mean salvagable (vs salvaged/undeleted)? i.e. deleted but not yet purged?"

    Correct. The blocks representing deleted files but not yet purged are NOT discarded.

     

     fstrim would release the storage for deleted files that are already purged on a volume having salvage enabled.

     

    Thank you   and   for the feedback. We will review and include your inputs in the subsequent releases.

  • "If a file is salvaged but not purged"

    did we mean salvagable (vs salvaged/undeleted)? i.e. deleted but not yet purged?

    If we could routinely purge based on length of time since deleted (eg, things over 42 days or so)  then this might really shine. But I haven't seen that ability since NetWare days (set because guys setting up SolarWinds couldn't get the difference between free space and used space sorted out).

    I can see that perhaps a script that uses
      nss /LogicalVolumePurgeDelay=value
    setting it to a month or so, run a purge, then back to the default of 4 days or such as appropriate.     hmmm,  something to play with.

     

    ________________________

    Andy of KonecnyConsulting.ca in Toronto
    Please use the "Like" and/or "Verified Answers" as appropriate as that helps us all.

  •  Thank you for that clarification.

    If I understand you correctly, volumes having salvage enabled will continue to consume storage until all available storage has been exhausted so, in this situation, thin provisioning may not provide all the benefits one might expect...

    __________
    Kevin Boyle, SuperUser

    Calgary, Alberta, Canada

  • Hi KBoyle,

         Thanks for your comment.

    The fstrim command allows you to trim the unused blocks in the NSS pool; only purgeable blocks are discarded. If a file is salvaged but not purged, the file blocks are considered to be in use by NSS, hence they are not discarded.

     

     

  • This is a great explanation of how thin provisioning works but how does it impact one's ability to salvage files?

    As files are added, then deleted, the amount of storage consumed will continue to increase as the deleted files still occupy storage in case they need to be salvaged. Over time the amount of storage consumed will increase until it reaches the maximum available to the volume or until the storage assigned to the pool is exhausted.

    I assume at this point that new files will be allocated storage currently occupied by deleted files as is done when thick provisioning is used but if some storage needs to be reclaimed and fstrim is used will that not free all storage not currently allocated such that it will be impossible to salvage any deleted file?

    __________
    Kevin Boyle, SuperUser

    Calgary, Alberta, Canada
