
The Importance of Intelligent Backup Integration Agents


There is an ongoing discussion about agent vs. agent-less backup integrations. Some think agent-less is simpler and therefore “better”. Let’s think again …

First of all, there will always be an agent somewhere. “Agent-less” usually refers to the fact that there is no locally installed agent on the backup client, but the functionality still needs to be provided somehow. Read also the Data Protector datasheet.

Agent-less backup approaches do have some advantages: since you don’t need to deploy an agent on every client, you only need to maintain the software in a central location. They are also slim enough to address cloud workload protection, since you won’t usually deploy third-party software into a container.

Agent-based approaches have the major advantage of bringing local intelligence to a large variety of data types hosted in file systems and applications. For example, some file systems or applications support clustering and/or replication. An agent can “sense” this behaviour and act appropriately, for example by redirecting backup/restore I/O to a cluster partner without any need for reconfiguration. And once a fail-back has happened, you would expect backup/recovery to go back to normal, right?
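
To illustrate the idea (a minimal, hypothetical sketch in Python, not Data Protector code), cluster awareness essentially means resolving which node currently owns the data before any backup or restore I/O is moved:

    from dataclasses import dataclass

    @dataclass
    class ClusterNode:
        name: str
        is_active: bool  # a real agent would probe this, not store it

    def pick_backup_target(nodes):
        """Return the node that currently holds the active role."""
        for node in nodes:
            if node.is_active:
                return node
        raise RuntimeError("no active cluster node reachable")

    # Before failover: node-a is active, so backup I/O goes there.
    nodes = [ClusterNode("node-a", True), ClusterNode("node-b", False)]
    print(pick_backup_target(nodes).name)  # -> node-a

    # After failover the roles flip; the same call now redirects I/O
    # to node-b without any reconfiguration of the backup job.
    nodes = [ClusterNode("node-a", False), ClusterNode("node-b", True)]
    print(pick_backup_target(nodes).name)  # -> node-b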

Some applications offer primary and secondary replica copies. Local agents can make use of that by backing up from the secondary copy, which doesn’t put load on the primary during backup. This requires agents to be installed on all cluster/replica members, sometimes in a so-called cluster-aware mode.
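
The same idea applies to replica selection. Here is a small, made-up sketch of how an agent could prefer a healthy secondary as the backup source and only fall back to the primary if it has to:

    def pick_backup_source(replicas):
        """replicas: list of dicts with 'host', 'role' and 'healthy' keys."""
        secondaries = [r for r in replicas if r["role"] == "secondary" and r["healthy"]]
        if secondaries:
            return secondaries[0]  # offload backup I/O to a secondary copy
        return next(r for r in replicas if r["role"] == "primary")  # last resort

    replicas = [
        {"host": "db-1", "role": "primary", "healthy": True},
        {"host": "db-2", "role": "secondary", "healthy": True},
    ]
    print(pick_backup_source(replicas)["host"])  # -> db-2, the primary stays unloaded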

Agent-less approaches are usually snapshot driven, since enterprise-grade backup device and media management is missing. But, as we all know, snapshots are not independent copies of your original data. Therefore, a snapshot is not a backup! If the original is lost, so are the snapshots. As a side effect, every snapshot lowers your storage performance and takes capacity away. As a result, you may have to sacrifice the number of restore points.
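
A quick back-of-the-envelope calculation (all figures are made up for illustration) shows how snapshot overhead eats into the number of restore points you can keep on the array:

    volume_gb = 2000           # size of the protected volume
    daily_change_rate = 0.05   # 5% of the blocks change per day
    snapshot_reserve_gb = 400  # capacity the array reserves for snapshots

    gb_per_snapshot = volume_gb * daily_change_rate  # ~100 GB per daily snapshot
    max_restore_points = int(snapshot_reserve_gb // gb_per_snapshot)

    print(f"each daily snapshot consumes ~{gb_per_snapshot:.0f} GB")
    print(f"the reserve holds at most {max_restore_points} restore points")
    # With these numbers only 4 daily restore points fit on the array,
    # and if the array is lost, all of them are lost with it.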

And let’s not underestimate restore features: they allow a restore to be redirected to a compatible backup client. This could be because the original client is still under repair, or because you want to test/scan the data in a clean-room or sandbox environment before moving it back into production. RPO rules and other business processes may dictate that you need to restore an app to a certain point in time or backup log sequence number, not just “last night”.
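
Conceptually, point-in-time selection looks like this (an illustrative Python sketch with invented timestamps and log sequence numbers, not an actual restore procedure): pick the newest backup taken at or before the requested time, then replay logs up to the target:

    from datetime import datetime

    backups = [  # (backup time, log sequence number at backup end)
        (datetime(2024, 5, 1, 22, 0), 1800),
        (datetime(2024, 5, 2, 22, 0), 2400),
        (datetime(2024, 5, 3, 22, 0), 3100),
    ]

    def pick_restore_point(target_time):
        candidates = [b for b in backups if b[0] <= target_time]
        if not candidates:
            raise ValueError("no backup exists before the requested time")
        return max(candidates, key=lambda b: b[0])

    # Restore to 14:30 on May 3rd, not just "last night":
    base_time, base_lsn = pick_restore_point(datetime(2024, 5, 3, 14, 30))
    print(f"restore the backup from {base_time}, then replay logs after LSN {base_lsn}")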

Another way of completely removing backup I/O from a system is Zero Downtime Backup (ZDB), an integrated combination of file system and application backup with storage system mirroring and replication technology. DP agents handle files and apps (engage backup mode) and then create a data-consistent snapshot. Files and apps are brought out of backup mode immediately, removing any stress from the backup client. The backup itself is then done by presenting the snapshot to a dedicated backup server, which takes over the entire I/O burden. Watch our on-demand webinar 'What to Look for in the Latest Backup Technologies' to learn more!
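
The sequence matters more than the individual steps. The following sketch uses stub objects and made-up method names (none of them are Data Protector APIs) to show the ordering: the application is in backup mode only for the seconds it takes to create the snapshot, and the heavy data transfer happens on the backup host:

    class Stub:
        def __init__(self, name):
            self.name = name
        def __getattr__(self, action):
            return lambda *args, **kwargs: print(f"{self.name}: {action}")

    app, array, backup_host = Stub("app"), Stub("array"), Stub("backup-host")

    app.enter_backup_mode()        # quiesce files/application
    array.create_snapshot()        # data-consistent snapshot (seconds)
    app.exit_backup_mode()         # production load is back to normal here

    array.present_snapshot()       # mount the snapshot on the backup host
    backup_host.backup_to_media()  # all backup I/O is carried by the backup host
    array.unpresent_snapshot()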

Data Protector agents also allow backup/restore streams to run directly from the backup client to a backup device, for instance a dedupe appliance or a tape library (yes, tape! It gives you true air-gapping and the best single-stream performance … read also '10 Reasons Why Tape Is Still Alive and Kicking for Data Backup and Restore'). With this principle you can avoid LAN backups as well, since you may not know the LAN’s current bandwidth or whether it is still secure.

You need to redirect a backup/restore via another route for some reason? No problem, since any DP Media Agent can serve as a data mover. This is extremely useful if one or many systems are down and the original data route doesn’t exist for a while.
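
As a rough illustration (host names and the reachability check are invented), rerouting simply means walking a preference list of data movers and taking the first one that still responds:

    def pick_data_mover(preferred, reachable):
        """preferred: ordered list of media agent hosts; reachable: hosts that respond."""
        for host in preferred:
            if host in reachable:
                return host
        raise RuntimeError("no media agent reachable")

    preferred = ["ma-datacenter-1", "ma-datacenter-2", "ma-dr-site"]
    print(pick_data_mover(preferred, reachable={"ma-dr-site"}))  # -> ma-dr-site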

Oh, you want to move data into the cloud or within a cloud? Data Protector makes use of AWS S3 (and compatible) as well as Azure Blob storage. So how about using these as primary backup targets and having the choice of creating a copy either in another cloud or on an on-premises backup device, allowing for local control and restorability?
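
To give a feel for what an S3 target looks like at the API level, here is a plain boto3 upload (illustrative only, with made-up bucket and file names; Data Protector’s own cloud device integration handles this for you):

    import boto3

    s3 = boto3.client("s3")  # credentials come from the usual AWS config/environment

    # Write a backup image to the primary cloud target ...
    s3.upload_file("backup_2024_05_03.img", "my-backup-bucket",
                   "dp/backup_2024_05_03.img")

    # ... a second copy could later go to another cloud or to an
    # on-premises device for local control and restorability.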

Most agent-less approaches don’t provide much backup/restore granularity and flexibility, but these features can be lifesaving in a disaster situation. It is less about “can you protect this app” and more about “how well can you protect this app”. Now, are agent-less approaches bad? No, not at all. A good data protection strategy simply needs the capability to combine agent-based and agent-less approaches where needed. The right mixture is key, and restore capabilities should be the driving force for the decision.

Summary

  • Use an agent-based approach for mission-critical data.
  • Go with restore capabilities first, adapt backup accordingly.
  • Use an agent-less approach when an agent cannot be deployed.
  • Check which enterprise-grade features you need.

Resources

General Data Protector information, landing page:

Enterprise Data Backup Software: Data Protector | OpenText (microfocus.com)

Data Protector Practitioner Portal, includes support matrixes:

Home - Data Protector (microfocus.com)


Be sure to connect with OpenText on LinkedIn.

Keep up with the latest Tips & Info about Data Protector. 

Do you have an idea or Product Enhancement Request about Data Protector? Submit it in the Data Protector Idea Exchange. 

We’d love to hear your thoughts on this blog. Comment below.

The Online Community Team
