
Key benefits of using the Rapid Recovery Agent vs. agentless backup

Agent-based backups

The backup driver resides at the operating system kernel level.

Agent-based backup offers a range of advantages:

1. Application awareness — Built for specific applications, agents enable very granular recovery.

2. Security — Once installed, the agent has access to the operating system and application. There’s no need to
    store privileged username/password information on a potentially insecure backup server.

3. Breadth — Agents can protect both physical and virtual machines. Great for legacy workloads.

4. Reliability — With no single point of failure, an agent continues to run even if a backup management console
    goes offline.

5. Local processing — Agents can boost backup performance by pre-processing and compressing data locally (see the sketch below).
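
To make point 5 concrete, here is a minimal, generic sketch of the kind of local pre-processing an agent might perform: reading data in fixed-size blocks, skipping blocks already stored (deduplication by hash), and compressing the rest before transfer. The block size and the use of SHA-256/zlib are assumptions for illustration, not a description of Rapid Recovery's actual agent.

```python
# Illustrative only: generic agent-side pre-processing, not Rapid Recovery's actual code.
# Reads a file in fixed-size blocks, skips blocks already seen (dedup by SHA-256 hash),
# and compresses the remaining blocks locally before they would be sent to the backup core.
import hashlib
import zlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks; an assumed value for illustration


def preprocess(path, known_hashes):
    """Yield (block_hash, compressed_payload) for blocks not already in the repository."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest in known_hashes:
                continue  # block already stored; nothing to transfer
            known_hashes.add(digest)
            yield digest, zlib.compress(block)


if __name__ == "__main__":
    seen = set()
    payloads = list(preprocess(__file__, seen))
    print(f"{len(payloads)} unique compressed blocks would be transferred")
```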

 

Agentless backups

Agentless backups rely on an external system to manage and perform the backup operations for each source
system. In theory, all processing, storage, replication, and other functions happen on, or are managed by, the host
or the central backup controller.

1. Leading-edge capability — Agentless backup is built from the ground up for virtual environments such as VMware.

2. Simplicity — Agentless backup doesn’t require installing agents across dozens or hundreds of virtual machines. It also
    doesn’t require rebooting each host after agent installation.

3. Affordability — There’s no agent fee for each virtual machine.

4. Performance — Though the host still performs the processing, the individual virtual machines incur less CPU, memory, and I/O impact.
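
The control flow described above can be sketched roughly as follows: a central controller enumerates the virtual machines, takes a hypervisor snapshot of each, reads the changed blocks from the snapshot, and then removes it, with no agent installed in any guest. The HypervisorClient class below is a hypothetical stand-in, not a real VMware or Hyper-V API; all names and return values are assumptions for illustration.

```python
# Illustrative only: the control flow of an agentless, snapshot-based backup.
# HypervisorClient is a hypothetical placeholder, not a real VMware/Hyper-V API.
from dataclasses import dataclass


@dataclass
class HypervisorClient:
    host: str

    def list_vms(self):
        return ["web01", "sql01", "file01"]   # stand-in inventory

    def create_snapshot(self, vm, quiesce=True):
        print(f"snapshot of {vm} (quiesce={quiesce})")
        return f"{vm}-snap"

    def read_changed_blocks(self, snapshot):
        return [b"\x00" * 4096]               # stand-in changed-block data

    def delete_snapshot(self, snapshot):
        print(f"removed {snapshot}")


def backup_all(client, repository):
    """Central controller: no agent install or reboot on any guest."""
    for vm in client.list_vms():
        snap = client.create_snapshot(vm)
        try:
            for block in client.read_changed_blocks(snap):
                repository.append((vm, block))
        finally:
            client.delete_snapshot(snap)


if __name__ == "__main__":
    repo = []
    backup_all(HypervisorClient(host="esxi01.example.com"), repo)
    print(f"{len(repo)} blocks written to repository")
```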

 

  • I would like to add to the conversation.
    In order to use the Live Recovery feature as intended, it is important to invest in the right type of hardware for the Core server, the machine to be restored, and the networking components.

    Live Recovery performance depends first and foremost on the IOPS available on both the Core server and the machine to be restored. Networking comes second; memory and CPU hold third place.

    Restoring data from a repository containing many incrementals requires sustained random reads (and writes to the target). Please note that these I/O operations are performed with 8 KB blocks, while Windows normally works with 64 KB blocks, so the storage system needs to be able to compensate for the marginal loss of performance (a rough sizing example follows at the end of this reply).

    If the Core server is performing additional operations in parallel, such as backing up other machines, running rollups, or performing mountability and attachability checks, the amount of processed data may push the storage system close to its limits. If additional user data is flowing as well (the expected result of Live Recovery), I/O response latency will increase.

    Over the years I have performed a few restores using the Live Recovery feature. It worked great in relatively small environments with SQL or Exchange databases of a few GB.

    In one case, when a very large environment had to be recovered by restoring a high-load SQL server with 38 volumes, the response was so slow that the users could not really take advantage of the Live Recovery feature (imagine a graphics-heavy website over a slow cellular connection; how long until you give up?).

    On the positive side, the customer took note and soon afterwards upgraded the Core hardware while also migrating the SQL server to a new machine. We performed some recovery tests on the now-decommissioned SQL server, simulating a high load, and Live Recovery yielded great, near-normal performance results.

    In conclusion, I would like to stress that it makes good sense to plan your disaster recovery strategy thoroughly and test a few likely scenarios.

    Rapid Recovery is an extremely versatile tool, and even a "one size fits all" strategy will work if time is not a pressing constraint. If Live Recovery is a must, you need to size your backup environment properly, ideally well before the deployment phase, or, if your environment is already active, after identifying the likely bottlenecks and doing an appropriate cost/benefit analysis.
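
    To put the block-size point above into rough numbers, here is a back-of-the-envelope sizing calculation. The 200 MB/s target restore rate is an assumed figure for illustration only; the point is that sustaining the same throughput with 8 KB I/O needs roughly eight times the IOPS that 64 KB I/O would.

    ```python
    # Back-of-the-envelope sizing: IOPS needed to sustain a target restore rate.
    # The throughput figure is an assumption for illustration; the block sizes
    # (64 KB typical Windows I/O vs. 8 KB repository I/O) come from the note above.
    TARGET_MB_PER_S = 200          # desired live-recovery restore rate (assumed)

    for block_kb in (64, 8):
        iops = TARGET_MB_PER_S * 1024 / block_kb
        print(f"{block_kb:>2} KB blocks -> ~{iops:,.0f} IOPS to sustain {TARGET_MB_PER_S} MB/s")

    # Output:
    #  64 KB blocks -> ~3,200 IOPS to sustain 200 MB/s
    #   8 KB blocks -> ~25,600 IOPS to sustain 200 MB/s
    ```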