
Is there a best practice on how archiving to cloud storage should be configured?

Is there any kind of best practice on how Dell envisages archive-to-cloud being used? We are struggling to get our heads around how it is designed to work. Archiving works fine for creating point-in-time snapshots of entire recovery chains to HDD for archival, but that doesn't work so well when your storage is in the cloud, as pushing terabytes of data around from the Core isn't practical. On the other hand, if you enable archive to the cloud incrementally, you end up with every single recovery point saved, as far as I can work out. What is the best way to do the following? (Let's say, as an example, a 1 TB machine with 10 GB per day of disk writes, with say 75% hitting the same files repeatedly several times a day over a week, so the rollup removes the surplus intermediate points.)

The RPO I want looks like this:

  • 1 recovery point per hour
  • after a week roll up to daily
  • after 3 months roll up to weekly
  • after 6 months roll up to monthly
  • ^^^ this bit with the rollups works fine ^^^
  • then archive the monthly snapshot to the cloud for cheap long term retention
  • and remove oldest monthly snapshot from expensive local storage

I don't want every hourly recovery point going into the cloud forevermore; that would be prohibitively expensive. I only want the monthly snapshots once they are 6 months old. I also don't want to upload a new full 1 TB base image per month if it can be helped, but as far as I can tell that is what will need to happen: having removed the oldest snapshot, it will roll up into a new base image at the oldest point.
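For what it's worth, the tiered rollup I'm describing can be sketched in a few lines of Python. This is purely an illustration of the retention math (which point survives each tier), not how Rapid Recovery actually implements rollup:

```python
from datetime import datetime, timedelta

def rollup(points, now):
    """Return the recovery points that survive the tiered retention above:
    hourly for a week, daily to 3 months, weekly to 6 months, monthly beyond.
    Illustrative only -- not Rapid Recovery's actual rollup engine."""
    keep, seen = set(), set()
    for p in sorted(points, reverse=True):          # newest first in each bucket
        age = now - p
        if age <= timedelta(days=7):
            keep.add(p)                              # hourly tier: keep everything
        elif age <= timedelta(days=90):
            bucket = ("day", p.date())               # one point per calendar day
            if bucket not in seen:
                seen.add(bucket)
                keep.add(p)
        elif age <= timedelta(days=180):
            bucket = ("week", p.isocalendar()[:2])   # one point per ISO week
            if bucket not in seen:
                seen.add(bucket)
                keep.add(p)
        else:
            bucket = ("month", (p.year, p.month))    # one point per month
            if bucket not in seen:
                seen.add(bucket)
                keep.add(p)
    return sorted(keep)
```

Running this over, say, 10 days of hourly snapshots keeps every point from the last week and collapses the older ones to one per day.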

  • Hi Chris, I checked with one of our senior engineers on this:

    We have the perfect recommendation for sending some data to the cloud but not all of it: the Rapid Recovery Replication Target VM in Azure. You can send the data you want to the cloud via replication and customize the retention policy. Then you can archive directly from the VM; as it is Azure-to-Azure data transfer, it is faster and free. A proposed solution based on your RPO would be as follows:

    • On the local Core:
    o 1 recovery point per hour
    o after a week roll up to daily
    o after 3 months roll up to weekly

    • On both the local Core and on the replication target:
    o after 6 months roll up to monthly
    o replicate this to the Azure target core by setting the retention policy accordingly

    • On the Azure Replication target set the archive settings and retention policy to match:
    o then archive the monthly snapshot to the cloud for cheap long term retention
    o and remove oldest monthly snapshot from expensive local storage
    What this does is give you the retention you are looking for, plus redundancy, coupled with the cost benefit of being able to select a small instance size in Azure.
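The split between the two Cores can be sketched as follows. These function names and tier boundaries are illustrative of the proposal above, not actual Rapid Recovery settings:

```python
from datetime import timedelta

def local_tier(age):
    """Granularity the local Core retains for a point of this age,
    per the tiers proposed above (a sketch, not RR configuration)."""
    if age <= timedelta(days=7):
        return "hourly"
    if age <= timedelta(days=90):
        return "daily"
    if age <= timedelta(days=180):
        return "weekly"
    return "monthly"   # kept until archived via the Azure target, then pruned locally

def azure_target_tier(age):
    """The replication target only needs the long tail: monthly points
    past six months, which it then archives to cheap cloud storage."""
    return "monthly" if age > timedelta(days=180) else None
```

The point of the split is that the expensive local repository never holds more than six months of fine-grained history, while the cheap Azure side holds only one point per month.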

    Regarding best practices for the Rapid Recovery replication target VM in Azure, general Rapid Recovery best practices would apply.

    Best practices for backup/archiving: It’s better to keep longer retention in the cloud than to keep it onsite. For cost considerations, you could use the Rapid Recovery Cloud Connector capability and the archive-to-cloud feature. This type of storage is cheaper and still offers the ability to recover data at the file level.

    ROI relative to data footprint is difficult to pinpoint, because ROI depends on usage factors and cost. If you’re archiving from on premises to cloud directly, your ROI is easier to achieve. However, without a Rapid Recovery Core in the cloud or on prem, you would not be able to recover in the event that the primary Core was lost (say, Godzilla coming and destroying the building). The combination of archive to cloud, on-premises colo, and Rapid Recovery Cores in Azure will give you the best possible RPO and RTO. One could argue that this peace of mind and protection is a good ROI.

    Best wishes,

    Carol
  • Rapid Recovery Cloud Connector capability? Are you referring to the ability to attach an archive without pulling the whole thing back and loading it into a repository first?

  • Hi Chris, got some info for you here: documents.software.dell.com/.../managing-cloud-accounts,

    also on archiving documents.software.dell.com/.../understanding-archives

    and documents.software.dell.com/.../archiving-to-a-cloud

    Let me hook you up w/ a solutions architect who can help you w/ answers in the context of your specific environment, we'll be connecting w/ you soon! Have a great day, Carol
  • Hi Chris, the Rapid Recovery (RR) Cloud Connector is the means to configure a Cloud Account in RR so the Core can send archive data to the cloud. The Cloud Connectors are configured in the Cloud settings on the Core server; after you have established a connection to your cloud account, you can configure RR archive to send data there.

    Archive Attachment will work for Cloud Accounts and will give you the ability to extract individual files/folders from an Archive without the need to restore the entire server back into RR. Please check our on-line documentation to make sure all requirements are met for Archive Attachment to Cloud and how Archive Attachment works first before implementing.

    Thanks, Paul

  • Chris:

    In an attempt to add to the conversation, please find below my "tuppence" on the archiving issue.

    It is my understanding that you want to archive monthly recovery points only. Within some constraints, this may be possible. I would try using the manual archiving option with the incremental recycle action and choose the recovery point for the month you want to archive (see pictures below).

    Since, due to rollups, you have only one recovery point for the target month, since you are doing an incremental archive, and since the repair-orphans option is checked, this should work. On the less promising side, the archive size may be larger than you expect due to the metadata needed to keep the data consistent, but you still get a better deal than archiving every single recovery point.

    Now, on the theory side,

    Rapid Recovery works in recovery chains, which are composed of a base image and incrementals. In order to access a point-in-time backup, you need to mount the corresponding recovery point. This means that all the recovery points down to the base image need to participate in recreating the volume image corresponding to the desired recovery point. It goes without saying that the rollup process consolidates the recovery points in line with the retention policy.
    As such, you cannot have discrete recovery points unless they are base images (which in turn are recovery chains of zero length).
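The chain dependency can be shown with a toy model, where a volume image is a map from block number to contents and each incremental holds only the changed blocks. This is hypothetical code, not the actual on-disk format:

```python
def materialize(chain, target_index):
    """Rebuild the volume image at recovery point chain[target_index].

    chain[0] is the base image (dict of block -> data); later entries are
    incrementals holding only changed blocks. Every point from the base up
    to the target must be applied in order -- none can be skipped."""
    volume = dict(chain[0])                      # start from the base image
    for delta in chain[1 : target_index + 1]:
        volume.update(delta)                     # apply each incremental's changed blocks
    return volume
```

Because materializing a point walks every entry back to the base, a lone incremental in an archive is unusable without its chain, which is why the archive has to carry the extra consistency data.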
     
    In practical terms, your archive will have at least one base image, one incremental for each month, and some additional data to preserve consistency. We cannot really predict what this additional data (which includes metadata and whatever other data is needed) will be. For instance, if you take a base image on one of your machines, the previous recovery chain will be interrupted and the archive will necessarily include the new base image (as processed as it may be during the rollup process).
     
    Hope that this helps. Please let me know.