
Best way to get quick access to about 300GB of data? Mount and drag/drop too slow

I've set up cross-country replication (from a West coast core to an East coast core) of 4 machines (physical machines, NOT virtual). I'm testing Virtual Standby as a method of booting up one of the machines so I can get to the Exchange EDBs on cutover night. There is about 300GB between 2 EDBs. I'll need "quick" access to that data so I can import the mailboxes into my East environment. I can't wait 5-6 hours to mount the drive and copy the data out.

The theory is: I boot up the Virtual Standby and can get to the data much more quickly. BUT I had a new theory: why can't I mount the VMDK as a drive on another VM? DONE. I tried that, and Windows didn't like the drive ("Invalid" or something like that).

So, my question: what are the methods to quickly get to backed-up data when it's 300+GB? Simple file restores are quick; 300GB is NOT quick when using a traditional mount and drag/drop.

thanks!

  • Hi Jordan:

    Not sure that I fully understand the paragraph with 300GB/900GB. It was my understanding that you were thinking of restoring a backed-up volume to a volume/partition created on your local workstation. This should work provided that the partition you create is similar in size to (or marginally larger than) the volume you attempt to restore, AND the agent software is installed on your workstation, AND your workstation is protected by the RR core (no actual backups are needed).

    Yes, you can attach a virtual standby VMDK to a different VM. Just make sure that you attach the snapshot VMDK rather than the main VMDK (for lack of better terms). BTW, even with zero VMware drivers, the VM may boot (and you can install the VMware Tools at a later time).
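
    If it helps, here is a minimal pyVmomi sketch of attaching an existing snapshot VMDK to a helper VM. The vCenter host, credentials, VM name, and datastore path are placeholders for illustration, not values from your environment:

    ```python
    # Minimal sketch (pyVmomi): attach an existing snapshot/delta VMDK to a helper VM.
    # Hostname, credentials, VM name, and datastore path below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "helper-vm")

    # Reuse the VM's existing SCSI controller and pick a free unit number (7 is reserved).
    ctrl = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualSCSIController))
    used = {d.unitNumber for d in vm.config.hardware.device
            if getattr(d, "controllerKey", None) == ctrl.key}
    unit = next(u for u in range(16) if u != 7 and u not in used)

    disk = vim.vm.device.VirtualDisk()
    disk.controllerKey, disk.unitNumber = ctrl.key, unit
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.fileName = "[datastore1] standby-vm/standby-vm-000001.vmdk"  # snapshot delta, not the base disk
    backing.diskMode = "independent_nonpersistent"  # never write back into the standby chain
    disk.backing = backing

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add  # no fileOperation -> attach existing file
    spec.device = disk
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
    ```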

    If you have a RAID 6 built on 4 drives, it means that only half of the total capacity is available. Moreover, RAID 6 carries a hefty write penalty. However, RAID 6 is a good choice as, besides tolerating 2 drive failures, it reduces the chance of a disk puncture considerably. In a regular Windows environment, disk punctures are far less dangerous than in the case of an RR repository.
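
    To put rough numbers on that (generic RAID arithmetic, not anything RR-specific):

    ```python
    # RAID 6 on 4 drives: two drives' worth of capacity goes to parity.
    drives, drive_tb = 4, 4.0
    usable_tb = (drives - 2) * drive_tb
    print(f"{usable_tb} TB usable of {drives * drive_tb} TB raw")  # half the raw capacity

    # Each random write costs ~6 back-end I/Os on RAID 6
    # (read data + both parities, then write data + both parities).
    RAID6_WRITE_PENALTY = 6
    disk_iops = 80  # rule-of-thumb for a 7.2K NL-SAS spindle
    print(f"~{drives * disk_iops / RAID6_WRITE_PENALTY:.0f} random write IOPS for the array")
    ```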

    Talking about drives, you may have near-line SAS which, in layman's terms, would be SATA drives with SAS firmware. These 7.2K drives are slower but easier to produce, with higher error thresholds, which improves yields (and capacity). However, in terms of the I/O operations each discrete disk can support, the 7.2K drives are markedly less performant than the faster SAS drives, and they'll hit I/O saturation much sooner than an equivalent number of 10K or 15K disks.
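
    For reference, the commonly quoted per-spindle figures (rules of thumb, not vendor specs) compare roughly like this:

    ```python
    # Approximate random IOPS per spindle by rotational speed (rule-of-thumb figures;
    # actual numbers vary by model and workload).
    PER_DISK_IOPS = {"7.2K NL-SAS": 80, "10K SAS": 140, "15K SAS": 180}
    spindles = 4
    for drive, iops in PER_DISK_IOPS.items():
        print(f"{spindles} x {drive}: ~{spindles * iops} aggregate random read IOPS")
    ```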

    I mentioned in my previous post the 8KB size of the Rapid Recovery data block. This was chosen to achieve better compression (and I daresay it achieved this goal). However, it also accounts for the weaker performance of mounted recovery points (especially in the case of long recovery chains), which needs to be compensated for by the available IOPS and the overall speed of the storage system.
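
    A simplified way to see why this matters: reads from a mounted recovery point are essentially small random I/Os, so the achievable throughput is roughly IOPS x 8KB (a deliberately pessimistic model that ignores caching and read-ahead):

    ```python
    # Throughput of a mounted recovery point, modeled as (random read IOPS) x (8 KB block).
    # Deliberately simplified: ignores caching, read-ahead, and chain-length effects.
    BLOCK_KB = 8
    for label, iops in {"4 x 7.2K": 320, "4 x 10K": 560, "4 x 15K": 720}.items():
        mb_s = iops * BLOCK_KB / 1024
        print(f"{label}: ~{mb_s:.1f} MB/s")
    # Even at a few MB/s, copying 300 GB takes many hours -- which lines up with the
    # multi-hour mount-and-copy times described above.
    ```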

    If you intend to delete all recovery points, why not delete and re-create the repository? This would save you a lot of time, as delete operations are followed by "deferred deletes" before the reclaimed repository space is made available, and these may take a long time.

    I would suggest opening a case with support and having one of our engineers take a look at your environment; they can help you decide the best path to take, given the specifics of your environment.

