
Recovery point check has failed

Hi all,

I'm looking for some information on how to handle recovery point check failures. We see them rarely, but when we do, we're not sure exactly what steps we should take.

When this error comes up, does it mean just the most recent snapshot is bad, or is the entire chain right back to the base image bad? Will the next snapshot fix the issue, so we just have to make a note that the snapshot from that particular date is bad? Does AppAssure mark that snapshot as bad?

It's rarely the data partitions like C: or D: that are bad; it's almost always the EFI partition, the recovery partition, or the system volume. In our case we don't plan on booting these images, just restoring data, so do we even have to worry about those?

Any guidance is appreciated.

  • Personally, I quite often ignore them and check the next day. If the same server fails two days running, then I'll look at it, but since we export every server once per day anyway, I'd find an error there as well.

    In the early days I used to check them, but I have found that the most common reason (I think probably 99.99% in my case, actually it might be 100%) was simply that the core was unable to finish processing in time, or failed to mount the recovery point for some unknown reason. I have been able to mount, export, run chkdsk, etc. when the server is quieter and have never had one of them fail due to this.

  • Hi ajns:

    Adding to fredbloggs' reply.

    The rule of thumb for determining whether a recovery point is OK is being able to mount it and read it. Sometimes the issues you see during recovery point checks are related to the load on the core (if too much is going on, the operation may time out before one or more partitions are mounted). If only one partition does not mount, the rest of the recovery point is OK. If a recovery point fails the nightly job checks but the next one succeeds, it usually means the chain is still OK. I have had a few cases where some recovery points, although they were backing up fine, were not mounting due to the type of the partition that hosted them. Changing the partition type to the regular data partition type (ebd0a0a2-b9e5-4433-87c0-68b6b72699c7) solved the issue.
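
    For anyone wanting to try these two checks themselves, here is a rough command-line sketch. The disk/partition numbers and the X: drive letter below are placeholders, not values from this thread; substitute the ones from your own environment, and be careful that `set id` is run against the right partition on the right disk:

    ```
    :: 1) Verify a mounted recovery point is readable (read-only scan, no repairs)
    chkdsk X:

    :: 2) Change the partition type GUID to the basic data partition type,
    ::    as described above, using diskpart on the protected machine
    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> list partition
    DISKPART> select partition 2
    DISKPART> set id=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
    ```

    On a Linux box the equivalent of step 2 would be `sgdisk --typecode=2:ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 /dev/sdX`. After changing the type, take a new base image or let the next snapshot run before re-checking the recovery point.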