Multiple transport modes? SAN transport mode with Agent?

I am re-configuring our server and now have the option to go agent-less on the majority of our VMs. Our Core is a physical box and has 10GbE access to one of our NASes on the SAN, but not the other, so some VMs cannot be transported over that method.

 

My question is twofold: can SAN transport be used for agent-based protection as well as agent-less? I.e., our Exchange and SQL servers will want to remain agent-based for log truncation - can they still take advantage of SAN transport? And that raises another question - can one Core do both, on a per-VM basis, or is it all or nothing? I.e., 90% of our VMs are accessible via 10GbE from our physical Core, but several are not (local storage on ESXi hosts).

 

Ideally we'd have a mix of agent-based and agent-less, using SAN transport unless a VM sits on an unreachable datastore, in which case it would fall back to LAN (NBD). Is there any way to tell which VMs are using which transport, other than opening the VM and watching NIC traffic during a backup?

  • I'll try to tackle this as best I can; if I miss something, let me know and I'll try to fill in whatever blanks I can.

    The 'advanced' transport options, which is what the industry calls HotAdd and LAN-Free SAN, are only available for agent-lessly protected VMware VMs. If you are protecting your VMs with an agent installed on the local operating system, your transport method will be over the LAN (NBD) through the OS back to the Core. The transport will use whatever IPs/networks you have the agent exposed to the Core on, so if you wanted it to use a 10.1.x network and not a 10.10.x network, you'd need to add another NIC to the agent machine and expose the agent to the Core using the other IP scheme. Even if you did that, however, it would still be agent to Core via the operating system.
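
    (Purely as an illustration - not something from the KB - if you want to sanity-check which network/NIC the Core actually reaches an agent over, a couple of stock Windows commands run from a prompt on the Core will tell you; the hostname below is a placeholder:)

        tracert agent01.example.local    (shows which gateway/subnet traffic to the agent takes)
        route print -4                   (lists the Core's IPv4 routes and interfaces)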

    I'll stick to LAN-Free SAN since your Core is physical (HotAdd applies to a virtual Core). The KB for that is here:

    support.quest.com/.../195634

    Basically it applies to shared ESXi storage - if your .vmdks reside on shared datastores that can be exposed to the Core in the same manner they are exposed to your ESXi hosts, then we will read/write to those datastores in the same manner. So if you use iSCSI datastores on a 10.10.x network, then, using a separate/dedicated NIC on your Core, expose those same iSCSI datastores to the Core over that 10.10.x network. The KB article above goes over this (the same applies to FC datastores).
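
    (For reference only - the KB is the authoritative walk-through, and the portal IP and IQN below are made-up placeholders - the command-line equivalent of pointing the Core's iSCSI initiator at the same portal the ESXi hosts use looks roughly like this. Per the rest of this thread, make sure automount is disabled before you log in to the targets.)

        iscsicli QAddTargetPortal 10.10.0.50
            (registers the SAN's iSCSI portal with the Core's initiator)
        iscsicli ListTargets
            (lists the target IQNs that portal advertises)
        iscsicli QLoginTarget iqn.2005-10.com.example:nas1.datastore1
            (logs in to the target backing the datastore LUN)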

    The agent-lessly protected VMs will auto-detect with each backup whether HotAdd/SAN transport is available, so you don't have to 'tell' the Core to check for you; it will scan to see if it is available, and if not (regardless of reason) it will fail over and attempt an NBD backup.

    Currently our UI does not indicate which transport was used; the logs show which one, or as you mention the adapters tell a good story as well. Surfacing this in the UI is a feature request that has not been implemented yet.

    Also want to mention (since you brought up Exchange and SQL) that agent-less log truncation functionality is currently being tested for future releases; however, it is not in the current build yet.

    If I missed something let me know, however that is what I have at the moment. Have a good one.
  • Thanks for the detailed reply - it seems you've answered all my questions.

    Out of fear - any reports of blowing up your VMFS iSCSI? I'm guessing it's fairly safe, but having direct access from a Windows box to my SAN is a bit scary!

    Regarding the automount disable and automount scrub commands - these won't interfere with my existing iSCSI connections that I *do* want auto mounted? My repo is over iSCSI to a different NAS on the same SAN. Obviously I want my repo automounted at boot every time. I am not seeing a "select disk" command at any point in diskpart, which makes me worry.
  • Truthfully - zero. If the steps in the KB are followed, the datastores show up as unmounted volumes, and at that level the risk is effectively zero. Furthermore, if you only want to use LAN-Free SAN for backups and not for restores, you can expose the ESXi datastores as read-only to the Core, and then you're really safe (even though you already were). Regardless of which VMware-aware product I've used over the years, I have yet to see or hear of LAN-Free SAN having a negative impact on a production environment.
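
    (If you want an extra belt-and-braces step on the Windows side - this is a general diskpart option rather than anything from the KB - you can also flag the exposed datastore disk read-only on the Core itself; the disk number below is a placeholder:)

        diskpart
        DISKPART> list disk
        DISKPART> select disk 3
        DISKPART> attributes disk set readonly    (marks the datastore LUN read-only to Windows)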

    Funny you should ask about the scrub command. Will it interfere? No, it should not. It is also one of those 'best practice' items that is not essential to the process - a VMware recommendation, yes, but if you don't run the scrub the process will still work as expected. The automount disable, however, I would highly recommend, as you don't want Windows mounting the datastore volumes on you.
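
    (For anyone following along, the two commands in question are run from a diskpart session on the Core and look like this:)

        diskpart
        DISKPART> automount disable    (stops Windows from auto-mounting newly discovered volumes)
        DISKPART> automount scrub      (removes stale mount-point entries for volumes no longer in the system)
        DISKPART> exit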
  • Thanks again, but I think you missed my primary concern with the automount feature. I *do* have other iSCSI drives that need to be automounted at boot, and the KB does not show how to disable automount for just one disk - it appears to disable it entirely.

    I do not have any experience mixing the two, or using the automount disable command in general. I am curious whether it applies only to new-to-the-system disks, or whether an iSCSI disk is treated as new each time it connects (much like a USB drive appears to be new each time you plug it in).
  • I see - you are correct, I did miss that. If you don't turn off automount, the Core will mount the ESXi datastores. It will not format or initialize them, but it will mount them. So you're not 'in' Pandora's box yet, but you're a foot away from someone accidentally initializing them one day.

    However, once the iSCSI attachments that you already use are there and active, Windows will try to mount them anyhow, as the OS is already aware of them, so they are not a concern. For future ones, though, you will have to remember that Windows will not auto-mount or find them, and you'll have to do that manually. I won't say most, but I'd guess over half of all repositories I have seen are iSCSI; existing connections are fine, and you can set up future ones, but Windows just doesn't 'auto-search' for them anymore once you perform the command.
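
    (So if you do add a new iSCSI repository volume later, after automount has been disabled, you just bring it up by hand - roughly like this in diskpart, where the disk/volume numbers and drive letter are placeholders:)

        diskpart
        DISKPART> list disk
        DISKPART> select disk 4        (the newly connected iSCSI disk)
        DISKPART> online disk          (only needed if the disk shows up as Offline)
        DISKPART> list volume
        DISKPART> select volume 5      (the volume on that disk)
        DISKPART> assign letter=R      (give it a mount point so the repository can use it)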

    Does that make sense? You are correct - I did miss that.
  • That is a perfect response - thank you very much for your active help today! I'm going to implement these changes shortly. It will be nice to have our backups utilizing our (very small) 10GbE network rather than coming out over our 1Gb LAN interfaces!