
LAN-Free SAN configuration using both iSCSI NIC and LAN NIC?

Mostly a follow-up to https://www.quest.com/community/products/rapid-recovery/f/forum/21467/multiple-transport-modes-san-transport-mode-with-agent

 

I reconfigured my NASes, put a 1GbE NIC (from our physical core) on our iSCSI VLAN, verified everything was working with pings and whatnot, and then added a new machine to protection via the vCenter Virtual Machines method and started a base image. Since there is no easy way to determine which transport is used, I am relying on Task Manager and watching the NICs.
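
For anyone else stuck eyeballing Task Manager: here is a rough sketch of the same per-NIC watching done in Python with psutil (the interface names are placeholders, not anything Rapid Recovery defines; swap in whatever your OS calls the management and iSCSI NICs):

    import time
    import psutil

    # Interfaces to watch -- placeholder names; use whatever the OS calls your
    # management (LAN) NIC and your iSCSI (SAN transport) NIC.
    NICS = ["Management", "iSCSI"]
    INTERVAL = 2  # seconds between samples

    def sample_throughput(nics, interval):
        """Print per-NIC receive/send throughput in Mbps over one interval."""
        before = psutil.net_io_counters(pernic=True)
        time.sleep(interval)
        after = psutil.net_io_counters(pernic=True)
        for nic in nics:
            if nic not in after:
                print(f"{nic}: interface not found")
                continue
            rx = (after[nic].bytes_recv - before[nic].bytes_recv) / interval
            tx = (after[nic].bytes_sent - before[nic].bytes_sent) / interval
            # bytes/sec -> Mbps, to compare against Task Manager's graphs
            print(f"{nic}: recv {rx * 8 / 1e6:.0f} Mbps, send {tx * 8 / 1e6:.0f} Mbps")

    if __name__ == "__main__":
        while True:
            sample_throughput(NICS, INTERVAL)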

Curiously, my "management" NIC (our live LAN NIC) is seeing about 500-600 Mbps while my iSCSI-only LAN-Free SAN transport NIC is seeing only about 300 Mbps at the same time.

The only task running is the backup of the new machine, through VMware with CBT enabled, so all of the traffic I see should be from that VM alone.

Is this expected behavior? How/why is SO much traffic coming via the LAN interface while a decent amount is still coming via the newly configured LAN-Free SAN transport NIC? To be clear, our repository is hosted over a different set of NICs entirely, so it isn't as though the LAN NIC is writing data to our repo.

I have attached a hilariously bad picture to illustrate my point.

  • Thanks for the response!

    I guess I didn't expect the API and CBT traffic to be coming via that NIC, but it makes sense now that I've read it. Obviously vCenter and RR couldn't talk to each other over iSCSI. I just assumed it would be drastically less traffic.

    Yesterday, for example, my PROD/LAN NIC was fully saturated at about 110 MB/s while my SAN iSCSI NIC was sitting at about 30-40 MB/s (rough math below). Total speeds are noticeably faster with SAN transport, so I guess it is working as intended.

    Now it looks like we'll just have to upgrade our LAN NIC to 10GbE as well. Who doesn't love to spend $$$?
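
    For what it's worth, the back-of-the-envelope math on those numbers (observed rates from above, assumed 1GbE link speed, nothing measured beyond what Task Manager showed):

        # Rates observed during the backup run above
        lan_mb_s = 110   # PROD/LAN NIC, MB/s
        san_mb_s = 35    # iSCSI SAN NIC, MB/s (midpoint of the 30-40 range)

        # A 1GbE link tops out around 117 MB/s of payload once protocol overhead is accounted for
        gbe_ceiling_mb_s = 1000 / 8 * 0.94

        print(f"LAN NIC: {lan_mb_s * 8:.0f} Mbps ({lan_mb_s / gbe_ceiling_mb_s:.0%} of a 1GbE link)")
        print(f"SAN NIC: {san_mb_s * 8:.0f} Mbps")
        print(f"Combined: ~{lan_mb_s + san_mb_s} MB/s vs ~{gbe_ceiling_mb_s:.0f} MB/s over the LAN NIC alone")

    Which lines up with the LAN NIC being the next bottleneck and the 10GbE upgrade being the next step.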