If you have a smaller View deployment but still want redundant connection servers, look no further than Microsoft NLB. It solves the problem without the need for an expensive hardware load balancer. Will it have all of the bells and whistles? No. But if you have fewer than 1,000 users, you probably would not see the benefit of the advanced features in a hardware load balancer. Make sure to read VMware's whitepaper on running NLB in virtual machines.
I am assuming you are like me and want everything to be as virtual as possible, so the View Connection Manager servers will be VMs.
Set up the Primary and Replica View Servers
I won't go over installing View. Just be sure to set up the initial manager server first, then go ahead and set up the replica VM.
Configure NLB
Go to the Administrative Tools and open the Network Load Balancing Manager. Right-click the top node in the tree menu on the left and select New Cluster.
Set the IP address and other information you will use for the load-balanced cluster. This is a new IP, not one already used by your View Manager servers.
In the VMware document referenced above, VMware recommends setting the Cluster operation mode to Multicast.
Click Next, then Next again. When asked to configure port rules, I leave the defaults and click Next. You can choose to limit this to certain ports.
Click Next again and enter localhost in the wizard to configure the local interfaces for NLB. Click Next and make note of the priority; when setting up the replica server, this number needs to be different. Finally, click Finish and wait for the configuration to complete. You should now be able to ping your new cluster IP address.
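Once the wizard completes, you can sanity-check the cluster from a command prompt on the host. A quick sketch (the cluster IP below is a placeholder; substitute your own):

```shell
:: Placeholder cluster IP -- replace with the one you assigned in the wizard
ping 192.168.1.50

:: wlbs.exe ships with Windows NLB; "query" shows this host's cluster state
wlbs query
```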
Set up the Replica Server in the Load Balancer
Right-click the node in the tree menu for the NLB cluster you just created and select Add Host To Cluster. Enter the IP of the replica server and click Connect. Select the interface that will be used for load balancing and click Next. Make sure the priority is unique from the first server's. If it gives you any grief after this point, close and re-open the Network Load Balancing Manager. The working cluster should look like this:
Test the Failover
Start a continuous ping to the cluster IP. Now use the vSphere Client to disconnect the network from one of the servers. Watch the pings continue to come back.
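A minimal way to run that continuous ping from a Windows client (the IP is a placeholder for your cluster address); expect at most a couple of dropped replies while NLB converges:

```shell
:: -t pings until interrupted with Ctrl+C; substitute your cluster IP
ping -t 192.168.1.50
```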
Finally, create a DNS A record (something like desktop.yourdomain.com) and point it to the cluster IP. You now have some decent failover in case of a VM failure and even a host failure (a suggestion would be to keep the VMs on separate hosts).
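If you run Windows DNS Server, the record can also be created from the command line with dnscmd; a sketch, assuming a zone of yourdomain.com and a cluster IP of 192.168.1.50 (both placeholders):

```shell
:: dnscmd ships with the Windows DNS Server admin tools
dnscmd /recordadd yourdomain.com desktop A 192.168.1.50
```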
Note – You may need to add static ARP entries to your switches, depending on your network topology. Be sure to test this fully and consult your network vendor's documentation for help with static ARP.
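For example, on a Cisco IOS device a static ARP entry for a multicast-mode NLB cluster looks roughly like this (the IP and MAC are placeholders; multicast NLB generates a cluster MAC beginning 03-BF followed by the cluster IP in hex, so 192.168.1.50 becomes 03bf.c0a8.0132). Again, verify against your vendor's documentation:

```
arp 192.168.1.50 03bf.c0a8.0132 arpa
```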
Jon,
Interesting post, but our engineers are not too keen on MS NLB. Here's a response from one of them, Steve Jones: "Is MS NLB the answer? In a nutshell: I personally don't think so. My personal preference is to locate LB services within the network, since the network fabric has the best knowledge regarding MAC addresses, ARP tables, routing, etc. IOS has a built-in LB function, and then there is the ACE appliance. At my last job we used the CSM as our web-farm load balancer; all of these products probe and manage load balancing using a variety of techniques."
One possibility would be to use round-robin DNS, which provides similar distribution without having to manipulate ARP. However, RRDNS can't come close to matching the load balancing or failover mechanisms. RRDNS is a zero-cost, poor man's, non-deterministic load distribution tool, but it is by no means a metric-driven balancing tool. On the specific question of comparison with MS NLB: it's probably not a good idea to set up static ARP entries. The pitfalls would outweigh the benefits.
Steve comments: "If one absolutely had to choose between MS NLB and RRDNS, I would choose RRDNS. It is simple and very straightforward."
Thanks for the comment. Yes, there are better services within the network. I do wonder what version of IOS, or which switch/router spec, is required to run load balancing at that layer. If it were an option available to me at no additional investment, then of course I would opt for letting the network do it. I believe an ACE appliance would be on par with investing in an F5, which many would avoid in a small View deployment. I could be wrong, because I have not looked at Cisco pricing in over 14 months.
I stay away from RRDNS because we want the load balancer to detect a failure and stop sending packets to the offline host.
MS NLB is included in Windows Standard, so you have already paid for those licenses. This is a solution for small shops that is quick and easy, not for when you are getting into multiple thousands of desktops.
I will also readily admit, MS NLB is not perfect and can have its own issues. :)
RRDNS? Not even close in terms of failover management and monitoring/control. I suggest a FOSS load balancer, such as the excellent HAProxy.
Pros: excellent speed, extremely manageable, LB algorithms similar to (often even better than) hardware balancers, FOSS. :)
Cons: requires at least one more VM (or two, better), but it can be made highly available itself (via LVS, keepalived, etc.).
Performance isn't an issue unless you have to deal with serious traffic (I'm talking more than 10 Gb/s).
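For reference, a minimal HAProxy setup for two View connection servers might look like the sketch below. The IPs, names, and balance choice are placeholder assumptions, not a tuned configuration:

```
frontend view_https
    bind *:443
    mode tcp
    default_backend view_servers

backend view_servers
    mode tcp
    balance source          # keep a given client pinned to one connection server
    server view1 192.168.1.51:443 check
    server view2 192.168.1.52:443 check
```

The `check` keyword gives you the health probing that RRDNS lacks: a server that stops answering is pulled out of rotation automatically.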
A little tip when using NLB in this way: configure the Recovery settings on the VMware View Connection Server service so that the NLB service stops. That way, if the Connection Server service stops, connections are no longer sent to the node, because it gets removed from the cluster.
For example, configure the first and second recovery actions to restart the service, but configure the third action to run the program "C:\Windows\System32\wlbs.exe" with the "stop" parameter.
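Those same recovery settings can be scripted with sc.exe; a sketch, assuming "wsbroker" is the Connection Server service name on your build (verify with `sc query` first):

```shell
:: First two failures: restart the service after 60 seconds.
:: Third failure: stop NLB on this node so it drops out of the cluster.
sc failure "wsbroker" reset= 86400 actions= restart/60000/restart/60000/run/60000 command= "C:\Windows\System32\wlbs.exe stop"
```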