Dynamic Cluster Pooling is an idea that Kevin Miller (@captainstorage) and I came up with one day while we were just rapping out some ideas on the whiteboard. It is an incomplete idea, but it may have the beginnings of something useful. The idea is that clusters can be dynamically sized depending on expected workload. Today a VMware cluster is sized based on capacity estimates from something like VMware Capacity Planner. The problem is that this method requires you to apply a single workload profile across all time periods and situations. What if only a couple of days a month require the full capacity of a cluster? Could those resources be used elsewhere the rest of the month?
Example Situation
Imagine a Virtual Infrastructure with multiple clusters. Cluster “Gold” has 8 hosts. Cluster “Bronze” has 8 hosts. Gold is going to require additional resources on the last day of the month to process reports from a database (or something like that). In order to provide additional resources to Gold, we will take an ESX host away from the Bronze cluster. This allows us to deploy additional Virtual Machines to crunch through the processing, or simply to reduce contention for the existing machines.
You don’t have to be a PowerCLI guru to figure out how to vMotion all the machines off of an ESX host and place it in maintenance mode. Once the host is in maintenance mode it can be moved to the new cluster, taken out of maintenance mode, and the VMs can be redistributed by DRS.
Sample code, more to prove the concept than anything:
#Connect to the vCenter
Connect-VIServer [vcenterserver]
#Identify the host; you should pass the host or hosts you want to vacate into a variable
Get-Cluster Cluster-Bronze | Get-VMHost
#Find the least loaded host (skipped here; see the sketch after this block)
#vMotion the machines to somewhere else in that cluster
Get-VMHost lab1.domain.local | Get-VM | Move-VM -Destination [some other host in the bronze cluster]
#Move the host (in a fully automated DRS cluster, adding -Evacuate will vMotion the VMs off for you)
Get-VMHost lab1.domain.local | Set-VMHost -State Maintenance
Get-VMHost lab1.domain.local | Move-VMHost -Destination (Get-Cluster Cluster-Gold)
Get-VMHost lab1.domain.local | Set-VMHost -State Connected
#Rebalance VMs
Get-DrsRecommendation -Cluster Cluster-Gold | Apply-DrsRecommendation
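For the “least loaded host” step I skipped, here is a minimal sketch of what I have in mind. I am assuming “least loaded” means lowest CPU utilization ratio; you could just as easily weight memory:
#Pick the Bronze host with the lowest CPU utilization as the donor
$donor = Get-Cluster Cluster-Bronze | Get-VMHost |
    Sort-Object { $_.CpuUsageMhz / $_.CpuTotalMhz } |
    Select-Object -First 1
From there, $donor stands in for lab1.domain.local in the steps above.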
I was able to manually make this happen in our lab. Maybe if this sparks any interest, someone who is good with “the code” can make this awesome.
Nice thinking outside the box, but why not use the max number of hosts and create resource pools? Write a PowerCLI script that changes the resource allocation settings on the resource pool depending on the number of virtual machines and their worst-case allocation. Compare that to trending figures and adjust the values in the script based on this.
This way you should not have to make your host “cluster-compliant” with all its settings and mappings to LUNs.
Thanks for the comment Frank. You are right, a resource pool and some dynamic shares scripting would be easier. Some people are scared of resource pools, due to the high likelihood of doing them wrong.
Unless we get a VCDX to come design the resource layout. 🙂
Although it would be better to learn proper resource pool usage than to develop a complex way of moving hosts around from cluster to cluster.
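For what it’s worth, here is a rough sketch of the kind of shares scripting Frank is describing. The pool name and the 2000-shares-per-VM multiplier are just values I made up for illustration:
#Scale a resource pool's CPU shares by the number of VMs it currently holds
$pool = Get-ResourcePool -Name Gold -Location (Get-Cluster Cluster-Gold)
$vmCount = (Get-VM -Location $pool).Count
$pool | Set-ResourcePool -CpuSharesLevel Custom -NumCpuShares (2000 * $vmCount)
Run on a schedule, that would keep the pool’s weight in step with its VM count.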
Thanks again.
I agree with Jon and Frank; resource pools would be a better way, I think. Cluster sizing is done for a number of reasons, and not all of them are to do with resource utilization. For instance, in a blade scenario, HA primaries need to be taken into account and clusters stretched across multiple chassis.
Personally, I have taken the approach of using Shares to give weighted priority to VMs using a Gold, Silver, Bronze model. The only issue here is that RPs are not appropriately weighted for the number of VMs they contain (as noted in a few places). I handled this problem by modifying a script from Duncan Epping and Andrew Mitchell.
Details are available on my blog: http://blog.cnidus.net/2010/12/21/custom-shares-on-a-resource-pool-scripted-modified/ if anyone is interested.
Obviously, there are a lot of different ways to handle dynamic load scenarios. Perhaps another way would be to migrate VMs between clusters in certain scenarios; inter-cluster DRS, if you will…
Just my 2c anyway.
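Riffing on that last idea: a bare-bones sketch of pushing a batch of VMs into another cluster. The “report*” name filter is purely a hypothetical for illustration, and this presumes the clusters share storage and compatible networking so a plain vMotion works:
#Pick a landing host in Gold (or reuse the least-loaded trick from earlier)
$target = Get-Cluster Cluster-Gold | Get-VMHost | Select-Object -First 1
#Move the month-end report crunchers (hypothetical naming pattern) over to it
Get-Cluster Cluster-Bronze | Get-VM -Name "report*" | Move-VM -Destination $target
Not quite inter-cluster DRS, but it is the manual version of it.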