*Disclaimer – I work for a Xsigo and VMware partner.*
I was in the VMware View Design and Best Practices class a couple of weeks ago. Much of the class is built on the VMware View Reference Architecture; the picture below is from that PDF.
It really struck me how many I/O connections (network or storage) it would take to run this POD. The minimum (in my opinion) would be 6 cables per host; with ten 8-host clusters, that is 480 cables! Let's say 160 of those are 4 Gb Fibre Channel and the other 320 are 1 Gb Ethernet. That is 640 Gb of bandwidth for storage and 320 Gb for the network.
Xsigo currently uses 20 Gb InfiniBand, and the best practice is to use 2 cards per server. The same 80 servers in the clusters above would have 3,200 Gb of bandwidth available. Add in the flexibility and ease of management you get from virtual I/O. I would think the cost savings from the director-class Fibre Channel switches and datacenter switches you no longer need would pay for the Xsigo Directors, but I don't deal with pricing, so that is pure speculation, and I will stick with the technical benefits. Being in the datacenter, I like any solution that makes provisioning servers easier, takes less cabling, and gives me unbelievable bandwidth.
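Here is a minimal back-of-the-envelope sketch of that comparison. The per-host split of 2 Fibre Channel and 4 Ethernet links is inferred from the 160/320 cable counts above; everything else is just the arithmetic from the post.

```python
# Back-of-the-envelope comparison: legacy per-host cabling vs. dual 20 Gb InfiniBand.
# Assumptions come straight from the post: ten 8-host clusters, 6 cables per host
# (2 x 4 Gb Fibre Channel + 4 x 1 Gb Ethernet), vs. 2 x 20 Gb IB cards per host.

hosts = 10 * 8                       # ten 8-host clusters = 80 hosts

# Legacy design: 6 cables per host
fc_links = hosts * 2                 # 160 x 4 Gb Fibre Channel
eth_links = hosts * 4                # 320 x 1 Gb Ethernet
legacy_cables = fc_links + eth_links        # 480 cables
legacy_storage_gb = fc_links * 4            # 640 Gb for storage
legacy_network_gb = eth_links * 1           # 320 Gb for networking

# Virtual I/O design: 2 x 20 Gb InfiniBand cards per host
ib_links = hosts * 2                        # 160 cables
ib_bandwidth_gb = ib_links * 20             # 3,200 Gb shared by storage and network

print(f"Legacy: {legacy_cables} cables, "
      f"{legacy_storage_gb} Gb storage + {legacy_network_gb} Gb network")
print(f"Xsigo:  {ib_links} cables, {ib_bandwidth_gb} Gb total fabric bandwidth")
```

Running it shows 480 cables carrying 960 Gb in total versus 160 cables carrying 3,200 Gb.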
Just as VMware changed the way we think about the datacenter, virtual I/O will once again change how we handle our deployments.
Hi,
Nice article, by the way. Can you send some information about that course you took… are any more classes available? Where?
Thanks a lot
Karlo,
The class was View Design Best Practices.
Besides Xsigo, FCoE can also do something similar today, though InfiniBand provides more bandwidth in this case. If you are looking to simplify storage management and move toward unified storage, FCoE may provide better integration: EMC and NetApp both support FCoE end to end today, and dropping the FC switches reduces the capex and opex requirements.
FCoE requires a significant investment in your network infrastructure, but yes, it is the same concept of reducing complexity.
Xsigo is a great alternative that will support your storage, and in my opinion it often gets lost in the mix under the Cisco/EMC marketing machine pushing FCoE, but Xsigo will have FCoE cards for their directors, so everyone can come play.
Just my 2 cents: the marketing push for FCoE is not from EMC; in fact, NetApp was the first storage vendor to adopt it, and QLogic and Emulex were early adopters of the technology as well. Both technologies deliver on the requirement, and I would say it depends on the infrastructure and the requirements you have. Both have their pros and cons, and they are great products for the market 🙂
Rumor has it that NetApp is working on a filer with InfiniBand connections right in the back of the array. Also, last I read, FCoE is not a fully ratified standard. InfiniBand has been around for years, and a 40 Gb Xsigo fabric is not far away, probably Q4 2010. The only “gotcha” to keep in mind when virtualizing adapters is the catch-22 that comes into play: you can’t create a server profile and carve out vNICs until you have an OS installed and the drivers loaded. If you use an automated build process for your ESX hosts, you might need to get creative until VMware puts the drivers into the ESX media. Building an ISO and booting from your baseboard management controller could suffice. Just something to keep in mind.
It would be nice to “slipstream” the drivers into the ISO, but the “gotcha” is a good thing to keep in mind.