I have recently made the transition to using ESXi for all customer installs. One thing I noticed was that after installing with a couple of different types of media (ISO and PXE install), the servers come up with the NICs hard-coded to 1000/Full. I have always made it a practice to keep Gigabit Ethernet at auto-negotiate. I was told by a wise Cisco engineer many years ago that GigE and Auto/Auto is the way to go. You can also check the Internet for articles and best practices around using auto-negotiate with Gigabit Ethernet. Even the VMware “Health Analyzer” recommends using auto. So it is perplexing to me that ESXi 4.1 would start defaulting to a hard-set speed and duplex. Is it just me? Has anyone else noticed this behavior?
The only reason I make an issue of it is that I was ready to call VMware support a couple of weeks ago because nothing in a DRS/HA cluster just built with 4.1 would work. One vMotion would succeed, the next would fail. Editing settings on the hosts would fail miserably when done from the vSphere Client connected to vCenter. After changing all the pNICs to auto-negotiate to match the switches, everything worked just fine.
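If you want to check your own hosts, the quickest way I found is from the console (Tech Support Mode on ESXi); the vmnic name below is just an example, so adjust it for your hardware:

esxcfg-nics -l
esxcfg-nics -a vmnic0

The first command lists each pNIC with its current speed and duplex, and the second sets the given vmnic back to auto-negotiate.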
Hit me up in the comments or on Twitter if you have noticed this.
I saw this behavior in my environment. I use host profiles to control the setting.
A couple of articles on the subject, each with a simple PowerCLI one-liner to reset the speed to auto-negotiate (a rough sketch of that one-liner follows the links):
http://itsjustanotherlayer.com/2010/07/upgrading-to-esx-4-classic-sets-nic-speed-to-static-setting/
http://www.ivobeerens.nl/?p=537
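From memory, the one-liner in those posts goes roughly along these lines; treat it as a sketch and check it against the articles before running it, and note it assumes you are already connected to vCenter with Connect-VIServer:

Get-VMHost | Get-VMHostNetworkAdapter -Physical | Set-VMHostNetworkAdapter -AutoNegotiate -Confirm:$false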
Thanks. Now that I know to double-check it, one of those scripts will help.
Gb Ethernet offers a much tighter set of negotiation protocols than Fast Ethernet did, hence the best practice for Auto from the networking community.
My 4.1 cluster in the lab was built out of the box with one Intel and two different Broadcom chipset NICs, all configured at 1000/Full. I configured them all for auto after the build was complete. This is new behavior, but it is easily corrected with automation. It does beg the question “why?”
Thanks for bringing this to the attention of many.
I can confirm this as well; I just saw it on four ESXi hosts I’m configuring today. Fortunately,
esxcfg-nics -a vmnic0
for each NIC made short work of it.
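If you have more than a handful of vmnics, a quick loop in Tech Support Mode should do the same thing; I haven’t tried it on every build, and the vmnic list is just an example, so adjust it for your hosts and verify with esxcfg-nics -l afterward:

for nic in vmnic0 vmnic1 vmnic2 vmnic3; do esxcfg-nics -a $nic; done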
I am glad to see it is not just me. Thanks for the comments.
Definitely begs the question “Why?”
Thank you for posting. I had a question about this myself; it seems counter-intuitive for optimization. Running esxcfg-nics -a vmnic0 on my hosts did it up right.