During the last couple of weeks I have been actively browsing the VMware communities, trying to solve users' technical problems. Many questions and misunderstandings were related to ESXi or ESX networking configuration, networking best practices, or how to properly set up networking on ESX or ESXi servers. I decided to post my recommendations for VMware vSphere 4 networking configuration and vSphere networking best practices.
The fundamental rules in networking are as follows:
- Keep ESXi or ESX management traffic separate from other traffic (vMotion, FT, virtual machine)
- Avoid a SPOF (Single Point Of Failure) – use redundant physical switches and physical networks for all types of traffic
- vMotion traffic should go over a non-routed network (separate IP subnet) for security reasons, because vMotion traffic is not encrypted (see the sketch after this list)
- vMotion traffic needs a 1 Gbps connection
- Storage (NFS, iSCSI) traffic should be isolated from other types of traffic at the physical level (separate pNIC and VLAN ID)
- Storage traffic needs at least a 1 Gbps connection to provide enough throughput and low latency to storage
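The vMotion rule above translates into a dedicated VMkernel interface living on its own non-routed subnet. Here is a minimal sketch using esxcli syntax from ESXi 5.x and later (vSphere 4 itself used esxcfg-vmknic and the vSphere Client for the same steps); the interface name vmk1, the port group name, and the addressing are hypothetical examples.

```
# Create a VMkernel NIC on the vMotion port group (names are examples)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion

# Address it on a dedicated, non-routed subnet - no gateway, because
# vMotion traffic should never leave this L2 segment
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=172.16.20.11 --netmask=255.255.255.0 --type=static

# Enable vMotion on that interface
vim-cmd hostsvc/vmotion/vnic_set vmk1
```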
Scenario #1 – 4 NICs (1 Gbps), standard switches used for MGMT, vMotion, VM traffic, and iSCSI traffic
The physical host has four 1 Gbps NICs (here, two onboard ports plus one dual-port adapter) – a hardware configuration that is very common in solutions for small and medium businesses.
vSwitch0 has two vmnics (vmnic0 and vmnic2) and two port groups, one for ESXi management (mgmt) and a second for vMotion; each vmnic is connected to a different physical switch (pSwitch1 and pSwitch2). In the mgmt port group vmnic0 is Active and vmnic2 is Standby; in the vMotion port group vmnic0 is Standby and vmnic2 is Active. Using the Active/Standby approach for MGMT and vMotion traffic provides separation between the two networks at the hardware level. What does that mean? vMotion traffic consumes as much network bandwidth as it can get; by putting it on a different vmnic we make sure it will not saturate the mgmt vmnic and affect ESXi management traffic. Saturation of the mgmt vmnic could cause host isolation and automatic (depending on policy) VM restart on other hosts in the cluster (the management interface is used for heartbeat exchange between nodes in a cluster).
| vmnic | Location | vSwitch | Port groups | Failover state | VLAN IDs | pSwitch |
|---|---|---|---|---|---|---|
| vmnic0 | on board | vSwitch0 | mgmt / vMotion | Active in mgmt, Standby in vMotion | 10, 20 | pSwitch1 |
| vmnic1 | on board | vSwitch1 | VM / iSCSI | Active in iSCSI, Standby in VM_LAN | 30, 40 | pSwitch1 |
| vmnic2 | dual NIC 1 | vSwitch0 | mgmt / vMotion | Active in vMotion, Standby in mgmt | 10, 20 | pSwitch2 |
| vmnic3 | dual NIC 1 | vSwitch1 | VM / iSCSI | Active in VM_LAN, Standby in iSCSI | 30, 40 | pSwitch2 |
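For reference, the layout in the table can be reproduced from the command line. The sketch below uses the esxcli namespace available on ESXi 5.x and later (vSphere 4 used esxcfg-vswitch for the same steps); the port group names mirror the table, but the exact VLAN-to-port-group mapping (10 = mgmt, 20 = vMotion, 30 = VM_LAN, 40 = iSCSI) is an assumption, so adjust it to your environment.

```
# vSwitch0: management + vMotion on vmnic0/vmnic2
# (on a fresh install vSwitch0 already exists, so skip the "add" line)
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=mgmt
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=mgmt --vlan-id=10
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

# vSwitch1: VM + iSCSI on vmnic1/vmnic3
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM_LAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
esxcli network vswitch standard portgroup set --portgroup-name=VM_LAN --vlan-id=30
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=40
```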
vSwitch0 should be configured as follows (a command-line sketch of these settings follows the list):
- Promiscuous mode – Reject
- MAC address changes – Reject
- Forged transmits – Reject
- Load balancing – Route based on the originating virtual port ID (default)
- Network failover detection – Link status only
- Notify switches – Yes
- Failback – No
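These settings map directly onto the esxcli security and failover policy commands. A sketch, again using ESXi 5.x+ syntax (on vSphere 4 these settings were changed through the vSphere Client):

```
# Security policy: Reject everything, per the list above
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
  --allow-promiscuous=false --allow-mac-change=false --allow-forged-transmits=false

# Teaming: port-ID load balancing, link-status detection, notify switches, no failback
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
  --load-balancing=portid --failure-detection=link \
  --notify-switches=true --failback=false

# Per-port-group overrides implement the Active/Standby split from the table
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=mgmt --active-uplinks=vmnic0 --standby-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion --active-uplinks=vmnic2 --standby-uplinks=vmnic0
```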
vSwitch1 is used for VM traffic and storage traffic (here iSCSI). vSwitch1 provides failover and full hardware redundancy because its vmnics are connected to different physical switches, and VLAN trunking is used to give both network segments connectivity on each vmnic. As in vSwitch0, the Active/Standby approach is used to keep the two types of traffic separated at the hardware layer and in the ESXi layer.
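To illustrate the trunking mentioned above: the physical switch ports facing vmnic1 and vmnic3 carry both network segments as an 802.1Q trunk. A Cisco IOS-style sketch (the interface number is an example, and your switch model may use different spanning-tree keywords):

```
interface GigabitEthernet0/1
 description ESXi vmnic1 - VM/iSCSI uplink
 switchport mode trunk
 switchport trunk allowed vlan 30,40
 spanning-tree portfast trunk
```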
vSwitch1 should be configured as follows (again, a command-line sketch follows the list):
- Promiscuous mode – Reject
- MAC address changes – Reject
- Forged transmits – Reject
- Load balancing – Route based on the originating virtual port ID (default)
- Network failover detection – Link status only
- Notify switches – Yes
- Failback – No
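vSwitch1 gets the same security and teaming policy as vSwitch0, plus its own per-port-group Active/Standby overrides. A sketch, with the same ESXi 5.x+ syntax caveat as above:

```
# Same Reject/teaming policy as vSwitch0, applied to vSwitch1
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 \
  --allow-promiscuous=false --allow-mac-change=false --allow-forged-transmits=false
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
  --load-balancing=portid --failure-detection=link \
  --notify-switches=true --failback=false

# Active/Standby overrides per port group, mirroring the table
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=iSCSI --active-uplinks=vmnic1 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=VM_LAN --active-uplinks=vmnic3 --standby-uplinks=vmnic1
```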
Implementing the Active/Standby approach is a very efficient way to follow VMware best practices and a sensible compromise between cost and performance. The design above provides hardware redundancy and failover for all networks and follows VMware networking best practice standards.
[box type=”info”] See the links below for different networking configurations
ESX and ESXi networking configuration for 4 NICs on standard and distributed switches
ESX and ESXi networking configuration for 6 NICs on standard and distributed switches
ESX and ESXi networking configuration for 10 NICs on standard and distributed switches
ESX and ESXi networking configuration for 4 x10 Gbps NICs on standard and distributed switches
ESX and ESXi networking configuration for 2 x 10 Gbps NICs on standard and distributed switches[/box]
This is a good read and very true; there is a lot of information on various communities regarding the confusion around this setup. I have been unable to find much information on the appropriate setup when using local storage instead of a NAS/SAN. Do you have any recommendations for the network setup of a couple of hosts using local storage? The reason I ask is that adding a NAS/SAN adds huge cost and complexity for SMEs running 3 hosts, unless you go for a VSA with VMware Essentials Plus. A lot of the SME industry don't…
How about a configuration with only 2 NICs (1 Gb)?
So what if you have 4 physical NICs (max) and want to maximize bandwidth for iSCSI? While I know it's not best practice, what about vSwitch0 being mgmt/vMotion with only one NIC, and vSwitch1 being VM/iSCSI with a second mgmt IP?
Why would you have all the Security settings set to Reject?
For MAC address changes and Forged Transmits, is there a concern with Accept? I have Exchange 2010 and use a load-balancer VM.
Can you post the VLAN configuration on the physical switch side? I will really appreciate it.
Great post.
Sorry for my English.
All is clear, but can you please clarify why in vSwitch1 you specify “Trunk” for vmnic1 and vmnic3? Is it a specific option of ESXi, or (surely) something that I do not understand?
If both connections of vSwitch1 are trunked on the physical switch (VLANs 30 and 40), what do you mean by this “Trunk” label (which is not present on the vmnics of vSwitch0) on the vmnics of vSwitch1?
I hope my question is clear.
Thanks in advance.
Massimo
Can you post the configuration of the 2 physical switches? I am installing my soon-to-be production environment right now and have run into doubts about the best switch configuration. Thanks
How would you divvy it up if you had 2 x 1 Gb and 2 x 10 Gb NICs? In my specific case, they go to 2 sets of switches: 10g-1 and 1g-1 go to switch A, 10g-2 and 1g-2 go to switch B. And if it matters, I'm using NFS to mount the datastore from a NetApp.
Thanks
Artur, a big thank you! I have found your articles very useful as a beginner to vm network design. Keep up the good work!
Hello Artur, great article! I have a question regarding this setup (4 NICs) – I'm planning to upgrade our setup (vSphere 6 Essentials Plus, 3 hosts) from 6 x 1 Gb NICs [per host] to 4 NICs, with 2 x 10 Gb and 2 x 1 Gb, in a scenario similar to the one depicted in “part 1”. The 10 Gb NICs will be used for iSCSI_vLAN, VM_vLAN, and Mgmt, while the 1 Gb NICs would be dedicated to vMotion. My question is regarding the physical switches: the network is already VLAN'd, with iSCSI on an L2 VLAN, and VMs on several L3 VLANs. I have a single…