Networking configuration for ESX or ESXi – Part 1

Over the last couple of weeks I have been actively browsing the VMware communities, trying to solve users' technical problems. Many questions and misunderstandings were related to ESXi or ESX networking configuration, networking best practices, or how to properly set up networking on ESX or ESXi servers. I decided to post recommendations about VMware vSphere 4 networking configuration and vSphere networking best practices.

The fundamental rules in networking are as follows:
  1. Keep ESXi or ESX management traffic separate from other traffic (vMotion, FT, virtual machine)
  2. Avoid a SPOF (Single Point Of Failure) – redundant physical switches and physical networks for all types of traffic
  3. vMotion traffic should go through a non-routed network (separate IP subnet) for security reasons (vMotion traffic is not encrypted)
  4. vMotion traffic needs a 1 Gbps connection
  5. Storage (NFS, iSCSI) traffic should be isolated from other types of traffic at the physical level (separate pNIC and VLAN ID)
  6. Storage traffic needs at least a 1 Gbps connection to provide enough throughput and low latency to storage (a quick PowerCLI check follows this list)
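Rules 4 and 6 are easy to verify up front. A minimal PowerCLI sketch (the host name is a placeholder, and I'm assuming BitRatePerSec is reported in Mbps):

```powershell
# Connect and list every physical uplink with its negotiated link speed;
# anything below 1000 Mbps violates rules 4 and 6 above
Connect-VIServer -Server esx01.lab.local
Get-VMHost -Name esx01.lab.local |
  Get-VMHostNetworkAdapter -Physical |
  Select-Object Name, BitRatePerSec, FullDuplex
```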
Scenario #1 – 4 NICs (1 Gbps), standard switches used for MGMT, vMotion, VM traffic and iSCSI traffic

[Diagram: ESX/ESXi 4-vmnic networking configuration]

The physical host has four 1 Gbps NICs (two dual-port controllers: the on-board pair and one dual-port card). This hardware configuration is very common in solutions for small and medium businesses.

vSwitch0 has two vmnics (vmnic0 and vmnic2) and two port groups, one for ESXi management (mgmt) and a second for vMotion; each vmnic is connected to a different physical switch (pSwitch1 and pSwitch2). In the mgmt port group vmnic0 is Active and vmnic2 is Standby; in the vMotion port group vmnic0 is Standby and vmnic2 is Active. Using an Active/Standby approach for MGMT and vMotion traffic separates the two networks at the hardware level. What does that mean in practice? vMotion traffic consumes as much network bandwidth as it can get; by putting it on a different vmnic we make sure it will not saturate the mgmt vmnic and affect ESXi management traffic. Saturation of the mgmt vmnic could cause host isolation and automatic VM restart (depending on policy) on other hosts in the cluster, because the management interface is used for heartbeat exchange between nodes in a cluster.
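A minimal PowerCLI sketch of that layout (host name, vMotion IP, and the mgmt port group name are assumptions taken from the diagram; in vSphere 4 the same configuration can also be done in the vSphere Client):

```powershell
# Connect to the host (address is a placeholder)
Connect-VIServer -Server esx01.lab.local
$esx = Get-VMHost -Name esx01.lab.local

# vSwitch0 gets both uplinks; each one is cabled to a different pSwitch
$vs0 = New-VirtualSwitch -VMHost $esx -Name "vSwitch0" -Nic vmnic0,vmnic2

# VMkernel port for vMotion, addressed from the non-routed vMotion subnet
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs0 -PortGroup "vMotion" `
  -IP 172.16.20.11 -SubnetMask 255.255.255.0 -VMotionEnabled $true

# Tag both port groups with their VLAN IDs (10 = mgmt, 20 = vMotion)
Get-VirtualPortGroup -VMHost $esx -Name "mgmt"    | Set-VirtualPortGroup -VLanId 10
Get-VirtualPortGroup -VMHost $esx -Name "vMotion" | Set-VirtualPortGroup -VLanId 20

# Opposite Active/Standby orders keep mgmt and vMotion on separate pNICs
Get-VirtualPortGroup -VMHost $esx -Name "mgmt"    | Get-NicTeamingPolicy |
  Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic2
Get-VirtualPortGroup -VMHost $esx -Name "vMotion" | Get-NicTeamingPolicy |
  Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic0
```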

The per-vmnic layout is summarized below:

| vmnic  | location   | vSwitch  | port groups  | state                              | VLAN IDs | pSwitch  |
|--------|------------|----------|--------------|------------------------------------|----------|----------|
| vmnic0 | on board   | vSwitch0 | mgmt/vMotion | active in mgmt, standby in vMotion | 10, 20   | pSwitch1 |
| vmnic1 | on board   | vSwitch1 | VM/iSCSI     | active in iSCSI, standby in VM_LAN | 30, 40   | pSwitch1 |
| vmnic2 | dual NIC 1 | vSwitch0 | mgmt/vMotion | active in vMotion, standby in mgmt | 10, 20   | pSwitch2 |
| vmnic3 | dual NIC 1 | vSwitch1 | VM/iSCSI     | active in VM_LAN, standby in iSCSI | 30, 40   | pSwitch2 |
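To verify that the running configuration matches the table, something like this works (a sketch, same placeholder host as above):

```powershell
# Dump every port group with its VLAN ID and Active/Standby uplink order
$esx = Get-VMHost -Name esx01.lab.local
Get-VirtualSwitch -VMHost $esx | Get-VirtualPortGroup | ForEach-Object {
  $team = $_ | Get-NicTeamingPolicy
  [pscustomobject]@{
    PortGroup = $_.Name
    VLanId    = $_.VLanId
    Active    = $team.ActiveNic  -join ", "
    Standby   = $team.StandbyNic -join ", "
  }
} | Format-Table -AutoSize
```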

vSwitch0 should be configured as follows (a PowerCLI sketch applying these settings follows the list):

  • Promiscuous mode – Reject
  • MAC address changes – Reject
  • Forged transmits – Reject
  • Load balancing – Route based on the originating virtual port ID (default)
  • Network failover detection – link status only
  • Notify switches – Yes
  • Failback – No
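Applied with PowerCLI, the list above could look like this (a sketch; Get-SecurityPolicy/Set-SecurityPolicy assume a reasonably recent PowerCLI release):

```powershell
$esx = Get-VMHost -Name esx01.lab.local
$vs0 = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"

# Security policy: reject promiscuous mode, MAC changes and forged transmits
$vs0 | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $false `
  -MacChanges $false -ForgedTransmits $false

# Teaming: port-ID load balancing, link-status detection, notify on, failback off
$vs0 | Get-NicTeamingPolicy | Set-NicTeamingPolicy `
  -LoadBalancingPolicy LoadBalanceSrcId `
  -NetworkFailoverDetectionPolicy LinkStatus `
  -NotifySwitches $true -FailbackEnabled $false
```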

vSwitch1 is used for VM traffic and storage traffic (here iSCSI). vSwitch1 provides failover and full hardware redundancy, because its vmnics are connected to different physical switches and VLAN trunking gives both network segments connectivity on each vmnic. As on vSwitch0, the Active/Standby approach is used to keep the two types of traffic separated at the hardware layer as well as in the ESXi layer.
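The matching PowerCLI sketch for vSwitch1 (the iSCSI IP is a placeholder; port group names follow the table above):

```powershell
# vSwitch1 carries VM traffic plus an iSCSI VMkernel port, uplinked to both pSwitches
$esx = Get-VMHost -Name esx01.lab.local
$vs1 = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic1,vmnic3

# VM port group on VLAN 30, iSCSI VMkernel port on VLAN 40
New-VirtualPortGroup -VirtualSwitch $vs1 -Name "VM_LAN" -VLanId 30
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs1 -PortGroup "iSCSI" `
  -IP 172.16.40.11 -SubnetMask 255.255.255.0
Get-VirtualPortGroup -VMHost $esx -Name "iSCSI" | Set-VirtualPortGroup -VLanId 40

# Mirror-image Active/Standby: iSCSI rides vmnic1, VM traffic rides vmnic3
Get-VirtualPortGroup -VMHost $esx -Name "iSCSI"  | Get-NicTeamingPolicy |
  Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic3
Get-VirtualPortGroup -VMHost $esx -Name "VM_LAN" | Get-NicTeamingPolicy |
  Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicStandby vmnic1
```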

vSwitch1 should be configured as follows:

  • Promiscuous mode – Reject
  • MAC address changes – Reject
  • Forged transmits – Reject
  • Load balancing – Route based on the originating virtual port ID (default)
  • Network failover detection – link status only
  • Notify switches – Yes
  • Failback – No
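Since the security and teaming settings are identical on both switches, the Set-SecurityPolicy / Set-NicTeamingPolicy sketch shown for vSwitch0 can simply be re-run with vSwitch1 as the target.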

Implementing the Active/Standby approach is a very efficient way to follow VMware best practices and a good compromise between cost and performance. The example design above provides hardware redundancy and failover for all networks and follows VMware networking best practice standards.

[box type="info"] See the links below for different networking configurations

ESX and ESXi networking configuration for 4 NICs on standard and distributed switches

ESX and ESXi networking configuration for 6 NICs on standard and distributed switches

ESX and ESXi networking configuration for 10 NICs on standard and distributed switches

ESX and ESXi networking configuration for 4 x10 Gbps NICs on standard and distributed switches

ESX and ESXi networking configuration for 2 x 10 Gbps NICs on standard and distributed switches[/box]

Artur Krzywdzinski

Artur is a Consulting Architect at Nutanix. He has been using, designing and deploying VMware-based solutions since 2005 and Microsoft-based solutions since 2012. He specializes in designing and implementing private and hybrid cloud solutions based on VMware and Microsoft software stacks, datacenter migrations and transformations, and disaster avoidance. Artur holds the VMware Certified Design Expert certification (VCDX #077).