A few days ago I posted about ESX or ESXi network configuration using 4 physical NICs: Networking configuration for ESX or ESXi Part 1 – 4NIC on standard switches
Today, the second part of the series: this time the ESX(i) host has 6 pNICs (1 Gbps each) on Standard Switches (vSS). From a security, best-practice and my own point of view, 6 physical NICs is the minimum. Having 6 NICs in an ESX(i) host gives it enough bandwidth and enough physical devices to follow networking best practices and security standards (perhaps not for all organizations), provides failover, and gives more flexibility in the ESX(i) network design.
Scenario #1 – 6 NICs (1 Gbps – 3 dual-port adapters) – standard switches for MGMT, vMotion, VM traffic, storage traffic and FT
In this scenario we have to design a network for 5 different types of traffic. Each type of traffic has a different VLAN ID, which will let us use each NIC for more than one type of traffic and optimize bandwidth:
- mgmt – VLAN ID 10
- vMotion – VLAN ID 20
- VM traffic – VLAN ID
- FT (Fault Tolerance) – VLAN ID 40
- Storage – VLAN ID 50
| vmnic  | Traffic       | Active/Standby role                  | VLAN       | vSwitch  | pSwitch  |
|--------|---------------|--------------------------------------|------------|----------|----------|
| vmnic0 | mgmt/vMotion  | active for mgmt, standby for vMotion | VLAN 10/20 | vSwitch0 | pSwitch1 |
| vmnic2 | mgmt/vMotion  | active for vMotion, standby for mgmt | VLAN 10/20 | vSwitch0 | pSwitch2 |
| vmnic5 | FT/Storage    | active for Storage, standby for FT   | VLAN 40/50 | vSwitch2 | pSwitch2 |
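The layout above can be sketched from the command line, assuming an ESXi host with the `esxcli network vswitch standard` namespace available (the vSwitch names follow the post; on a fresh host vSwitch0 usually already exists):

```shell
# Create the three standard vSwitches (vSwitch0 is typically present by default)
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard add --vswitch-name=vSwitch2

# Attach the uplinks as described in the table and the prose below
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
# The second FT/Storage uplink is not named in the post; add it the same way
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
```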
vSwitch0 – as usual in my designs (and not only mine), for management and vMotion traffic. Two vmnics, vmnic0 (from the on-board NIC) and vmnic2 (from the first dual-port adapter), are in Active/Standby mode. Active/Standby lets us find the middle road: a compromise between the hardware resources we have (only 6 NICs) and full security for our environment based on hardware and network segmentation (vMotion and mgmt use different hardware and VLAN IDs), while only two physical ports are occupied by the mgmt network.
The vSwitch should be configured as follows:
• Load balancing = route based on the originating virtual port ID (default)
• Failback = no
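These two settings, plus the per-portgroup Active/Standby split, can be applied with esxcli; this is a sketch, and the portgroup names "Management Network" and "vMotion" are my assumptions:

```shell
# vSwitch-level policy: port-ID load balancing, failback disabled
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=portid --failback=false

# Management portgroup (VLAN 10): vmnic0 active, vmnic2 standby
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Management Network"
esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=10
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic2

# vMotion portgroup (VLAN 20): the mirror image, vmnic2 active, vmnic0 standby
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion --active-uplinks=vmnic2 --standby-uplinks=vmnic0
```

The per-portgroup failover order overrides the vSwitch defaults, which is what keeps mgmt and vMotion on separate physical paths during normal operation.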
vSwitch1 – designated only for VM traffic. Two vmnics – vmnic1 (from the on-board NIC) and vmnic4 (from the third dual-port adapter) – give 2 x 1 Gbps reserved only for VM traffic; in 95% of cases it's more than enough.
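The VM traffic portgroup on vSwitch1 is then a couple of lines (the portgroup name is an assumption, and the VLAN ID is left as a placeholder since the post does not state it):

```shell
# VM traffic portgroup on vSwitch1; both uplinks stay active (default teaming)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"
# Substitute the VM traffic VLAN ID used in your environment
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=<VM_TRAFFIC_VLAN>
```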
vSwitch2 – here it is a bit more complicated, because FT and Storage traffic are very demanding. VMware's recommendation for FT and storage traffic is 10 Gbps, but I have implemented FT on a 1 Gbps NIC per server (2 FT-enabled VMs per server). The same goes for storage traffic: you have to consider how much traffic you will need, how many VMs you will have per server, and what types of workloads you will run (DBs, web, file servers, etc.).
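The FT/Storage split on vSwitch2 follows the same Active/Standby pattern as vSwitch0; a sketch, where the portgroup names and the uplink paired with vmnic5 are my assumptions:

```shell
# FT portgroup (VLAN 40): vmnic5 is standby here per the table;
# the active FT uplink is the second vSwitch2 vmnic, not named in the post
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=FT
esxcli network vswitch standard portgroup set --portgroup-name=FT --vlan-id=40
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=FT --standby-uplinks=vmnic5

# Storage portgroup (VLAN 50): vmnic5 active
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Storage
esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=50
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=Storage --active-uplinks=vmnic5
```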
In the above configuration it is possible to add even one more VLAN, for example a DMZ. It could be placed on vSwitch1 together with VM traffic. But a very common practice is to separate the DMZ completely (at the hardware and software level) from all other traffic.
The diagram below shows a configuration which has been implemented many times for many customers, so it will work in your environment too. It is a logical diagram where all components and the connections between them are listed.