Networking configuration for ESX or ESXi Part 2

A few days ago I posted about ESX or ESXi network configuration using 4 physical NICs: Networking configuration for ESX or ESXi Part 1 – 4 NICs on standard switches

Today, the second part of the series: this time the ESX(i) host has 6 pNICs (1 Gbps) on standard switches (vSS). From a security and best-practice point of view 🙂 6 physical NICs is the minimum. Having 6 NICs in an ESX(i) host provides enough bandwidth and enough physical devices to follow networking best practices and security standards (perhaps not for all organizations), provides failover, and gives more flexibility in the ESX(i) network design.

Scenario #1 – 6 NICs (1 Gbps – 3 dual-port adapters) – standard switches for MGMT, vMotion, VM traffic, storage traffic and FT

In this scenario we have to design the network for 5 different types of traffic. Each traffic type has a different VLAN ID, which lets us use each NIC for more than one traffic type and optimize utilization:

  1. mgmt – VLAN ID 10
  2. vMotion – VLAN ID 20
  3. VM traffic – untagged (no VLAN tagging, see table below)
  4. FT (Fault Tolerance) – VLAN ID 40
  5. Storage – VLAN ID 50
| vmnic  | port group   | state                              | trunk      | vSwitch  | pSwitch  |
|--------|--------------|------------------------------------|------------|----------|----------|
| vmnic0 | mgmt/vMotion | active in mgmt, standby in vMotion | VLAN 10/20 | vSwitch0 | pSwitch1 |
| vmnic1 | VM traffic   | active                             | no         | vSwitch1 | pSwitch1 |
| vmnic2 | mgmt/vMotion | active in vMotion, standby in mgmt | VLAN 10/20 | vSwitch0 | pSwitch2 |
| vmnic3 | FT/Storage   | active in FT, standby in Storage   | VLAN 40/50 | vSwitch2 | pSwitch1 |
| vmnic4 | VM traffic   | active                             | no         | vSwitch1 | pSwitch2 |
| vmnic5 | FT/Storage   | active in Storage, standby in FT   | VLAN 40/50 | vSwitch2 | pSwitch2 |
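As a sketch, the layout above could be built from the ESXi shell with esxcli (ESXi 5.x syntax; the port-group names are only examples, and vSwitch0 with its management network is assumed to exist already from the installation):

```shell
# vSwitch0: mgmt + vMotion on vmnic0/vmnic2 (VLANs 10 and 20)
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

# vSwitch1: VM traffic on vmnic1/vmnic4 (untagged)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"

# vSwitch2: FT + storage on vmnic3/vmnic5 (VLANs 40 and 50)
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=FT
esxcli network vswitch standard portgroup set --portgroup-name=FT --vlan-id=40
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Storage
esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=50
```

The same result can of course be achieved through the vSphere Client; the commands are shown only to make the mapping between vmnics, vSwitches and VLANs explicit.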

vSwitch0 – as usual in my design (and not only mine), for management and vMotion traffic. Two vmnics, vmnic0 (onboard NIC) and vmnic2 (first dual-port adapter), are in active/standby mode. Active/standby is the middle road: a compromise between the hardware resources we have (only 6 NICs) and full security based on hardware and network segmentation (vMotion and mgmt use different hardware and VLAN IDs), while only two physical ports are occupied by the management network.

The vSwitch should be configured as follows:
•    Load balancing = route based on the originating virtual port ID (default)
•    Failback = no
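The active/standby teaming and the failback setting can also be applied with esxcli (a sketch, assuming the port-group names mgmt and vMotion used above):

```shell
# Switch-level default: vmnic0 active, vmnic2 standby, no failback
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic2 --failback=false

# Override on the vMotion port group: reverse the uplink order
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion \
    --active-uplinks=vmnic2 --standby-uplinks=vmnic0
```

With failback disabled, a recovered uplink does not immediately take traffic back, which avoids flapping when a physical switch port comes up before it is actually forwarding.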

vSwitch1 – designated only for VM traffic, two vmnics: vmnic1 (onboard NIC) and vmnic4 (third dual-port adapter). 2 x 1 Gbps reserved only for VM traffic – in 95% of cases it is more than enough.

vSwitch2 – here it is a bit more complicated, because FT and storage traffic are very demanding. VMware's recommendation for FT and storage traffic is 10 Gbps, but I have implemented FT on a 1 Gbps NIC per server (2 FT-enabled VMs per server). The same goes for storage traffic: you have to consider how much traffic you will need, how many VMs you will have per server, and what type of workloads you will run (databases, web, file servers, etc.).

In the above configuration it is even possible to add one more VLAN, for example a DMZ. It can be placed on vSwitch1 together with VM traffic. However, a very common practice is to separate the DMZ completely (at the hardware and software level) from other traffic.

The diagram below shows a configuration that has been implemented many times for many customers, so it will work in your environment too. It is a logical diagram listing all components and the connections between them.

See the links below for different networking configurations:

ESX and ESXi networking configuration for 4 NICs on standard and distributed switches

ESX and ESXi networking configuration for 6 NICs on standard and distributed switches

ESX and ESXi networking configuration for 10 NICs on standard and distributed switches

ESX and ESXi networking configuration for 4 x10 Gbps NICs on standard and distributed switches

ESX and ESXi networking configuration for 2 x 10 Gbps NICs on standard and distributed switches

Artur Krzywdzinski

Artur is a Consulting Architect at Nutanix. He has been using, designing and deploying VMware-based solutions since 2005 and Microsoft solutions since 2012. He specializes in designing and implementing private and hybrid cloud solutions based on VMware and Microsoft software stacks, datacenter migrations and transformations, and disaster avoidance. Artur holds the VMware Certified Design Expert certification (VCDX #077).

  • Habeeb Matar

    Greetings Artur,
    I’m new to VMware and this is an excellent article; I’ve learned a lot from it. How would you configure a physical host with 10 physical ports? I’m going to run demanding Oracle databases on these ESX hosts.
    4x built-in 1GB ports
    4x 1GB ports from Quad Port Adapter
    2x 10GB ports from dual port Adapter for Private iSCSI traffic. This Net exists between the storage systems and ESX servers only.

    My plan is as follows
    vmnic0 & vmnic4 for MGMT
    vmnic1 & vmnic5 for vMotion
    vmnic2,3,6,7 (aggregated) for VM traffic
    vmnic8 & vmnic9 (10GB) for iSCSI & NFS.


    • Hi Habeeb,

      You are very welcome 🙂 I’m glad to hear that someone finds it useful.
      Which ESX(i) version are you planning to install on the physical hosts?


      • John


        How would you configure systems with 4 onboard Broadcom NICs and 1 Intel dual-port NIC? And another configuration with two dual-port Intel NICs and two onboard Broadcom NICs?

  • Habeeb Matar

    Hi Artur,
    I’m planning to use ESXi 5.0. I’ll build a VMware cluster with 4 Dell physical hosts. Each physical host has 10 Physical network ports and 2 HBA ports for SAN.


    • Hi Habeeb,
      Late this evening or tomorrow before noon I will post a networking configuration for 10 network adapters. If you can wait, you will have something to read before your implementation 🙂

      In case you have a particular question regarding your design, I will be glad to help you


  • Craig

    I don’t see any reference to the iSCSI heartbeat vmkernel port. Which port group do you recommend that it be configured on?

    • Hi,
      Thanks for the comment.

      Use vSwitch2 for iSCSI traffic; you can configure it in two ways:
      1. one port group with one vmkernel port and both vmnics active – the iSCSI initiator discovers both controllers on the SAN, giving multipathing and redundancy
      2. two port groups with two vmkernel ports and an active/standby vmnic approach – the iSCSI initiator discovers one controller per vmk
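A sketch of option 2 with esxcli, for the software iSCSI initiator. Note that to bind a vmkernel port to the iSCSI adapter, that port group must end up with exactly one active uplink (the other uplink unused). The adapter name vmhba33 and the IP addresses are placeholders for your own values:

```shell
# Two iSCSI port groups on vSwitch2, one uplink each
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-B
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic5

# One vmkernel port per port group
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static

# Bind both vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
```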


  • David S

    Thank you, Artur, this is very useful as it addresses something that I am trying to resolve at present. I hope you won’t mind me asking at this late stage if you could clarify one issue: why have you allocated only one physical Ethernet port to iSCSI (with a standby), yet more to VM network traffic, which I would expect to generate less traffic than iSCSI? Is this because there is no benefit to grouping iSCSI connections?

    To give you a background to my question, we have the beginnings of a larger system with two ESXi 4.1 Essentials Plus servers and two iSCSI arrays (10 ports on each of these).

    We have three Ethernet zones: Management (low traffic outside of VMware usage), VMs (a DMZ, in effect) and iSCSI. At present these are configured in three bridged groups over six physical Ethernet ports. One connection from each port goes to one of two switches separated into three VLANs, one for each zone. There is no bridging between the switches. The whole network is 1G. The redundancy appears to be working on the iSCSI side, e.g. turning one switch off doesn’t kill it, but it isn’t working on the VM/Management side, where it seems to require a bridging group at the firewall to work at all.

    This is probably a rather basic question and may serve to do no more than highlight a limited understanding, at this stage, of the correct configuration to use.

    Many thanks


    • JasonJames122

      Good question — I had the same thought: Why only allocate 1 physical NIC to iSCSI storage traffic, but 2 NIC’s to VM traffic? Isn’t storage much more demanding than VM traffic?

      We have 6 NIC’s in each host, and our SAN vendor recommends dedicating 4 physical NIC’s (vSwitch0) for storage and 2 NIC’s (vSwitch1) for VM traffic, which makes more sense to me.

      Can you explain why you chose to do it differently here?


      • artur_ka

        Networking setup depends on several factors, such as: how many different traffic types you have, how I/O-demanding your applications are, backup window, backup type (LAN-free or agent in the VM), traffic priority, network type (1 Gbps or 10 Gbps), number of VMs per host, and more. In my case VM traffic is the most important, more important than storage and FT (keep in mind that data is sent over a NIC not only from disk but also from memory).

        In your design, if you dedicate 4 NICs to storage and 2 to VMs, where would you place mgmt and vMotion?

        BTW, storage vendors (like most vendors) overestimate hardware requirements. That’s the beauty of virtualization: you can start with fewer resources – not only RAM and CPU but network too – and if you need more you can add or change it without application downtime.

        • JasonJames122

          Hi Artur, thanks for the quick reply. I see what you mean. Ours is a small environment with 8 VMs running on 2 hosts. There is very little mgmt or vMotion traffic, so we use the same 2 NICs for the VM network and for the VMkernel. We store a whole bunch of document images that feed our primary line-of-business application, so disk performance is important. However, we are getting terrible LAN throughput speeds, so I’m starting to rethink our current config.

          In a perfect world, I would love to have ten NIC’s to work with so I could have 2 for Mgmt & vMotion (opposite active/standby adapters), 4 for VM’s, and 4 for iSCSI storage. But right now I only have 6, and the question is how best to allocate them.

          • artur_ka

            Hi, have you checked where the bottleneck is? Is it on the VM LAN, on the iSCSI LAN, or maybe on the storage itself? How do the CPU, RAM and storage parameters look in esxtop? Any deviations from normal values?
            With 6 pNICs you can set up pretty redundant and resilient networking; in addition, you will have to modify HA advanced settings to avoid false-positive alerts.
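For reference, the HA advanced settings usually involved here are the isolation-address options, so that HA checks an address on a network that is actually redundant instead of only the default gateway (the addresses below are placeholders for your own gateway/storage addresses):

```
das.usedefaultisolationaddress = false
das.isolationaddress0 = 192.168.10.1
das.isolationaddress1 = 192.168.50.1
```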

  • dpsguard

    Thanks so much, Artur, for your great work. I am new to VMware but have strong networking and iSCSI SAN knowledge. I was looking for recommendations around iSCSI networking in VMware and came across your blog, and I immediately subscribed to receive notifications about new posts. Really good articles, and I appreciate your sharing the knowledge. For iSCSI networking, I am trying to use the HP LeftHand P4000 VSA and am simulating it in a home lab running on VMware Workstation, with a couple of ESXi hosts and the VSA VM installed, using CMC to manage it. The VSA can have two NICs, and I can dedicate one for management and the second for iSCSI traffic, but the VSA VM can only connect to one vSwitch (in my case I attached it to vSwitch0 along with the ESXi management port, bound to vmnic0). I then created vSwitch1, added an iSCSI VMkernel port and bound it to vmnic1. Since my test PC running Workstation has only one NIC, which is bridged to vmnic0, I added a Microsoft loopback interface and assigned it an address on the iSCSI subnet, so that vSwitch1 can observe this IP range.

    Now, since two vSwitches cannot be connected, and the VMware iSCSI recommendations seem to require the VSA VM and the iSCSI VMkernel port to be in the same subnet/vSwitch, my management connection to the VSA VM via CMC breaks down. How do I attach the two different NICs of a VM to two different vSwitches? Of course, I don’t want to simply shut down the management NIC of the VSA and manage it over the iSCSI NIC, which does work.


    • dpsguard

      Hello Artur,

      Just wanted to confirm that I was able to figure out how to assign the various vmnics to the different vSwitches by creating different port groups. In my case, I created another VM network under vSwitch1, labeled it iSCSI, and then went back to the VSA and changed its VM network to iSCSI. Now my question is: since it is now a VM network, essentially doing a pass-through of iSCSI traffic from the VSA to the new VM network on the vSwitch, do we still need to create a VMkernel port for iSCSI? I don’t think so, as I am not using iSCSI initiated from the ESXi host itself; instead, the individual VMs on it will be creating their own iSCSI initiators.

      Again, sorry about very basic questions, just started working on the VMware a week ago and have made overall good progress.

      Thanks and look forward to your advice.

      • artur_ka

        Hi, thanks for your comments, let me see how it works with VSA, I will keep you posted.

        • dpsguard

          Hi Artur,

          I also found that I need to set up a simple VM network for my case of iSCSI pass-through to the Windows VMs, and not a VMkernel port, as I am not presenting the volume to the ESXi host. So all is good. Meanwhile, in your setups of active/standby NICs, I assume that if one of the NICs fails, we can take the chance to mix the traffic / share the other NIC, though it will still be logically isolated within its own VLAN, being on an 802.1Q trunk.

          Thanks so much and keep up the good work.

  • Arvinth

    Hello Artur,

    I have a question on this design where storage and FT are configured as active/standby.
    I believe the VMware best practice is to have 2 iSCSI port groups with an active/unused configuration, so that each port group has one active uplink and the other set to unused; overall, 2 paths are obtained for redundancy.

    As per the design here, will it cause any problem with an active/standby connection?