Networking configuration for ESX or ESXi – Part 1

Over the last couple of weeks I have been actively browsing the VMware communities, trying to solve users' technical problems. Many questions and misunderstandings were related to ESXi or ESX networking configuration, networking best practices, or how to properly set up networking on ESX or ESXi servers. I decided to post my recommendations about VMware vSphere 4 networking configuration and vSphere networking best practices.

The fundamental rules in networking are as follows:
  1. Keep ESXi or ESX management traffic separate from other traffic (vMotion, FT, virtual machine)
  2. Avoid a SPOF (Single Point Of Failure) – use redundant physical switches and physical networks for all types of traffic
  3. vMotion traffic should go over a non-routed network (separate IP subnet) for security reasons (vMotion traffic is not encrypted)
  4. vMotion traffic needs a 1 Gbps connection
  5. Storage (NFS, iSCSI) traffic should be isolated from other types of traffic at the physical level (separate pNIC and vLAN ID)
  6. Storage traffic needs at least a 1 Gbps connection to provide enough throughput and low latency to storage
Scenario #1 – 4 NICs (1 Gbps), standard switch used for MGMT, vMotion, VM and iSCSI traffic

ESX/ESXi 4-vmnic networking configuration

The physical host has 4 NICs (two dual-port adapters, each port 1 Gbps) – that hardware configuration is very common in solutions for small and medium businesses.

vSwitch0 has two vmnics (vmnic0 and vmnic2) and two port groups, one for ESXi management (mgmt) and a second for vMotion; each vmnic is connected to a different physical switch (pSwitch1 and pSwitch2). In the mgmt port group vmnic0 is Active and vmnic2 is Standby; in the vMotion port group vmnic0 is Standby and vmnic2 is Active. Using the Active/Standby approach for MGMT and vMotion traffic provides separation at the hardware level between both networks. What does that mean? vMotion traffic consumes as much network bandwidth as it can get; by putting it on a different vmnic we make sure that it will not saturate the mgmt vmnic and affect ESXi management traffic. Saturation of the mgmt vmnic could cause host isolation and automatic (depending on policy) VM restart on other hosts in the cluster (the management interface is used for heartbeat exchange between nodes in a cluster).
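For reference, a minimal sketch of that Active/Standby layout from the command line. This uses ESXi 5.x esxcli syntax (on ESX/ESXi 4 the same settings are made in the vSphere Client or with the esxcfg-* commands), and the vLAN-to-port-group mapping is an assumption: vLAN 10 for mgmt, vLAN 20 for vMotion.

    # attach both uplinks to vSwitch0 (skip any that are already attached)
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2

    # create the two port groups and tag each with its vLAN (assumed mapping)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=mgmt
    esxcli network vswitch standard portgroup set --portgroup-name=mgmt --vlan-id=10
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
    esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

    # mgmt: vmnic0 active, vmnic2 standby
    esxcli network vswitch standard portgroup policy failover set \
      --portgroup-name=mgmt --active-uplinks=vmnic0 --standby-uplinks=vmnic2

    # vMotion: vmnic2 active, vmnic0 standby
    esxcli network vswitch standard portgroup policy failover set \
      --portgroup-name=vMotion --active-uplinks=vmnic2 --standby-uplinks=vmnic0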

vmnic    location     vSwitch    port groups    state                                vLAN IDs   pSwitch
vmnic0   on board     vSwitch0   mgmt/vMotion   active in mgmt, standby in vMotion   10, 20     pSwitch1
vmnic1   on board     vSwitch1   VM/iSCSI       active in iSCSI, standby in VM_LAN   30, 40     pSwitch1
vmnic2   dual NIC 1   vSwitch0   mgmt/vMotion   active in vMotion, standby in mgmt   10, 20     pSwitch2
vmnic3   dual NIC 1   vSwitch1   VM/iSCSI       active in VM_LAN, standby in iSCSI   30, 40     pSwitch2

vSwitch0 should be configured as follows:

  • Promiscuous mode – Reject
  • MAC address changes – Reject
  • Forged Transmits – Reject
  • Load balancing – Route based on the originating virtual port ID (default)
  • Network failover detection – Link status only
  • Notify switches – Yes
  • Failback – No
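These settings can also be applied from the command line; a sketch with ESXi 5.x esxcli syntax (on ESX/ESXi 4 use the vSwitch properties dialogs in the vSphere Client instead):

    # security policy: everything set to Reject
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
      --allow-promiscuous=false --allow-mac-change=false --allow-forged-transmits=false

    # teaming policy: port-ID load balancing, link-status detection, notify on, failback off
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
      --load-balancing=portid --failure-detection=link --notify-switches=true --failback=false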

vSwitch1 is used for VM traffic and storage traffic (here iSCSI). vSwitch1 provides failover and full hardware redundancy because its vmnics are connected to different physical switches, and vLAN trunking is used to give connectivity to both network segments on each vmnic. As in vSwitch0, the Active/Standby approach is used to keep both types of traffic separated at the hardware layer and in the ESXi layer.
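A matching sketch for building vSwitch1 (again ESXi 5.x esxcli syntax; the vLAN mapping is an assumption – vLAN 30 for iSCSI, vLAN 40 for VM_LAN):

    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=30
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM_LAN
    esxcli network vswitch standard portgroup set --portgroup-name=VM_LAN --vlan-id=40

    # iSCSI: vmnic1 active, vmnic3 standby; VM_LAN: the reverse
    esxcli network vswitch standard portgroup policy failover set \
      --portgroup-name=iSCSI --active-uplinks=vmnic1 --standby-uplinks=vmnic3
    esxcli network vswitch standard portgroup policy failover set \
      --portgroup-name=VM_LAN --active-uplinks=vmnic3 --standby-uplinks=vmnic1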

vSwitch1 should be configured as follows:

  • Promiscuous mode – Reject
  • MAC address changes – Reject
  • Forged Transmits – Reject
  • Load balancing – Route based on the originating virtual port ID (default)
  • Network failover detection – Link status only
  • Notify switches – Yes
  • Failback – No
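To verify what is actually configured on either vSwitch, the matching get commands can be used (ESXi 5.x syntax):

    esxcli network vswitch standard policy security get --vswitch-name=vSwitch1
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI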

Implementing the Active/Standby approach is a very efficient way to follow VMware best practices and a good compromise between cost and performance. The example design above provides hardware redundancy and failover for all networks and follows VMware networking best practice standards.

[box type=”info”] See the links below for different networking configurations

ESX and ESXi networking configuration for 4 NICs on standard and distributed switches

ESX and ESXi networking configuration for 6 NICs on standard and distributed switches

ESX and ESXi networking configuration for 10 NICs on standard and distributed switches

ESX and ESXi networking configuration for 4 x10 Gbps NICs on standard and distributed switches

ESX and ESXi networking configuration for 2 x 10 Gbps NICs on standard and distributed switches[/box]

Artur Krzywdzinski

Artur is a Consulting Architect at Nutanix. He has been using, designing and deploying VMware based solutions since 2005 and Microsoft since 2012. He specializes in designing and implementing private and hybrid cloud solutions based on VMware and Microsoft software stacks, datacenter migrations and transformations, and disaster avoidance. Artur has been in the IT industry since 1999 and consulting since 2008. Artur holds the VMware Certified Design Expert certification (VCDX #077).

  • Gabi

    This is a good read and very true; there is a lot of information on the various communities regarding the confusion around this setup.

    I have been unable to find much information on the appropriate setup for using local storage instead of a NAS/SAN.

    Do you have any recommendation for the network setup of a couple of hosts using local storage?

    The reason I say this is because adding a NAS/SAN means a huge increase in cost + complexity for the SME industry using 3 hosts, unless you go for a VSA with VMware Essentials Plus.

    A lot of the SME industry doesn't need all the bells and whistles, and the Essentials package is an ideal solution, as it's the storage APIs that people need, and vCenter really helps with the management.

    So in hindsight, for 2 or 3 hosts, what do you recommend the network configuration to be?

    Thanks,

    G.

    • Hi
      Well, it doesn't matter how many hosts you have; it matters how many network adapters, switches and networks you have. Having local storage (hopefully the disks are in a RAID configuration, to keep redundancy at the storage level) does not drive your network design. Tell me how many pNICs you have and how many networks you want attached to the hosts, and I will give you advice.

      Artur

      • Gabi

        Artur,

        Thanks for the reply; apologies for the delay, I missed your reply.

        You're right, disk/RAID doesn't affect your network design, but storage type does, i.e. local/shared.

        Let's make it simple: let's say 4 NICs (what would your minimum recommendation be?), 2 switches, 2 hosts, and the following:

        192.168.10.0/24 = used for workstations
        192.168.11.0/24 = used for hosts
        192.168.12.0/24 = used for servers

        Thanks,

        G.

        • Hi Gabi,

          No problem, a few questions:
          are the NICs 1 Gbps or 10 Gbps?
          how are they placed in the PCI slots (a single quad-port adapter or 2x dual-port adapters)?
          What VMware license do you have?
          Are you going to use vLANs?

          • Gabi

            1 Gb.

            2x dual port.

            Essentials.

            Yes to vLANs 🙂

            Thank you very much,

            Gabi.

          • Hi Gabi,

            Sorry for the late answer, I was a bit busy over the last few days. This is a really good scenario and very good material for a blog post :-). Trunk all vLANs to all ports. I would create one vSwitch with all four vmnics attached and create 3 port groups (one PG per network), as sketched below:
            PG1 (host): vmnic0 – active; vmnic1, vmnic2, vmnic3 – standby
            PG2 (SRV): vmnic1, vmnic2 – active; vmnic0, vmnic3 – standby
            PG3 (WS): vmnic0, vmnic3 – active; vmnic1, vmnic2 – standby
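
            In esxcli terms that layout would look roughly like this (a sketch, ESXi 5.x syntax; PG1/PG2/PG3 are placeholder names, and vLAN tagging is omitted – set it per port group with 'portgroup set --vlan-id'):

              esxcli network vswitch standard add --vswitch-name=vSwitch0
              for nic in vmnic0 vmnic1 vmnic2 vmnic3; do
                esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=$nic
              done
              for pg in PG1 PG2 PG3; do
                esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=$pg
              done
              # explicit failover order per port group, as listed above
              esxcli network vswitch standard portgroup policy failover set --portgroup-name=PG1 \
                --active-uplinks=vmnic0 --standby-uplinks=vmnic1,vmnic2,vmnic3
              esxcli network vswitch standard portgroup policy failover set --portgroup-name=PG2 \
                --active-uplinks=vmnic1,vmnic2 --standby-uplinks=vmnic0,vmnic3
              esxcli network vswitch standard portgroup policy failover set --portgroup-name=PG3 \
                --active-uplinks=vmnic0,vmnic3 --standby-uplinks=vmnic1,vmnic2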

            Artur

          • Michael

            I would be very interested to see this design as well... I am trying to configure what looks to be a very similar setup to Gabi's, but am having some trouble trying to figure out the best implementation...

          • Hi Mike,
            thanks for comment,
            See my answer to Gabi’s comment

            Cheers
            Artur

  • Jean-Sébastien

    How about a configuration with only 2 NICs (1 Gb)?

    • Hi

      Is it for a PROD or TEST/DEV environment?

      Artur

      • Jean-Sébastien

        For now it will be our PROD environment.

        • Luigi

          Any hint about the 2 x 1 Gb NIC and local storage configuration?

        • ocim

          I am also interested in a 2-NIC config recommendation.

          Thanks for the excellent insights

          ocim

          • Sorry folks for the late response, quite a lot of things are floating around in my life.
            What VMware license do you have, are the connections 10 Gb or 1 Gb, and what type of traffic would you like to put on them?

          • ocim

            On my side, this is a VMware 5.0 free licence server (2 x Xeon 5620, 48 GB RAM) with 2 x 1 Gb connections, on which I will have 3 VMs:

            1 x Windows SBS 2011 for 5-10 users / 400GB
            1 x Linux CentOS cloud service for 3 users / 75GB
            1 x Linux CentOS cloud service for 10-15 users / 1TB

            I had in mind separating the traffic of the SBS and the two Linuxes, putting each on one NIC, but I'm not sure if I should have one switch or two. The Linux cloud service could be quite intensive, as sync happens at any time, but there is also a lot of idle time.

            your comments are more than welcome!

          • You should have one vSwitch with two vmnics (to keep the connection redundant), and if you want to make sure that SBS has enough bandwidth, create two port groups – one for SBS (vmnic0 – active, vmnic1 – standby) and one for the CentOS VMs (vmnic0 – standby, vmnic1 – active) – and set Failback to No.
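
            A sketch of that two-port-group layout in esxcli (ESXi 5.x syntax; the port group names SBS and CentOS are placeholders):

              esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
              esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
              # switch-level policy: failback off
              esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --failback=false
              esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=SBS
              esxcli network vswitch standard portgroup policy failover set --portgroup-name=SBS \
                --active-uplinks=vmnic0 --standby-uplinks=vmnic1
              esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=CentOS
              esxcli network vswitch standard portgroup policy failover set --portgroup-name=CentOS \
                --active-uplinks=vmnic1 --standby-uplinks=vmnic0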

          • ocim

            Dear Artur, thank you so much for your recommendations, that’s of great help.
            Cheers,
            ocim

          • you are very welcome,
            BTW, in ESXi 5 free you can assign up to 32 GB RAM to VMs.

          • ocim

            Yes, I know about the limitation. I am thinking of investing in a 5.1 Essentials license so I can add 2 servers in the coming year.

            Do you think migration from 5.0 free to 5.1 Essentials is a good path? Any worries I should have?

            ocim

  • Dom

    So what if you have 4 physical NICs (max) and want to maximize bandwidth for iSCSI? While I know it's not best practice, what about vSwitch0 being mgmt/vMotion with only one NIC, and vSwitch1 being VM/iSCSI plus a second mgmt IP?

    • It should work; just use the active/passive approach for the vmnics in the port groups.
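
      For the second management IP on vSwitch1, something along these lines (a sketch, ESXi 5.x syntax; the port group name, vmk number and addresses are placeholders):

        esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=mgmt2
        esxcli network ip interface add --interface-name=vmk2 --portgroup-name=mgmt2
        esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static \
          --ipv4=192.168.11.12 --netmask=255.255.255.0
        # on 5.1+ the interface can then be tagged for management traffic:
        # esxcli network ip interface tag add -i vmk2 -t Management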

  • Chuck

    Why would you have all the Security settings set to Reject?

    For MAC address changes and Forged Transmits, is there a concern with Accept? I do have Exchange 2010 and use a load-balancer VM.

    • artur_ka

      It is the default in all my designs, unless there is a need to enable it (like your case, or when some VMs have custom MAC addresses, or for IDS systems). In cases where some of the security settings need to stay enabled, I usually create a separate vSwitch and connect those VMs to it.

  • lferrara

    Can you post the vLAN configuration on the physical switch side? I will really appreciate it.

    • What do you mean exactly? How to configure the trunk on a port, or which vLANs to configure on the ports?

  • Massimo

    Great post .

    Sorry for my english.

    Everything is clear, but can you please clarify why in vSwitch1 you specify “Trunk” for vmnic1 and for vmnic3? Is it a specific option of ESXi, or (surely) something that I do not understand?

    If both connections of vSwitch1 are trunked on the physical switch (vLANs 30 and 40), what do you mean by this “Trunk” label (which is not present on the vmnics of vSwitch0) on the vmnics of vSwitch1?

    I hope my question is clear.

    Thanks in advance.

    Massimo

    • Hi Massimo, Trunk means you should TRUNK both vLANs on the physical switch ports.

      • Massimo

        Hi Artur and thanks for your reply.

        I understand this, but why only in vSwitch1 and not on vSwitch0?

        Looking at your schema, you already trunk both physical connections on both physical switches.

        What and where is the difference between the two vSwitches?

        Thanks for your clarification.

        • vSwitch0 is for mgmt traffic (vMotion and mgmt) – a trunk for two vLANs (10, 20).
          vSwitch1 is for VM and storage – a trunk for two vLANs only (30, 40).
          You should trunk only two vLANs per physical port.

          • Massimo

            I have already understood and applied the configuration that you planned, without any issue, but my misunderstanding regards what I see in your image Blog_4_vmNIC1.jpg.

            In this image, I see written:
            – vmnic0 and vmnic2 on vSwitch0
            – vmnic1 Trunk and vmnic3 Trunk on vSwitch1. Here is my question: why do you add the text “Trunk” under vmnic1 and under vmnic3? Is it only an “error”, or do you mean to indicate something that I do not understand?
            I hope to be clear now.

            Thanks another time.

          • Hi Massimo, I see it now, it is a mistake, I have to fix it. Thanks for pointing it out.

          • Massimo

            Hi Artur,

            thanks to you for your great guide.

  • Stan

    Can you post the configuration of the 2 physical switches? I am right now installing my soon-to-be production environment and have run into doubts about the best switch configuration. Thanks

  • Drew Marold

    How would you divvy it up if you had 2 x 1G and 2 x 10G NICs? In my specific case, they go to 2 sets of switches: 10g-1 and 1g-1 go to switch A, 10g-2 and 1g-2 go to switch B. And if it matters, I'm using NFS to mount the datastore from a NetApp.
    Thanks

  • Artur, a big thank you! I have found your articles very useful as a beginner to VM network design. Keep up the good work!

  • Doron Livny

    Hello Artur,

    Great article!

    I have a question regarding this setup (4 NICs) – I'm planning to upgrade our setup (vSphere 6 Essentials Plus, 3 hosts) from 6 x 1-gig NICs [per host] to 4 NICs, with 2 x 10-gig and 2 x 1-gig, in a scenario similar to the one depicted in “part 1”.

    The 10-gig NICs will be used for iSCSI_vLAN, VM_vLAN and Mgt, while the 1-gig NICs would be dedicated to vMOT. My question is regarding the physical switches: the network is already “vLAN'd”, with iSCSI on an L2 vLAN and VMs on several L3 vLANs. I have a single C6500 chassis, so everything is controlled by a single SUP. Since iSCSI and VMs are sharing the 10-gig NICs, I was thinking of port-channeling those ports on the switch, but PO is not recommended (or supported) for iSCSI. Since this is not a VMware Enterprise Plus license (Essentials Plus), no distributed switching is available, so I cannot use LACP. Not sure what would be the best course of action here. It looks like an active/passive configuration to me, and so it doesn't require any type of HA/LB on the switchports, as the NICs themselves are responsible for FO. But that leaves us with no LB, AND I'm not sure how this will play with MPIO for the iSCSI network.

    Would love to hear your take on this.
    TY,

    /DL