vMotion VM between vSphere clusters

A few days ago I had a discussion with one of my colleagues about the possibility of migrating VMs between vSphere clusters. I tried to persuade him that it is possible and that I have done that operation hundreds of times, unfortunately without success. So I decided to write a blog article to resolve all doubts.

vMotion VM between vSphere clusters – vSphere 4.X and vSphere 5.X

Yes, you can migrate VMs between vSphere clusters (even between different versions) as long as the conditions below are met:

  1. Both vSphere clusters must be managed by a single vCenter Server.
  2. Both vSphere clusters must be within a single DataCenter object in vCenter Server.
  3. Pre vSphere 5.1 – both clusters must have access to the same datastore (see the sketch after this list).
  4. The vMotion network is stretched between the clusters.
  5. Processors must be from the same vendor (Intel or AMD) and family (model) on both clusters, or both clusters must have a common EVC baseline applied.
  6. The Virtual Machine hardware version is supported by the hypervisor – very useful during migration to new hardware or a new version of the vSphere platform.
  7. If you have a vDS implemented, make sure the dvPortgroup spans both clusters.
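
A minimal pyVmomi sketch of how conditions 3 and 5 can be verified through the vSphere API is below. The vCenter address, credentials and cluster names are placeholders, so treat it as a starting point rather than a finished tool:

```python
# Sketch: verify cross-cluster vMotion prerequisites with pyVmomi.
# vc01.example.com, the credentials and the cluster names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def get_cluster(content, name):
    """Find a cluster by name anywhere under the root folder."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host='vc01.example.com', user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()

src, dst = get_cluster(content, 'CL02'), get_cluster(content, 'CL01')

# Condition 3 (pre-5.1): both clusters must see at least one common datastore.
shared = set(src.datastore) & set(dst.datastore)
print('Shared datastores:', [d.name for d in shared] or 'NONE')

# Condition 5: same CPU vendor/family, or a common EVC baseline (None = EVC off).
print('Source EVC mode:', src.summary.currentEVCModeKey)
print('Destination EVC mode:', dst.summary.currentEVCModeKey)

Disconnect(si)
```
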
Use case #1 – vMotion between two vSphere 4.1 clusters
  • Both CL01 and CL02 clusters are managed by a single instance of vCenter Server and sit under a single DataCenter object in vCenter
  • All ESXi hosts from the CL01 and CL02 clusters are connected to a single datastore
  • Validation of the Ubuntu01 VM's vMotion between clusters CL02 and CL01 succeeded

  • vMotion completed successfully

  • After vMotion, the Ubuntu01 VM is running on the CL01 cluster
As you can see from the above use case, it is possible to vMotion VMs between clusters without any problems.
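
The same migration can also be driven through the API. Here is a hedged sketch that relocates Ubuntu01 onto a host in CL01 without touching its storage; it assumes the connection and the get_cluster() helper from the previous snippet:

```python
# Sketch: compute-only vMotion of Ubuntu01 from CL02 to CL01.
# Reuses `content` and get_cluster() from the previous snippet.
import time
from pyVmomi import vim

def find_vm(content, name):
    """Find a VM by name anywhere under the root folder."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(v for v in view.view if v.name == name)
    finally:
        view.Destroy()

vm = find_vm(content, 'Ubuntu01')
dst = get_cluster(content, 'CL01')

spec = vim.vm.RelocateSpec()
spec.host = dst.host[0]        # any connected host in CL01
spec.pool = dst.resourcePool   # the destination cluster's root resource pool
# No spec.datastore set: the VM stays on the shared datastore (compute-only).

task = vm.RelocateVM_Task(spec)
while task.info.state not in (vim.TaskInfo.State.success,
                              vim.TaskInfo.State.error):
    time.sleep(1)              # poll until vCenter finishes the task
print('vMotion result:', task.info.state)
```
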
Use case #2 – vMotion between vSphere 4.1 and vSphere 5.1 clusters

This use case is typically used for migration to new hardware or a new software version (e.g. from vSphere 4.1 to vSphere 5.1).

  • vCenter VC01 has two clusters: CL01 – vSphere 5.1 and CL02 – vSphere 4.1

  • Cluster CL02 has two VMs

  • vMotion two VMs from CL02 to CL01; validation succeeded

  • vMotion successful

  • Both VMs were vMotioned to CL01 – vSphere 5.1 – without outage
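
Condition 6 from the list above matters most in this use case: before moving VMs from a 4.1 cluster onto 5.1 hosts, it is worth confirming that the destination supports each VM's hardware version. A short sketch, reusing the helpers from the earlier snippets; VM01 and VM02 are hypothetical names, since the use case does not name CL02's two VMs:

```python
# Sketch: check hardware version support (condition 6) before migrating.
# VM01/VM02 are hypothetical names for CL02's two VMs.
dst = get_cluster(content, 'CL01')
supported = {d.key for d in dst.environmentBrowser.QueryConfigOptionDescriptor()}

for name in ('VM01', 'VM02'):
    vm = find_vm(content, name)
    # vm.config.version is e.g. 'vmx-07' for a VM created on vSphere 4.1
    print(name, vm.config.version,
          'supported on CL01:', vm.config.version in supported)
```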

Artur Krzywdzinski

Artur is a Consulting Architect at Nutanix. He has been using, designing and deploying VMware based solutions since 2005 and Microsoft solutions since 2012. He specializes in designing and implementing private and hybrid cloud solutions based on VMware and Microsoft software stacks, datacenter migrations and transformations, and disaster avoidance. Artur holds the VMware Certified Design Expert certification (VCDX #077).

  • This is a really good article!!

    • artur_ka

      Thanks

  • punit dambiwal

    Good article, thanks buddy 🙂

    • artur_ka

      thanks

  • Faisal Ghulam

    Good article! Have you used RVTools in those pics?

    • artur_ka

      Thanks, the pics are from the vSphere Client and the vSphere Web Client.

  • Sunny

    Great article! You forgot to mention the restriction on migrating a virtual machine that is connected to a distributed switch.

    If the virtual machine is connected to a distributed portgroup, that dvPortgroup needs to exist on the destination cluster as well. In other words, the dvswitch has to span both clusters.
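
Sunny's restriction can be checked programmatically as well. A sketch along the lines of the earlier snippets (same helpers, with Ubuntu01 and CL01 as example names):

```python
# Sketch: confirm every distributed portgroup the VM uses is visible
# from the destination cluster, i.e. the dvSwitch spans both clusters.
vm = find_vm(content, 'Ubuntu01')
dst = get_cluster(content, 'CL01')
dst_networks = set(dst.network)   # networks reachable from CL01's hosts

for net in vm.network:
    if isinstance(net, vim.dvs.DistributedVirtualPortgroup):
        print(net.name, 'present on destination:', net in dst_networks)
```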

  • Ong kok Chong

    Hi community member,

    Good evening to you. May I know how many live migration instances can be executed when migrating VMs from cluster A to cluster B?

    Many thanks in advance for your advice.

    Ong Kok Chong

  • junlun

    Hello! Thanks for sharing. I have already done this according to your article. However, what if I have Nexus 1000V installed in the clusters? I am now doing a cluster upgrade from 5.0 to 5.1, including ESXi and vCenter, and there are still many applications working on the cluster that can't stop. Do you have any good ideas?

    What I am supposed to do is first upgrade vCenter to 5.1, then move all the VMs on one node to the others so that the node can be put into maintenance mode. After that, I reinstall that node with the ESXi 5.1 ISO and install the Nexus VIB. However, I don't know what the next step is, as I don't have any experience with Nexus 1000V VSM migration.

    • Hi there, hard to say, I don't have experience with Nexus. Maybe someone from the community could help you out. Let me tweet it out and we will see.

      • junlun

        Thanks.
        I am from mainland China and can't log in to Twitter to follow you.

        • Such a shame, but you could sign up for email notifications – if that is not forbidden in China.

          • junlun

            Thanks for the reminder.

    • Romain DECKER

      Hi,
      First, check the compatibility of your 1000V version against the 5.1 release you want to go to. It will tell you whether you need to upgrade your VSM/VEM.
      The process you are describing is mainly correct.

      If you want to upgrade the 1000V as well, use the “Cisco Nexus 1000V and VMware ESX/ESXi Upgrade Utility”, which quickly explains the process in a workflow: http://www.cisco.com/web/techdoc/n1kv/upgrade/utility/n1kvmatrix.html

      • Thanks for the comment, Romain

      • junlun

        Thanks Romain, that helps me a lot.
        I checked and found that I don't need to upgrade my VSM/VEM this time.
        According to this material, I think the next step after the process I posted is just to migrate the VSM and the other VMs to the new cluster of ESXi 5.1 servers. Is that right?

        • Romain DECKER

          Yes, I think so (but I don't know exactly what you have already done).

          Try with 1 ESX host and 1 VM if you are not sure of your procedure.

          • junlun

            Thanks, I will try.

          • let us know how it goes, please

          • junlun

            Sorry dude, I am on a business trip these days and don't have time to try it. I will post how it goes here as soon as I get the result.

          • junlun

            I am back. The last two weeks I was so busy that I seldom accessed the internet. OK, let's set the ball rolling. After discussing the upgrade with my partners, we concluded that the Nexus 1000V was a vital factor that could affect the upgrade process. Therefore, we migrated the virtual network back to the vSphere standard vSwitch so that we could upgrade vCenter and ESXi as usual. Afterwards, we reinstalled the 5.1 VIB on the new ESXi servers. And last, we migrated the virtual network back to the Nexus 1000V distributed network. I know this method is much more complicated than the usual way, but the running application is really important, and that's why we chose the way we did.

          • thanks for sharing

  • Hi. Another VERY important thing to check before doing the vMotion: the ESXi 5.1 datastores need to have exactly the same multipathing setup as on ESXi 4.1. If, for example, the datastores on ESXi 4.1 are configured with the Fixed path policy and the 5.1 ones with Round Robin, the VM will crash and won't boot normally anymore.

    • Thanks for the hint, Fred

    • Vaseem Mohammed

      Does this apply to ESXi 5.5 and 6 as well?
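
Fred's multipathing caveat can be verified the same way. A sketch that prints the path selection policy (e.g. VMW_PSP_FIXED vs VMW_PSP_RR) for every LUN on one host from each cluster, reusing the helpers above:

```python
# Sketch: compare the per-LUN path selection policy across clusters.
for cluster_name in ('CL02', 'CL01'):
    host = get_cluster(content, cluster_name).host[0]
    mpath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    print('--', cluster_name, '/', host.name)
    for lun in mpath.lun:
        print('  ', lun.id, lun.policy.policy)
```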

  • Vikram

    Just to keep this updated... for migrating a VM between different clusters, only the conditions below need to be satisfied; the rest of the procedure is very simple.
    1. Both clusters should have CPUs from the same vendor (Intel or AMD)
    2. Both clusters should have the same VLANs available
    If the above two are satisfied, then with a little tweaking you will be good to go migrating VMs between clusters “online”.