Sometimes you have to share a VMFS datastore across two different clusters for one reason or another. As it is not recommended to keep it shared permanently, sooner or later you will need to remove the VMFS datastore from one of the clusters.
On vSphere 4.X this is a bit complicated, because you first have to mask the LUN and only then remove it from the cluster zoning. If you remove the LUN from the zoning without masking it on the ESX servers first, ESX will keep trying to reconnect to the disconnected (unzoned) LUN, which will eventually make the server completely unresponsive. The only way to bring the server back to life is a hard reset.
In the example below I will mask and remove datastore DATA02.
vSphere 4.X
- Enter the host in Maintenance Mode – just to be safe here 🙂
- Map the LUN identifier to its logical name on the ESX server: esxcfg-scsidevs --vmfs
[root@esx01 ~]# esxcfg-scsidevs --vmfs
naa.60a980004176324c4f5d435752724b48:1 /dev/sde1 515ed704-d5b66e37-e0d6-441ea150f0cb DATA01
naa.60a980004176324c4f5d435752724b4a:1 /dev/sdd1 515ed71a-220a9676-b89a-441ea1501443 DATA02
naa.600508b1001c2d3dda79c7df2988a997:5 /dev/sda5 51278577-f3ee919e-7e37-441ea1501444 local
naa.60a980004176324c615d433036515979:1 /dev/sdf1 515ed6ae-7b90e713-87a1-441ea1501443 DATA03
naa.60a980004176324c615d433036515977:1 /dev/sdg1 515ed6a2-461bb276-d196-441ea150f0cb DATA04
[root@esx01 ~]#
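If you only need the NAA identifier for a single datastore, you can filter the listing above and strip the partition suffix with a quick one-liner (a sketch, assuming the datastore label is DATA02 as in this example):
esxcfg-scsidevs --vmfs | grep DATA02 | awk '{print $1}' | cut -d: -f1
For this host it should print naa.60a980004176324c4f5d435752724b4a, the identifier used in the masking steps below.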
- get more details about the LUN by executing: esxcfg-mpath -L | grep naa.<LUN-identifier>. The output shows:
- the HBA adapter the LUN is zoned to
- the LUN ID
- the target number
As you can see below, the LUN is visible on both adapters, vmhba1 and vmhba2, on L3 (LUN ID 3). We have to mask the LUN on both adapters with LUN ID 3.
[root@esx01 ~]# esxcfg-mpath -L | grep naa.60a980004176324c4f5d435752724b4a
vmhba2:C0:T3:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 3 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09868fc09667
vmhba2:C0:T8:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 8 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09869fc09667
vmhba1:C0:T4:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 4 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09878fc09667
vmhba1:C0:T9:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 9 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09879fc09667
vmhba2:C0:T2:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 2 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09848fc09667
vmhba2:C0:T7:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 7 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09849fc09667
vmhba1:C0:T3:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 3 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09858fc09667
vmhba1:C0:T8:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 8 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09859fc09667
vmhba2:C0:T1:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 1 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09828fc09667
vmhba2:C0:T6:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 6 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09829fc09667
vmhba1:C0:T2:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 2 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09838fc09667
vmhba1:C0:T7:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 7 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09839fc09667
vmhba1:C0:T1:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 1 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09818fc09667
vmhba1:C0:T6:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba1 0 6 3 NMP active san fc.50014380072857b1:50014380072857b0 fc.500a09808fc09667:500a09819fc09667
vmhba2:C0:T4:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 4 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09888fc09667
vmhba2:C0:T9:L3 state:active naa.60a980004176324c4f5d435752724b4a vmhba2 0 9 3 NMP active san fc.50014380072857b3:50014380072857b2 fc.500a09808fc09667:500a09889fc09667
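The full path listing is hard to read, so a rough summary one-liner helps confirm which adapters see the LUN and on which LUN ID (a sketch, relying on the column order shown above: adapter name in column 4, LUN number in column 7):
esxcfg-mpath -L | grep naa.60a980004176324c4f5d435752724b4a | awk '{print $4, "LUN", $7}' | sort | uniq -c
Here it should report eight paths on vmhba1 and eight on vmhba2, all on LUN 3.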
- list all claim rules loaded on host: esxcli corestorage claimrule list
[root@esx01 ~]# esxcli corestorage claimrule list
Rule Class  Rule   Class    Type       Plugin     Matches
MP          0      runtime  transport  NMP        transport=usb
MP          1      runtime  transport  NMP        transport=sata
MP          2      runtime  transport  NMP        transport=ide
MP          3      runtime  transport  NMP        transport=block
MP          4      runtime  transport  NMP        transport=unknown
MP          101    runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          101    file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*
[root@esx01 ~]#
- add claim rules for both adapters. I give the rules numbers 192 and 193, and I mask all LUNs with LUN ID 3 on both adapters (a scripted variant is sketched after the note below):
- esxcli corestorage claimrule add --rule 192 -t location -A vmhba1 -L 3 -P MASK_PATH
- esxcli corestorage claimrule add --rule 193 -t location -A vmhba2 -L 3 -P MASK_PATH
[box type="warning"] NOTE: In this example the rules are numbered 192 and 193. Your environment may differ, depending on the rule numbers you choose and the LUN ID you identified in the step above.[/box]
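If you prefer to script this step, the two add commands can be wrapped in a small loop (a sketch; the rule numbers 192/193, the adapters vmhba1/vmhba2 and LUN ID 3 are specific to this example):
rule=192
for hba in vmhba1 vmhba2; do
  # one MASK_PATH rule per adapter, masking every path to LUN ID 3
  esxcli corestorage claimrule add --rule $rule -t location -A $hba -L 3 -P MASK_PATH
  rule=$((rule + 1))
done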
- load the claim rules into the system: esxcli corestorage claimrule load
- list rules on the system: esxcli corestorage claimrule list
[root@esx01 ~]# esxcli corestorage claimrule list
Rule Class  Rule   Class    Type       Plugin     Matches
MP          0      runtime  transport  NMP        transport=usb
MP          1      runtime  transport  NMP        transport=sata
MP          2      runtime  transport  NMP        transport=ide
MP          3      runtime  transport  NMP        transport=block
MP          4      runtime  transport  NMP        transport=unknown
MP          101    runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          101    file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          192    runtime  location   MASK_PATH  adapter=vmhba1 channel=* target=* lun=3
MP          192    file     location   MASK_PATH  adapter=vmhba1 channel=* target=* lun=3
MP          193    runtime  location   MASK_PATH  adapter=vmhba2 channel=* target=* lun=3
MP          193    file     location   MASK_PATH  adapter=vmhba2 channel=* target=* lun=3
MP          65535  runtime  vendor     NMP        vendor=* model=*
- Run this command on all hosts to unclaim the volume in question:
- esxcli corestorage claiming reclaim -d naa.<LUN-identifier>
- rescan both adapters
- verify that the LUN is no longer visible on the system
[root@esx01 ~]# esxcli corestorage claiming reclaim -d naa.60a980004176324c4f5d435752724b4a
[root@esx01 ~]# esxcfg-rescan vmhba1
[root@esx01 ~]# esxcfg-rescan vmhba2
[root@esx01 ~]# esxcfg-scsidevs --vmfs
naa.60a980004176324c4f5d435752724b48:1 /dev/sde1 515ed704-d5b66e37-e0d6-441ea150f0cb 0 DATA01
naa.600508b1001c2d3dda79c7df2988a997:5 /dev/sda5 51278577-f3ee919e-7e37-441ea1501444 0 local
naa.60a980004176324c615d433036515979:1 /dev/sdf1 515ed6ae-7b90e713-87a1-441ea1501443 0 DATA03
naa.60a980004176324c615d433036515977:1 /dev/sdg1 515ed6a2-461bb276-d196-441ea150f0cb 0 DATA04
[root@esx01 ~]#
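A quick check you can repeat on every host after the rescans, to confirm the masked volume really disappeared from the VMFS listing (a sketch, using the NAA identifier of DATA02 from this example):
esxcfg-scsidevs --vmfs | grep naa.60a980004176324c4f5d435752724b4a || echo "DATA02 is no longer visible"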
Now you can remove the LUN from the zoning on the storage system for this particular cluster. Once the LUN has been removed from the zoning, delete the MASK rules from the ESX hosts.
- Next step is to remove both claim rules from the host:
- esxcli corestorage claimrule delete --rule 192
- esxcli corestorage claimrule delete --rule 193
- Load the rules again and list them to verify that the mask rules are gone:
- esxcli corestorage claimrule load
- esxcli corestorage claimrule list
[root@esx01 ~]# esxcli corestorage claimrule delete --rule 192
[root@esx01 ~]# esxcli corestorage claimrule delete --rule 193
[root@esx01 ~]# esxcli corestorage claimrule load
[root@esx01 ~]# esxcli corestorage claimrule list
Rule Class  Rule   Class    Type       Plugin     Matches
MP          0      runtime  transport  NMP        transport=usb
MP          1      runtime  transport  NMP        transport=sata
MP          2      runtime  transport  NMP        transport=ide
MP          3      runtime  transport  NMP        transport=block
MP          4      runtime  transport  NMP        transport=unknown
MP          101    runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          101    file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*
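As an extra sanity check, you can filter the rule list for MASK_PATH entries; only the default rule 101 (DELL Universal Xport) should remain (a sketch):
esxcli corestorage claimrule list | grep MASK_PATH
If rules 192 or 193 still show up on a host, repeat the delete and load commands there.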
vSphere 5.X
On vSphere 5.X it is much easier: you don't have to play with the command line and claim rules. Go to the ESXi host, right-click the datastore in the storage settings and choose Unmount. Then remove the LUN from the zoning and that's it.
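For reference, the same steps can also be done from the ESXi 5.X command line if you prefer; a sketch using the 5.X esxcli namespaces, reusing the datastore label and NAA identifier from the 4.X example above:
esxcli storage filesystem unmount -l DATA02
esxcli storage core device set --state=off -d naa.60a980004176324c4f5d435752724b4a
esxcli storage core adapter rescan --all
The first command unmounts the datastore, the second detaches the device so the host stops expecting it, and the rescan cleans up after the LUN has been removed from the zoning.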