PowerHA SystemMirror 6.1 to 7.1.3 Rolling Migration

Before performing a PowerHA SystemMirror rolling or snapshot migration from 6.1 to 7.1.3, the prerequisites have to be fulfilled.

This document explains how to prepare for a PowerHA SystemMirror migration from 6.1 to 7.1.3.

First, we will discuss the PowerHA SystemMirror migration types and the prerequisites.

Types of PowerHA Migrations:
1. Offline Migration
2. Rolling Migration
3. Snapshot Migration

Offline Migration:

* In this migration the entire PowerHA cluster is brought down, the AIX operating system level and the latest PowerHA 7.1.3 version are installed/upgraded, and then the cluster services are restarted one node at a time.

* Strictly speaking this is not a real cluster migration but a normal upgrade, because all the cluster nodes are down at the time of the PowerHA migration.

Rolling Migration:

* This is a real cluster migration. During the rolling migration the workload is moved to another node, and the rolling migration prerequisites are completed using the clmigcheck utility.

* Then the AIX operating system level and PowerHA SystemMirror 7.1.3 are upgraded, cluster services are started, and the workload is moved to the node running the newer version.

* The same steps are then followed on each node in the cluster.

Snapshot Migration:

* This is not a real migration, because the previous version of PowerHA SystemMirror is removed and the new version, PowerHA SystemMirror 7.1.3.0, is installed.

* The PowerHA SystemMirror 7.1.3 interfaces are then configured, and the remaining configuration is restored from an existing configuration snapshot through the command line or the smitty menu.

PowerHA SystemMirror 6.1 to 7.1.3 Migration Prerequisites

1. /etc/hosts file verification.

For example, in a two-node HACMP cluster, the /etc/hosts file on each node should include:

NODE1+NODE2:# cat /etc/hosts
192.168.1.1 NODE1
192.168.1.2 NODE2
10.10.10.1 NODE1BOOT
10.10.10.2 NODE2BOOT
192.168.1.100 NODESVC

(OR) If a DNS domain is configured, make the host entries as in the example below.

NODE1+NODE2:# cat /etc/hosts
192.168.1.1 NODE1.xxx.yyy.com NODE1
192.168.1.2 NODE2.xxx.yyy.com NODE2
10.10.10.1 NODE1BOOT
10.10.10.2 NODE2BOOT
192.168.1.100 NODESVC
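
To confirm that name resolution is consistent, each label can be checked with the host command; every name should return the same address on both nodes (the names below are the example values from this document):

NODE1+NODE2:# host NODE1
NODE1+NODE2:# host NODE2
NODE1+NODE2:# host NODESVC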

2. Verify there is only one uncommented line in the /etc/netsvc.conf file.

Make sure to take a backup of /etc/netsvc.conf, remove all entries from the file, and keep only the single line shown below.

NODE1+NODE2:# cat /etc/netsvc.conf
hosts = local4,bind4
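
Before editing the file, a simple copy is enough for the backup (the backup file name is only an example):

NODE1+NODE2:# cp -p /etc/netsvc.conf /etc/netsvc.conf.pre_migration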

3. Verify that the latest PowerHA 6.1 Service Pack is installed/updated. The recommended level is 6.1.0.15 or later (6.1.0.16 in this example).

NODE1+NODE2:# lslpp -l | grep -i .cluster* | grep -i 6.1.0.16
cluster.es.cspoc.cmds    6.1.0.16 COMMITTED ES CSPOC Commands
cluster.es.server.events 6.1.0.16 COMMITTED ES Server Events
cluster.es.server.rte    6.1.0.16 COMMITTED ES Base Server Runtime
cluster.es.server.utils  6.1.0.16 COMMITTED ES Server Utilities
cluster.es.server.rte    6.1.0.16 COMMITTED ES Base Server Runtime
cluster.es.server.utils  6.1.0.16 COMMITTED ES Server Utilities

4. Verify the ODM CuAt inet0 hostname value and the HACMPnode COMMUNICATION_PATH value.

For a CAA (Cluster Aware AIX) cluster, the ODM CuAt inet0 hostname and the HACMPnode COMMUNICATION_PATH must refer to the same host.

NODE1:# lsattr -El inet0 | grep -i hostname
hostname NODE1 Host Name True
NODE2:# lsattr -El inet0 | grep -i hostname
hostname NODE2 Host Name True
NODE1+NODE2:# odmget HACMPnode | grep -p COMM
HACMPnode:
      name = "NODE1"
      object = "COMMUNICATION_PATH"
      value = "192.168.1.1"
      node_id = 3
      node_handle = 3
      version = 11

HACMPnode:
     name = "NODE2"
     object = "COMMUNICATION_PATH"
     value = "192.168.1.2"
     node_id = 4
     node_handle = 4
     version = 11

Note: In PowerHA 6.1 the COMMUNICATION_PATH may be shown as an IP address; that is OK.
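
To confirm that each COMMUNICATION_PATH IP address maps back to the hostname reported by inet0, a reverse lookup can be used (a quick sanity check using the example addresses from this document):

NODE1:# host 192.168.1.1
NODE2:# host 192.168.1.2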

5. Create a PowerHA 6.1 cluster snapshot and an AIX mksysb or alt_disk_copy backup of rootvg.

NODE1+NODE2:# alt_disk_copy -d hdisk1 -B
(or)
NODE1+NODE2:# mksysb -X /mnt/mksysb_hostname-Full-bkp
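
For the cluster snapshot itself, one common approach on PowerHA 6.1 is the clsnapshot utility (or the equivalent smitty snapshot panel). The snapshot name and description below are only examples, so verify the exact options on your level before relying on them:

NODE1:# /usr/es/sbin/cluster/utilities/clsnapshot -c -n pre_713_migration -d "Cluster snapshot before 7.1.3 migration"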

6. Ensure the cluster has no pending synchronization while it is still on PowerHA 6.1.

NODE1+NODE2:# odmget HACMPcluster | grep handle
handle = 0

If the handle values differ between the nodes, there are unsynchronized cluster changes that must be resolved immediately.

Note: If there is any difference, identify the configuration changes and re-synchronize the cluster before starting, since Verify/Sync cannot be performed during the migration.
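
If a re-synchronization is needed, it is done while still on 6.1, before the migration starts. A typical route is the smitty menu (a sketch of the usual path; the exact menu wording can vary slightly between service packs):

NODE1:# smitty hacmp
   Extended Configuration
      -> Extended Verification and Synchronization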

7. Install the PowerHA migration CAA filesets and CAA/RSCT bundles.

For Snapshot migration:

* Stop PowerHA on all the nodes and carefully bring down the resource groups.

* If PowerHA is configured to start automatically at system boot, temporarily disable the entry in /etc/inittab.

NODE1+NODE2:# cat /etc/inittab | grep -i hacmp
NODE1+NODE2:# rmitab hacmp

* Install the AIX/CAA filesets for PowerHA 7.1 on all nodes (if not already installed).

For Rolling migration:

* Install the AIX/CAA filesets for PowerHA 7.1 on one node at a time (if not already installed); after completing the PowerHA migration on that node, follow the same procedure on every other node.

The recommended AIX operating system level is:

AIX 6.1 TL09 SP1, or higher
(OR)
AIX 7.1 TL03 SP1, or higher
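
The current AIX level on each node can be confirmed with oslevel before deciding whether an operating system update is required; the output should report 6100-09 SP1 or later, or 7100-03 SP1 or later:

NODE1+NODE2:# oslevel -s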

The following AIX/CAA and RSCT filesets should be installed/updated before starting the PowerHA migration.

NODE1+NODE2:# lslpp -l | grep -i bos.ahafs
bos.ahafs 6.1.9.100 COMMITTED Aha File System
bos.ahafs 6.1.9.100 COMMITTED Aha File System
NODE1+NODE2:# lslpp -l | grep -i bos.cluster.rte
bos.cluster.rte 6.1.9.100 COMMITTED Cluster Aware AIX
bos.cluster.rte 6.1.9.100 COMMITTED Cluster Aware AIX
NODE1+NODE2:# lslpp -l | grep -i devices.common.IBM.storfwork.rte
devices.common.IBM.storfwork.rte
devices.common.IBM.storfwork.rte
NODE1+NODE2:# lslpp -l | grep -i clic
clic.rte.kernext 4.10.0.1 COMMITTED CryptoLite for C Kernel
clic.rte.lib 4.10.0.1 COMMITTED CryptoLite for C Library
clic.rte.kernext 4.10.0.1 COMMITTED CryptoLite for C Kernel
NODE1+NODE2:# lslpp -l | grep -i bos.clvm.enh
bos.clvm.enh 6.1.9.45 APPLIED Enhanced Concurrent Logical
bos.clvm.enh 6.1.0.0 COMMITTED Enhanced Concurrent Logical
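
If any of these filesets are missing, they can be installed from the AIX base/update media with installp. The directory below is only an assumed mount point for your installation images:

NODE1+NODE2:# installp -acgXYd /mnt/aix_install bos.cluster.rte bos.ahafs bos.clvm.enh devices.common.IBM.storfwork.rte clic.rte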

8. Install/update the RSCT (Reliable Scalable Cluster Technology) filesets.

NODE1+NODE2:# lslpp -l | grep -i rsct.core.rmc
rsct.core.rmc 3.2.0.5 APPLIED RSCT Resource Monitoring and
rsct.core.rmc 3.2.0.5 APPLIED RSCT Resource Monitoring and
NODE1+NODE2:# lslpp -l | grep -i rsct.basic
rsct.basic.hacmp 3.2.0.2 APPLIED RSCT Basic Function (HACMP/ES
rsct.basic.rte 3.2.0.4 APPLIED RSCT Basic Function
rsct.basic.sp 3.2.0.0 COMMITTED RSCT Basic Function (PSSP
rsct.msg.EN_US.basic.rte 2.5.4.0 COMMITTED RSCT Basic Msgs - U.S. English
rsct.msg.en_US.basic.rte 2.5.4.0 COMMITTED RSCT Basic Msgs - U.S. English
rsct.basic.rte 3.2.0.4 APPLIED RSCT Basic Function
NODE1+NODE2:# lslpp -l | grep -i rsct.compat.basic.hacmp
rsct.compat.basic.hacmp 3.2.0.0 COMMITTED RSCT Event Management Basic
NODE1+NODE2:# lslpp -l | grep -i rsct.compat.clients.hacmp
rsct.compat.clients.hacmp 3.2.0.0 COMMITTED RSCT Event Management Client

1. Ensure that the Virtual I/O Server ioslevel is 2.2.0.1-FP24-SP01 or later.
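
The level can be checked on each Virtual I/O Server from the padmin shell (run this on the VIO servers themselves, not on the cluster nodes; VIOS1 is just a placeholder name):

VIOS1:$ ioslevel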

2. Reboot the nodes: for a snapshot migration reboot all the nodes; for a rolling migration there is no need to reboot the node.

3. Verify that CAA has been successfully added to /etc/inetd.conf, /etc/services, and /etc/inittab.

NODE1+NODE2:# cat /etc/services | egrep "caa|clusterconf"
clcomd_caa 16191/tcp
caa_cfg 6181/tcp

NODE1+NODE2:# cat /etc/inittab | egrep "caa|clusterconf"
clusterconf:23456789:once:/usr/sbin/clusterconf

NODE1+NODE2:# cat /etc/inetd.conf |egrep "caa|clusterconf"
caa_cfg stream tcp6 nowait root /usr/sbin/clusterconf clusterconf >>/var/adm/ras/clusterconf.log 2>&1

To check the disk reservation policy for configuring the repository disk:

NODE1+NODE2:# devrsrv -c query -l hdisk4
Device Reservation State Information
============================================
Device Name : hdisk4
Device Open On Current Host? : NO
ODM Reservation Policy : NO RESERVE
Device Reservation State : NO RESERVE

If it needs to be changed:

NODE1+NODE2:# chdev -l hdiskX -a reserve_policy=no_reserve
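
After changing the attribute, the policy can be confirmed with lsattr:

NODE1+NODE2:# lsattr -El hdisk4 -a reserve_policy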

* Verify that the disk has never previously been used as a repository disk.

NODE1+NODE2:# /usr/lib/cluster/clras dumprepos -r hdisk4
ERROR: Could not obtain repository data from hdisk4.

Note: This error means the disk has never been used as a repository before, so hdisk4 is OK to be used as the repository disk.

9. Refresh the recommended system/cluster services.

NODE1+NODE2:# lssrc -ls inetd
NODE1+NODE2:# refresh -s syslogd
NODE1+NODE2:# lssrc -ls clcomd
NODE1+NODE2:# refresh -s clcomd

10. Put the node entries in the /etc/cluster/rhosts file on all the nodes.

NODE1+NODE2:# cat /etc/cluster/rhosts
NODE1
NODE2

11. Run the clmigcheck utility to verify the PowerHA SystemMirror migration prerequisites.

If you hit any errors, run clmigcheck again and check the log file:

NODE2+NODE1:# cat /tmp/clmigcheck/clmigcheck.log

If there are no errors in your HA configuration prerequisites, you will get the output shown below, which means your HACMP cluster is ready to migrate.

12. Clmigcheck output

NODE2:# clmigcheck
------------[ PowerHA SystemMirror Migration Check ]----
Please select one of the following options:
        1 ->  Enter the version you are migrating to.
        2 ->  Check ODM configuration.
        3 ->  Check snapshot configuration.
        4 ->  Enter repository disk and IP addresses.
Select one of the above, "x" to exit, or "h" for help:

PowerHA SystemMirror 6.1 to 7.1.3.4 Rolling Migration

13. Verify the boot IP and non-boot IP interfaces on both nodes.

NODE2:# cllsif
Adapter        Type       Network     Net Type   Attribute    Node       IP Address    Hardware
NODE1HB      service    net_diskhb_01   diskhb     serial     NODE1     /dev/hdisk4    hdisk4  
NODE1boot1     boot      net_ether_01   ether      public     NODE1      10.10.10.1    en1    
NODE2HB      service    net_diskhb_01   diskhb     serial     NODE2     /dev/hdisk4    hdisk4  
NODE2boot1     boot      net_ether_01   ether      public     NODE2      10.10.10.2    en1
NODE1:# cllsif
Adapter        Type       Network     Net Type   Attribute    Node       IP Address     Hardware
NODE1HB      service    net_diskhb_01   diskhb     serial     NODE1      /dev/hdisk4    hdisk4
NODE1boot1    boot      net_ether_01    ether      public     NODE1      10.10.10.1     en1
NODE2HB     service    net_diskhb_01   diskhb      serial     NODE2      /dev/hdisk4    hdisk4
NODE2boot1    boot      net_ether_01    ether      public     NODE2      10.10.10.2     en1

14. Move the workload to the primary node, then run the clmigcheck utility.

15. Press 1 to select the PowerHA version you are migrating to.
————[ PowerHA SystemMirror Migration Check ]————-
Please select one of the following options:
        1 ->  Enter the version you are migrating to.

16. Choose your PowerHA SystemMirror version: 7.1.3.

17. Check the ODM configuration.

18. Press Enter to remove the unsupported hardware (net_diskhb_01).

19. The ODM has no unsupported elements; press Enter to continue.

20. Enter the repository disk and IP addresses.

21. Choose UNICAST for unicast heartbeat messaging.

22. Choose a shareable disk for the repository configuration.

23. The repository disk has been selected for the AIX CAA configuration.

24. The nodes are now ready for the PowerHA SystemMirror software upgrade. Upgrade PowerHA SystemMirror Enterprise Edition and update it to the latest 7.1.3.4 level.

NODE2:# smitty update_all
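
If you prefer the command line over smitty, the same update can usually be performed with install_all_updates; the directory below is an assumed location of the 7.1.3.4 update images:

NODE2:# install_all_updates -d /mnt/powerha_7134 -Y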

25. Review the /tmp/clconvert.log log file.

26. Start the cluster services on NODE2, which has just been updated.

NODE2:# smitty clstart
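
Once the node is on 7.1.3, cluster services can also be started from the command line with clmgr, as an alternative to the smitty panel (the node name here is the example name used throughout this document):

NODE2:# clmgr online node NODE2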

27. Run the qha -n script on NODE1 to fetch the current status of the cluster.

The PowerHA qha script can be downloaded from the link below:
http://abderra.webspace.virginmedia.com/STUFF/qha901
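
If the qha script is not available, the cluster manager state can also be checked directly on each node; the state should progress through ST_JOINING and ST_BARRIER to ST_STABLE:

NODE1+NODE2:# lssrc -ls clstrmgrES | grep -i state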

28. Cluster in JOINING state.

29. Cluster in BARRIER state.

30. Cluster in STABLE state.

31. Verify the cluster status. If there are no errors, the migration process so far is correct.

32. The same steps above are then followed on NODE1 as well.

33. Move the resource group to NODE2, which is running the newer version of the cluster.

NODE2:# qha -n

34. Go to NODE2 and refresh the clcomd daemon.

NODE2:# refresh -s clcomd

35. Run the clmigcheck utility on NODE1, which is not yet migrated, then press Enter to continue.

36. Press Enter to continue with the CAA cluster configuration.

Go to NODE2 and verify whether the CAA repository disk has been created.
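
One quick way to confirm that CAA has created the repository is to look for the caavg_private volume group on the repository disk, or to dump the repository with clras again (both commands were used earlier in this document; hdisk4 is the example repository disk):

NODE2:# lspv | grep -i caavg_private
NODE2:# /usr/lib/cluster/clras dumprepos -r hdisk4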

37. On NODE1, verify the CAA creation status, then press Enter to exit.

38. Upgrade NODE1 to PowerHA SystemMirror Enterprise Edition 7.1.3.0 and update it to 7.1.3.4, so that both nodes run the same PowerHA SystemMirror level.

39. It is now time to start the cluster services on NODE1.

NODE1:# smitty clstart

40. Verify each stage of the cluster communication from the other node.

NODE2:# qha -n

Both nodes in JOINING state

Both nodes in BARRIER state

Both nodes in STABLE state

41. Verify the cluster version on both cluster nodes. Only when both nodes report the same version is the PowerHA SystemMirror migration successfully completed.

NODE1:# lssrc -ls clstrmgrES

NODE2:# lssrc -ls clstrmgrES
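
In the lssrc -ls clstrmgrES output, the lines of interest look something like the following once both nodes are migrated (exact wording may vary slightly between levels; CLversion 15 corresponds to PowerHA 7.1.3, matching the ODM check in the next step):

Current state: ST_STABLE
CLversion: 15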

42. Verify the cluster version at the ODM level as well.

NODE1+ NODE2:# odmget HACMPcluster | grep cluster_version
        cluster_version = 15
NODE1+ NODE2:# odmget HACMPnode | grep version | sort -u
        version = 15

43. Perform the PowerHA SystemMirror failover test once again and verify the cluster status.

NODE1+NODE2:# clRGinfo

44. Verify the AIX CAA cluster unicast heartbeat IP addresses.

NODE1:# lscluster -m

NODE2:# lscluster -m

45. Some useful Cluster Aware AIX (CAA) commands.

To list the interface information for the local node, enter

NODE1+NODE2:# lscluster -i

To list the cluster configuration

NODE1+NODE2:# lscluster -c

To list the storage interface information for the cluster

NODE1+NODE2:# lscluster -d

To list the cluster node configuration information for the local node

NODE1+NODE2:# lscluster -m

To list the cluster network statistics for the local node

NODE1+NODE2:# lscluster -s

To collect repository disk information:

NODE1+NODE2:# /usr/lib/cluster/clras dumprepos -r hdisk4

46. The mping command is used to test the cluster multicast IP heartbeat.

To find out the multicast IP address:

NODE1+NODE2:# lscluster -c

For example, in this PowerHA cluster the multicast address is 228.168.1.1.

To receive multicast packets from NODE2:

NODE1:# mping -r -a 228.168.1.1

To send multicast packets to NODE1:

NODE2:# mping -s -a 228.168.1.1

If you want to send 10 packets:

NODE2:# mping -s -c 10 -a 228.168.1.1

A normal ping to the multicast address shows which IP addresses respond on the multicast network:

NODE1+NODE2:# ping 228.168.1.1

To get the primary and secondary repository disk information:

NODE1+NODE2:# clmgr query repository
=====================================================================================================

 ❓ If you have any queries regarding “PowerHA SystemMirror 6.1 to 7.1.3 Rolling Migration”, please send your feedback to the email address mentioned below.


virtualnetworkingconcepts@gmail.com

=====================================================================================================

