PowerHA SystemMirror 6.1 to 7.1.3 Snapshot Migration

Snapshot Migration From PowerHA v6.1 to v7.1.3

* Before performing a PowerHA SystemMirror snapshot migration from 6.1 to 7.1.3, the migration prerequisites must be fulfilled.

* This document explains the PowerHA SystemMirror snapshot migration procedure, beginning with the prerequisites.

Snapshot Migration:

* This is not a true in-place cluster migration: the previous version of PowerHA SystemMirror is removed and the new version, PowerHA SystemMirror 7.1.3.0, is installed.

* The PowerHA SystemMirror 7.1.3 interface is then configured, and the remaining configuration is restored from an existing configuration snapshot through the command line or the smitty menu.

* The cluster must be down during the migration.

PowerHA SystemMirror 6.1 to 7.1.3 Snapshot Migration Prerequisites

1. /etc/hosts file verification.

* For example, assume a two-node HACMP cluster.

* Configure the /etc/hosts file manually on each cluster node.

NODE1+NODE2:# cat /etc/hosts
192.168.1.1 NODE1
192.168.1.2 NODE2
10.10.10.1 NODE1BOOT
10.10.10.2 NODE2BOOT
192.168.1.100 NODESVC

(OR) If a DNS domain is configured, make the host entries as shown in the example below.

NODE1+NODE2:# cat /etc/hosts
192.168.1.1 NODE1.xxx.yyy.com NODE1
192.168.1.2 NODE2.xxx.yyy.com NODE2
10.10.10.1 NODE1BOOT
10.10.10.2 NODE2BOOT
192.168.1.100 NODESVC
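The /etc/hosts entries can be sanity-checked from the shell before migration. The helper below is an illustrative sketch, not part of PowerHA: it reports whether a given label appears as a hostname field in a hosts-format file, ignoring comment lines.

```shell
# hosts_has_entry FILE NAME -> exit 0 if NAME appears as a hostname field
# (any field after the IP address) on a non-comment line of FILE.
hosts_has_entry() {
    file=$1; name=$2
    awk -v n="$name" '
        !/^[ \t]*#/ { for (i = 2; i <= NF; i++) if ($i == n) found = 1 }
        END { exit found ? 0 : 1 }' "$file"
}

# Usage sketch, with the labels from the example above:
# for h in NODE1 NODE2 NODE1BOOT NODE2BOOT NODESVC; do
#     hosts_has_entry /etc/hosts "$h" || echo "MISSING: $h"
# done
```

Running this for each cluster label on every node quickly surfaces a missing or commented-out entry.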

2. Verify that the /etc/netsvc.conf file contains only one uncommented line.

* Take a backup of /etc/netsvc.conf, remove all other entries, and keep only the single line shown below.

NODE1+NODE2:#cat>/etc/netsvc.conf
hosts = local4,bind4

NODE1+NODE2:#cat /etc/netsvc.conf
hosts = local4,bind4
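A quick way to confirm the single-uncommented-line requirement is to count the lines that are neither blank nor comments. This is a hypothetical helper, not a PowerHA tool:

```shell
# uncommented_lines FILE -> prints the number of lines that are neither
# blank nor comments; for /etc/netsvc.conf this should print 1.
uncommented_lines() {
    grep -c -v -e '^[[:space:]]*#' -e '^[[:space:]]*$' "$1"
}

# Usage sketch:
# [ "$(uncommented_lines /etc/netsvc.conf)" = "1" ] || echo "netsvc.conf needs cleanup"
```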

3. Verify that the latest PowerHA 6.1 Service Pack is installed. The recommended minimum level is 6.1.0.15; this example cluster is at 6.1.0.16.

NODE1+NODE2:# lslpp -l | grep -i .cluster* | grep -i 6.1.0.16
cluster.es.cspoc.cmds    6.1.0.16 COMMITTED ES CSPOC Commands
cluster.es.server.events 6.1.0.16 COMMITTED ES Server Events
cluster.es.server.rte    6.1.0.16 COMMITTED ES Base Server Runtime
cluster.es.server.utils  6.1.0.16 COMMITTED ES Server Utilities
cluster.es.server.rte    6.1.0.16 COMMITTED ES Base Server Runtime
cluster.es.server.utils  6.1.0.16 COMMITTED ES Server Utilities
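Comparing VRMF levels such as 6.1.0.15 and 6.1.0.16 by eye is error-prone. The sketch below (an illustrative helper using sort(1), not a PowerHA command) compares dotted version strings numerically:

```shell
# vrmf_ge A B -> exit 0 if dotted version A is greater than or equal to B.
# Sorts the two strings numerically field by field and checks that B is
# the lower (first) one.
vrmf_ge() {
    lowest=$(printf '%s\n%s\n' "$1" "$2" |
        sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -1)
    [ "$lowest" = "$2" ]
}
```

Here `vrmf_ge 6.1.0.16 6.1.0.15` succeeds, confirming the installed level meets the recommended minimum.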

4. Verify the ODM CuAt inet0 hostname value against the HACMPnode COMMUNICATION_PATH value.

* For a CAA (Cluster Aware AIX) cluster, the ODM CuAt inet0 hostname and the HACMPnode COMMUNICATION_PATH must refer to the same host.

CuAt inet0 Persistent Hostname:

NODE1:# lsattr -El inet0 | grep -i hostname
hostname  NODE1  Host  Name  True

NODE2:# lsattr -El inet0 | grep -i hostname
hostname  NODE2  Host  Name  True

PowerHA SystemMirror COMMUNICATION_PATH:

NODE1+NODE2:# odmget HACMPnode | grep -p COMM

HACMPnode:
      name = NODE1
      object = "COMMUNICATION_PATH"
      value = 192.168.1.1
      node_id = 3
      node_handle = 3
      version = 11

HACMPnode:
     name = NODE2
     object = "COMMUNICATION_PATH"
     value = 192.168.1.2
     node_id = 4
     node_handle = 4
     version = 11

Note: In PowerHA 6.1, COMMUNICATION_PATH may be shown as an IP address; that is acceptable.
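To compare the values across nodes without reading the stanzas by eye, the filtered odmget output shown above can be reduced to node/path pairs. This awk-based helper is a sketch that assumes input shaped like the `odmget HACMPnode | grep -p COMM` output:

```shell
# comm_paths FILE -> prints "NODE COMMUNICATION_PATH" pairs from captured
# odmget-style stanzas (name = ... / value = ... lines, quoted or not).
comm_paths() {
    awk -F' = ' '
        /^[ \t]*name =/  { gsub(/[" ]/, "", $2); node = $2 }
        /^[ \t]*value =/ { gsub(/[" ]/, "", $2); print node, $2 }' "$1"
}

# Usage sketch:
# odmget HACMPnode | grep -p COMM > /tmp/comm.out
# comm_paths /tmp/comm.out
```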

5. Create a PowerHA 6.1 cluster snapshot and an AIX system backup of rootvg using mksysb or alt_disk_copy.

NODE1+NODE2:# alt_disk_copy -d hdisk1 -B

(or)

NODE1+NODE2:# mksysb -X /mnt/mksysb_hostname-Full-bkp

6. Verify that the cluster has no pending synchronization while still on PowerHA 6.1.

NODE1+NODE2:# odmget HACMPcluster | grep handle
handle = 0

* If either node shows unsynchronized cluster changes, resolve them immediately.

Note: If you find any difference, identify the configuration changes and re-synchronize the cluster, because Verify/Sync cannot be performed during the migration.

7. Install the PowerHA migration CAA filesets and the CAA/RSCT bundles.

* Stop PowerHA on all the nodes, bringing the resource groups down carefully.

8. Take a PowerHA SystemMirror snapshot backup.

NODE1:# smitty hacmp
        -->Extended Configuration
            -->Snapshot Configuration
		-->Create a Snapshot of the Cluster Configuration

Verify snapshot backup location

NODE1+NODE2:# cd /usr/es/sbin/cluster/snapshots
NODE1+NODE2:# ls -lrt 
-rw-------    1 root     system        52343 Apr  5 21:38 latest-bkp.odm
-rw-------    1 root     system     20445224 Apr  5 21:38 latest-bkp.info

9. Install the AIX/CAA filesets for PowerHA 7.1 on all nodes (if not already installed).

NODE1+NODE2:# lslpp -l |  grep -i ahafs
  bos.ahafs                6.1.9.100  COMMITTED  Aha File System
  bos.ahafs                6.1.9.100  COMMITTED  Aha File System
  
NODE1+NODE2:#lslpp -l |  grep -i bos.cluster.rte
  bos.cluster.rte          6.1.9.100  COMMITTED  Cluster Aware AIX
  bos.cluster.rte          6.1.9.100  COMMITTED  Cluster Aware AIX
  
NODE1+NODE2:# lslpp -l |  grep -i devices.common.IBM.storfwork.rte
  devices.common.IBM.storfwork.rte
  devices.common.IBM.storfwork.rte
  
NODE1+NODE2:# lslpp -l |  grep -i bos.clvm.enh
  bos.clvm.enh             6.1.9.100  APPLIED    Enhanced Concurrent Logical
  bos.clvm.enh               6.1.0.0  COMMITTED  Enhanced Concurrent Logical
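The four fileset checks above can be rolled into one loop. The sketch below takes the probe command as a parameter so it can be exercised anywhere; on AIX the probe would be a thin wrapper around `lslpp -L` (shown only as a commented, untested example):

```shell
# missing_filesets PROBE FILESET... -> prints each fileset the probe rejects.
# PROBE is any command that returns 0 when its argument is installed.
missing_filesets() {
    probe=$1; shift
    for f in "$@"; do
        "$probe" "$f" >/dev/null 2>&1 || printf '%s\n' "$f"
    done
}

# On AIX (untested sketch):
# lslpp_probe() { lslpp -L "$1" >/dev/null 2>&1; }
# missing_filesets lslpp_probe bos.ahafs bos.cluster.rte \
#     devices.common.IBM.storfwork.rte bos.clvm.enh
```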

10. Install/update the RSCT (Reliable Scalable Cluster Technology) filesets.

NODE1+NODE2:# lslpp -l |  grep -i rsct.core.rmc
  rsct.core.rmc              3.2.0.9  APPLIED    RSCT Resource Monitoring and
  rsct.core.rmc              3.2.0.9  APPLIED    RSCT Resource Monitoring and
  
NODE1+NODE2:# lslpp -l |  grep -i rsct.basic
  rsct.basic.hacmp           3.2.0.2  APPLIED    RSCT Basic Function (HACMP/ES
  rsct.basic.rte             3.2.0.7  APPLIED    RSCT Basic Function
  rsct.basic.sp              3.2.0.0  COMMITTED  RSCT Basic Function (PSSP
  rsct.msg.EN_US.basic.rte   2.5.4.0  COMMITTED  RSCT Basic Msgs - U.S. English
  rsct.msg.en_US.basic.rte   2.5.4.0  COMMITTED  RSCT Basic Msgs - U.S. English
  rsct.basic.rte             3.2.0.7  APPLIED    RSCT Basic Function
  
NODE1+NODE2:# lslpp -l |  grep -i rsct.compat.basic.hacmp
  rsct.compat.basic.hacmp    3.2.0.0  COMMITTED  RSCT Event Management Basic
  
NODE1+NODE2:# lslpp -l |  grep -i rsct.compat.clients.hacmp
  rsct.compat.clients.hacmp  3.2.0.0  COMMITTED  RSCT Event Management Client

11. If PowerHA is configured to start automatically at system boot, temporarily disable its /etc/inittab entry.

NODE1+NODE2:# cat /etc/inittab | grep -i hacmp

NODE1+NODE2:# rmitab hacmp

12. The recommended AIX operating system level is:

AIX 6.1 TL09 SP1, or higher
(OR)
AIX 7.1 TL03 SP1, or higher

* Ensure that Virtual I/O Server ioslevel 2.2.0.1-FP24-SP01 or later is installed.

13. Reboot all cluster nodes.

NODE1+NODE2:# shutdown -Fr now

14. After the reboot, verify the CAA entries and services.

NODE1+NODE2:# egrep "caa|clusterconf" /etc/services /etc/inetd.conf /etc/inittab
/etc/services:clcomd_caa 16191/tcp
/etc/services:caa_cfg 6181/tcp
/etc/inetd.conf:caa_cfg stream tcp6 nowait root /usr/sbin/clusterconf clusterconf >>/var/adm/ras/clusterconf.log 2>&1
/etc/inittab:clusterconf:23456789:once:/usr/sbin/clusterconf

15. Check the disk reservation policy of the disk intended for the repository.

NODE1+NODE2:# devrsrv -c query -l hdisk3
Device Reservation State Information
============================================
Device Name : hdisk3
Device Open On Current Host? : NO
ODM Reservation Policy : NO RESERVE
Device Reservation State : NO RESERVE

If the policy needs to be changed:

NODE1+NODE2:# chdev -l hdiskX -a reserve_policy=no_reserve

* Verify that the disk was never previously used as a repository disk.

NODE1+NODE2:# /usr/lib/cluster/clras dumprepos -r hdisk3

ERROR: Could not obtain repository data from hdisk3

Note: This error means the disk was never used as a repository before, so hdisk3 is OK to use as the repository disk.
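If you capture the `devrsrv -c query` report to a file, a simple grep-based check (an illustrative helper, not an AIX command) can confirm the disk shows the no-reserve policy and state expected of a repository candidate:

```shell
# reserve_ok FILE -> exit 0 when a captured `devrsrv -c query` report shows
# both the ODM policy and the current state as NO RESERVE.
reserve_ok() {
    grep -q 'ODM Reservation Policy *: *NO RESERVE' "$1" &&
    grep -q 'Device Reservation State *: *NO RESERVE' "$1"
}

# Usage sketch:
# devrsrv -c query -l hdisk3 > /tmp/rsrv.out && reserve_ok /tmp/rsrv.out
```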

16. Verify the current version of PowerHA SystemMirror.

NODE1+NODE2:# halevel -s
6.1.0 SP16

17. Manually add the /etc/cluster/rhosts entries on all cluster nodes.

NODE1+NODE2:# cat /etc/cluster/rhosts
NODE1
NODE2

18. Refresh the required system/cluster services.

NODE1+NODE2:# lssrc -ls inetd
NODE1+NODE2:# refresh -s syslogd
NODE1+NODE2:# lssrc -ls clcomd
NODE1+NODE2:# refresh -s clcomd

19. Run the clmigcheck utility to check the PowerHA SystemMirror migration prerequisites.

* Select 1 to enter the PowerHA SystemMirror version you are migrating to.

* If there are no errors in your HA configuration prerequisites, you will get the output shown below,
which means your HACMP cluster is ready to migrate.

NODE1:# clmigcheck
------------[ PowerHA SystemMirror Migration Check ]-------------

Please select one of the following options:
        1 ->  Enter the version you are migrating to.
        2 ->  Check ODM configuration.
        3 ->  Check snapshot configuration.
        4 ->  Enter repository disk and IP addresses.

Select one of the above, "x" to exit, or "h" for help:  1

19.1 Enter the version you are migrating to.

19.2 Choose PowerHA SystemMirror version 7.1.3.

19.3 Check ODM configuration.

19.4 Press Enter to remove the unsupported heartbeat device net_diskhb_01.
If the ODM has no unsupported elements, press Enter to continue.

19.5 Enter repository disk and IP addresses.

19.6 Choose UNICAST for unicast heartbeat messaging.

19.7 Choose a shareable disk for the repository configuration.

19.8 The repository disk has been selected for the AIX CAA configuration. Verify clmigcheck.txt on both nodes.

NODE1+NODE2:# cat /var/clmigcheck/clmigcheck.txt
CLUSTER_TYPE:STANDARD
CLUSTER_REPOSITORY_DISK:00f655b0e75bb5e7
CLUSTER_MULTICAST:UNI
NEW_VERSION:15
NEW_VERSION_STR:7.1.3
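The clmigcheck.txt file is plain `KEY:value` text, so individual fields can be pulled out for verification scripts. A hypothetical helper:

```shell
# migcheck_field FILE KEY -> prints the value recorded for KEY in a
# clmigcheck.txt-style KEY:value file.
migcheck_field() {
    awk -F: -v k="$2" '$1 == k { print $2 }' "$1"
}

# Usage sketch:
# migcheck_field /var/clmigcheck/clmigcheck.txt NEW_VERSION_STR
```

For the file shown above, querying NEW_VERSION_STR should print 7.1.3 on both nodes.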

20. The nodes are now ready for the PowerHA SystemMirror software upgrade.

First uninstall the old version and install PowerHA SystemMirror Enterprise Edition 7.1; then update it to the latest service pack, 7.1.3.4 (smitty update_all applies the service pack update only; it does not replace the 6.1 base filesets).

NODE1+NODE2:# smitty update_all

21. Reboot all cluster nodes.

NODE1+NODE2:# shutdown -Fr now

22. Convert the existing 6.1 snapshot backup into a snapshot compatible with the new version, 7.1.3.4.

NODE1:# cd /usr/es/sbin/cluster/conversion

NODE1:# ./clconvert_snapshot -v 6.1 -s /usr/es/sbin/cluster/snapshots/latest-bkp.odm
Extracting ODM's from snapshot file... done.
Converting extracted ODM's... done.
Rebuilding snapshot file... done.

NODE1:# cd /usr/es/sbin/cluster/snapshots/
NODE1:# ls -lrt | grep -i latest-bkp
-rw-------    1 root     system        52343 Apr  6 22:26 latest-bkp.old
-rw-------    1 root     system       112093 Apr  6 22:26 latest-bkp.info
-rw-------    1 root     system        54558 Apr  6 22:49 latest-bkp.odm

23. Restore the cluster configuration from the newly converted snapshot backup.

NODE1:# smitty sysmirror
     -->Cluster Nodes and Networks
        -->Manage the Cluster
	       -->Snapshot Configuration
		  --> Restore the Cluster Configuration From a Snapshot
|--------------------------------------------------------------------|
|		             Restore the Cluster Snapshot                    |
|Type or select values in entry fields.                              |
|Press Enter AFTER making all desired changes.                       |
|                                                                    |
|                                                   [Entry Fields]   |
|  Cluster Snapshot Name                              latest-bkp     |
|  Cluster Snapshot Description                       latest-bkp     |
|  Un/Configure Cluster Resources?                    [Yes]          |
|  Force apply if verify fails?                       [No]           |
----------------------------------------------------------------------

24. Run the lspv command; after a few seconds the CAA repository disk will appear.

NODE1+NODE2:# lspv
hdisk1          00f65ee0fd1e342f            rootvg          active
hdisk2          00f655b0b8ff0612            datavg
hdisk3          00f655b0e75bb5e7            caavg_private   active

25. Now start the cluster services on both nodes.

NODE1:# smitty sysmirror
        -->System Management (C-SPOC)
           -->PowerHA SystemMirror Services
              -->Start Cluster Services
|------------------------------------------------------------------------------|
|                                                                              |
|                          Start Cluster Services                              |
|                                                                              |
|Type or select values in entry fields.                                        |
|Press Enter AFTER making all desired changes.                                 |
|                                                                              |
|                                                        [Entry Fields]        |
|* Start now, on system restart or both                now                     | 
|  Start Cluster Services on these nodes              [NODE1,NODE2]            | 
|* Manage Resource Groups                              Automatically           | 
|  BROADCAST message at startup?                       false                   | 
|  Startup Cluster Information Daemon?                 false                   | 
|  Ignore verification errors?                         false                   | 
|  Automatically correct errors found during           Yes                     | 
|  cluster start?                                                              |
|------------------------------------------------------------------------------|

26. Verify cluster status information.

NODE2:# lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36 1.135.1.119 src/43haes/usr/sbin/cluster/hacmprd/main.C,hacmp.pe,61haes_r713,1509A_hacmp713 9/11/1"
build = "Sep 30 2015 20:26:58 1527C_hacmp713"
i_local_nodeid 1, i_local_siteid -1, my_handle 3
ml_idx[2]=0     ml_idx[3]=1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 15
local node vrmf is 7134
cluster fix level is "4"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 0  NodeName - NODE1
    PgSpFree = 0  PvPctBusy = 0  PctTotalTimeIdle = 0.000000
DNP Values for NodeId - 0  NodeName - NODE2
    PgSpFree = 0  PvPctBusy = 0  PctTotalTimeIdle = 0.000000
CAA Cluster Capabilities
CAA Cluster services are active
There are 4 capabilities
Capability 0
  id: 3  version: 1  flag: 1
Hostname Change capability is defined and globally available
Capability 1
  id: 2  version: 1  flag: 1
  Unicast capability is defined and globally available
Capability 2
  id: 0  version: 1  flag: 1
  IPV6 capability is defined and globally available
Capability 3
  id: 1  version: 1  flag: 1
  Site capability is defined and globally available
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was JOIN_NODE_CO  on node 3
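When checking several nodes, the `lssrc -ls clstrmgrES` report can be captured to a file and the key line extracted; a healthy migrated node should report ST_STABLE. An illustrative helper:

```shell
# cluster_state FILE -> prints the value of the "Current state" line from
# captured `lssrc -ls clstrmgrES` output (e.g. ST_STABLE).
cluster_state() {
    awk -F': ' '/^Current state/ { print $2 }' "$1"
}

# Usage sketch:
# lssrc -ls clstrmgrES > /tmp/clstrmgr.out
# [ "$(cluster_state /tmp/clstrmgr.out)" = "ST_STABLE" ] || echo "cluster not stable"
```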

27. Verify the resource group online status.

NODE1+NODE2:# clRGinfo
--------------------------------------------------------
Group Name                   Group State      Node
--------------------------------------------------------
RSG1                         ONLINE           NODE1
                             OFFLINE          NODE2

DISKHB                       ONLINE           NODE1
                             ONLINE           NODE2

Note: The DISKHB resource group will be removed automatically after a few minutes, because PowerHA 7.1.3 does not support net_diskhb_01.

28. Verify the HA level on all cluster nodes.

NODE1:# halevel -s
7.1.3 SP4

29. Some useful Cluster Aware AIX (CAA) commands.

To list the interface information for the local node

NODE1+NODE2:# lscluster -i

To list the cluster configuration

NODE1+NODE2:# lscluster -c

To list the storage interface information for the cluster

NODE1+NODE2:# lscluster -d

To list the cluster configuration for the local node

NODE1+NODE2:# lscluster -m

To list cluster network statistics for the local node

NODE1+NODE2:# lscluster -s

To collect repository disk information.

NODE1+NODE2:# /usr/lib/cluster/clras dumprepos -r hdisk3

The mping command is used to test cluster multicast IP heartbeats.

To find out Multicast IP Address

NODE1+NODE2:# lscluster -c

For example, this PowerHA cluster's multicast address is 228.168.1.1.

To receive multicast packets (sent from NODE2) on NODE1:

NODE1:# mping -r 228.168.1.1

To send multicast packets from NODE2 to NODE1:

NODE2:# mping -s 228.168.1.1

To send 10 packets:

NODE2:# mping -s -c 10 228.168.1.1

A normal ping to the multicast address shows which IPs respond on the multicast network.

NODE1+NODE2:# ping 228.168.1.1

To get repository Primary & Secondary disks information

NODE1+NODE2:# clmgr query repository

================================================================================

If you have any query regarding “PowerHA SystemMirror 6.1 to 7.1.3 Snapshot Migration”, please send your feedback to the e-mail address below.

virtualnetworkingconcepts@gmail.com

================================================================================
