In HACMP 6.1 we can dynamically add a new shared file system without bringing down the HA cluster or the resource group.
Note:
1. The volume group should be shared and enhanced concurrent capable.
2. The shared volume group must have the same major number on both nodes for the HA cluster resource group configuration.
3. There is no need to bring down the resource group or the HA cluster services.
4. The volume group must have enough free space for the file system creation.
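The prerequisites above can be checked from the command line before starting. A minimal sketch, assuming the shared volume group is named AppVG as in this setup:

```shell
# Major number of the VG device file -- the major column of ls -l
# must be the same on node1 and node2.
ls -l /dev/AppVG

# List the major numbers still free on this node
# (useful when importing the VG on the second node).
lvlstmajor

# Check free space in the volume group: look at the FREE PPs field.
lsvg AppVG | grep "FREE PPs"

# Confirm the VG is enhanced concurrent capable.
lsvg AppVG | grep -i concurrent
```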
HA 6.1 Cluster Setup:
We have a two-node HA cluster configuration: node1 and node2.
It is an Active/Passive cluster configuration with two resource groups. One is ApplicationRG, used for the application server; this resource group is available on only one node at a time. The other is HeartbeatRG, used for cluster heartbeat; it is available on both nodes at the same time.
Adding a New File System to an existing HA 6.1 cluster:
Step 1: Log in to the HA cluster node. The node name is "node1".
There are two methods to add a file system to an existing cluster.
Method 1: SMIT menu: # smitty hacmp
Method 2: Command-line interface, e.g. # cl_mklv, # cl_crfs, etc.
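The C-SPOC command-line equivalents of the SMIT screens live under the cluster installation directory. A hedged sketch (the paths below are the usual HACMP 6.1 defaults; verify them on your system):

```shell
# C-SPOC storage commands (cl_mklv, cl_crfs, cl_chfs, ...) are usually here:
ls /usr/es/sbin/cluster/sbin/cl_*

# Cluster utilities such as clRGinfo and clRGmove:
ls /usr/es/sbin/cluster/utilities/
```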
Step 2: Run df -gt to check the current file system information.
node1:# df -gt
Filesystem GB blocks Used Free %Used Mounted on
/dev/hd4 2.00 0.47 1.53 24% /
/dev/hd2 6.00 4.37 1.63 73% /usr
/dev/hd9var 4.00 0.62 3.38 16% /var
/dev/hd3 5.00 0.64 4.36 13% /tmp
/dev/hd10opt 3.00 0.72 2.28 24% /opt
/proc - - - - /proc
/dev/hd1 5.00 0.03 4.97 1% /home
/dev/Application_lv 5.00 0.00 5.00 1% /Application
192.168.1.100:/BACKUP 150.00 141.31 8.69 95% /mnt
Step 4: Run # lspv and get the volume group information from the output.
node1:# lspv
hdisk1 00f65ac0fd2cf83c rootvg node1
hdisk2 00f65ac0bc4b6d90 AppVG concurrent
hdisk3 00f65ac0bc4b6e27 AppVG concurrent
hdisk4 00f65ac0bc5486b5 HeartBeatVG concurrent
Step 5: Create a new logical volume on the running HA cluster using the smitty menu.
node1:# smitty cl_mklv
+-----------------------------------------------------------------------+
| Select the Volume Group that will hold the new Logical Volume |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll.|
| |
| #Volume Group Resource Group Node List |
| AppVG ApplicationRG node1,node2 |
| HeartBeatVG HeartbeatRG node1,node2 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+-----------------------------------------------------------------------+
From the above output, select the volume group that belongs to the target resource group.
Step 6: Select the disk(s) on which you want to store the file system, then press Enter to continue.
+------------------------------------------------------------------+
| Select the Physical Volumes to hold the new Logical Volume|
| |
| Move cursor to desired item and press F7. |
| ONE OR MORE items can be selected. |
| Press Enter AFTER making all selections. |
| |
| # Reference Node Physical Volume Name |
| Auto-select |
| > node1 hdisk2                                                  |
|   node1 hdisk3                                                  |
| |
| F1=Help F2=Refresh F3=Cancel |
| F7=Select F8=Image F10=Exit |
| Enter=Do /=Find n=Find Next |
+------------------------------------------------------------------+
Step 7: Fill in the logical volume information to create the new LV.
Add a Logical Volume
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
Resource Group Name ApplicationRG
VOLUME GROUP name AppVG
Node List node1,node2
Reference node node1
* Number of LOGICAL PARTITIONS [10]
PHYSICAL VOLUME names hdisk2
Logical volume NAME [test_HAlv]
Logical volume TYPE [jfs2]
POSITION on physical volume outer_middle
RANGE of physical volumes minimum
MAXIMUM NUMBER of PHYSICAL VOLUMES []
to use for allocation
Number of COPIES of each logical 1
partition
Mirror Write Consistency? active
Allocate each logical partition copy yes
on a SEPARATE physical volume?
RELOCATE the logical volume during reorganization? yes
Logical volume LABEL []
MAXIMUM NUMBER of LOGICAL PARTITIONS [512]
Enable BAD BLOCK relocation? yes
SCHEDULING POLICY for reading/writing parallel
logical partition copies
Enable WRITE VERIFY? no
File containing ALLOCATION MAP []
Stripe Size? [Not Striped]
Serialize I/O? no
Make first block available for applications? no
Step 8: Fill in the required fields and press Enter to continue the operation.
COMMAND STATUS
Command: OK stdout: yes stderr: no
Before command completion, additional instructions may appear below.
node1: test_HAlv
Step 9: The logical volume was created successfully; we can verify it from the output below.
node1:# lsvg -l AppVG
AppVG:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
Application_lv jfs2 10 10 1 open/syncd /Application
loglv00 jfs2log 1 1 1 open/syncd N/A
test_HAlv jfs2 10 10 1 closed/syncd N/A
Step 10: Add the file system to the application resource group using the smitty menu.
node1:# smitty hacmp
–> System Management (C-SPOC)
–>Storage
–>File System
–>Add a File System
+--------------------------------------------------------------+
| Select the Volume Group to hold the new File System|
| |
| Move cursor to desired item and press Enter. |
| |
| #Volume Group Resource Group Node List |
| AppVG ApplicationRG node1,node2 |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------+
Choose the application volume group, then press Enter to continue.
Step 11: Choose the file system type.
+--------------------------------------------------------------+
| Select the type of File System to be Added |
| |
| Move cursor to desired item and press Enter. |
| |
| Enhanced Journaled File System |
| Standard Journaled File System |
| Compressed Journaled File System |
| Large File Enabled Journaled File System |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+--------------------------------------------------------------+
Step 12: Choose our newly created logical volume, then press Enter to continue.
+-----------------------------------------------------------------------+
| Select the Logical Volume to hold the new File System |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll.|
| |
| # To Add a new File System to volume group AppVG, |
| # you must either choose to                                          |
| |
| Create a new Logical Volume for this File System |
| |
| # Or select an existing logical volume from the list below |
| # LV NAME TYPE LPs PPs PVs LV STATE |
| test_HAlv jfs2 10 10 1 closed/syncd |
| |
| F1=Help F2=Refresh F3=Cancel |
| F8=Image F10=Exit Enter=Do |
| /=Find n=Find Next |
+-----------------------------------------------------------------------+
Step 13: Fill in the required information for the new file system, then press Enter to continue the operation.
Add an Enhanced Journaled
File System on a Previously Defined Logical Volume
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Resource Group ApplicationRG
* Node Names node1,node2
Logical Volume name test_HAlv
Volume Group AppVG
* MOUNT POINT [/testHAF]
PERMISSIONS read/write
Mount OPTIONS []
Block Size (bytes) 4096
Inline Log? no
Inline Log size (MBytes) []
Logical Volume for Log
Extended Attribute Format Version 1
Enable Quota Management? no
Step 14: Press Enter again to confirm the operation.
+----------------------------------------------------------+
| ARE YOU SURE? |
| |
| Continuing may delete information you may want |
| to keep. This is your last chance to stop |
| before continuing. |
| Press Enter to continue. |
| Press Cancel to return to the application. |
| |
| F1=Help F2=Refresh F3=Cancel|
| F8=Image F10=Exit Enter=Do |
+----------------------------------------------------------+
Step 15: The file system was created successfully on both nodes. Press F10 (ESC+0) to exit SMIT.
Step 16: Verify that the file system was created from the output below.
The file system has been created successfully; the mount point is /testHAF.
node1:# lsvg -l AppVG
AppVG:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
Application_lv jfs2 10 10 1 open/syncd /Application
loglv00 jfs2log 1 1 1 open/syncd N/A
test_HAlv jfs2 10 10 1 open/syncd /testHAF
Step 17: Verify on node2 from the output below. The file system is defined but in a closed state, because node2 is the passive node; the application resource group is currently online on node1.
node2:# lsvg -l AppVG
AppVG:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
Application_lv jfs2 10 10 1 closed/syncd /Application
loglv00 jfs2log 1 1 1 closed/syncd N/A
test_HAlv jfs2 10 10 1 closed/syncd /testHAF
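To double-check that both nodes see the same logical volume definition, the LVM metadata can be compared. A minimal sketch; run it on each node and compare the output:

```shell
# LV characteristics -- the LV IDENTIFIER and VOLUME GROUP lines
# should be identical on node1 and node2.
lslv test_HAlv

# The /etc/filesystems stanza for the new mount point should also
# exist on both nodes after the C-SPOC operation.
grep -p "/testHAF:" /etc/filesystems
```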
HA Cluster Failover Test:
From node1, verify the file system information and the current resource group status.
node1:# df -gt
Filesystem GB blocks Used Free %Used Mounted on
/dev/hd4 2.00 0.47 1.53 24% /
/dev/hd2 6.00 4.37 1.63 73% /usr
/dev/hd9var 4.00 0.62 3.38 16% /var
/dev/hd3 5.00 0.64 4.36 13% /tmp
/dev/hd10opt 3.00 0.72 2.28 24% /opt
/proc - - - - /proc
/dev/hd1 5.00 0.03 4.97 1% /home
/dev/Application_lv 5.00 0.00 5.00 1% /Application
192.168.1.100:/BACKUP 150.00 141.31 8.69 95% /mnt
/dev/test_HAlv 5.00 0.00 5.00 1% /testHAF
Currently the Application resource group is online on node1. We have to move the resource group from node1 to node2.
Cluster Name: HATESTCUSTER
Resource Group Name: HeartbeatRG
Node Group State
---------------------------- -------------
node1 ONLINE
node2 ONLINE
Resource Group Name: ApplicationRG
Node Group State
---------------------------- -------------
node1 OFFLINE
node2 ONLINE
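The resource group status check and move can be done with the cluster utilities. A hedged sketch, assuming the HACMP 6.1 default paths (clRGmove option letters can vary between releases; check the man page first):

```shell
# Show the current resource group state on all nodes.
/usr/es/sbin/cluster/utilities/clRGinfo

# Move ApplicationRG to node2.
/usr/es/sbin/cluster/utilities/clRGmove -g ApplicationRG -n node2 -m

# Verify the move, then confirm the file systems are mounted on node2.
/usr/es/sbin/cluster/utilities/clRGinfo ApplicationRG
df -gt /Application /testHAF
```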
After moving the resource group, the newly created file system was mounted successfully on node2, so the failover test completed successfully.
node2:# lsvg -l AppVG
AppVG:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
Application_lv jfs2 10 10 1 open/syncd /Application
loglv00 jfs2log 1 1 1 open/syncd N/A
test_HAlv jfs2 10 10 1 open/syncd /testHAF
Verify the file system from the # df -gt output.
node2:# df -gt
Filesystem GB blocks Used Free %Used Mounted on
/dev/hd4 2.00 0.47 1.53 24% /
/dev/hd2 6.00 4.37 1.63 73% /usr
/dev/hd9var 4.00 0.62 3.38 16% /var
/dev/hd3 5.00 0.64 4.36 13% /tmp
/dev/hd10opt 3.00 0.72 2.28 24% /opt
/proc - - - - /proc
/dev/hd1 5.00 0.03 4.97 1% /home
/dev/Application_lv 5.00 0.00 5.00 1% /Application
/dev/test_HAlv 5.00 0.00 5.00 1% /testHAF
192.168.1.100:/BACKUP 150.00 141.31 8.69 95% /mnt