Wednesday, July 6, 2011

Growing Sun Cluster File System with new Disks.

Setup Details:
Number of Nodes: 2
Node Name: Node1 and Node2
Cluster: Sun Cluster 3.2
OS: Solaris 9/10


I want to add 300 GB (3 x 100 GB) of SAN LUNs to one of the cluster mount points (/apps/data).


root@Node2 # df -h|grep d300
/dev/md/apps-ms/dsk/d300 295G 258G 35G 89% /apps/data

1. Add the new disks (shared LUNs) to both systems from the SAN.
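
Before touching the host side, the fabric can optionally be checked for the new LUNs. A minimal check, assuming the Solaris Leadville (luxadm) stack is in use; the output format depends on the HBA and array:

root@Node1 # luxadm probe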

2. Configure all the fibre channel controllers on both nodes with the steps below.

root@Node1 # cfgadm -al|grep fc
c4 fc-fabric connected configured unknown
c5 fc connected unconfigured unknown
c6 fc-fabric connected configured unknown
c7 fc connected unconfigured unknown
root@Node1 # cfgadm -c configure c4 c5 c6 c7
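
Re-running the same listing is a quick way to confirm the controllers now report the fabric devices as configured:

root@Node1 # cfgadm -al|grep fc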


3. Run devfsadm to configure the new devices.

root@Node1 # devfsadm -C
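
Note: the -C option cleans up dangling /dev links. If the new disk links do not show up after this, a plain run limited to the disk class can also be done (a small sketch, not part of the original procedure):

root@Node1 # devfsadm -c disk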

(Repeat steps 2 and 3 on all cluster nodes.)
4. Run the format command to list all the disks. The newly configured disks can be seen at the top of the format output as below (if the disks are not already labeled).

root@Node1 # format
Searching for disks...done
c8t6005076305FFC08C0000000000000103d0: configured with capacity of 99.98GB
c8t6005076305FFC08C0000000000000104d0: configured with capacity of 99.98GB
c8t6005076305FFC08C0000000000000120d0: configured with capacity of 99.98GB


5. Format each disk to create the partitions below (a hedged example follows the slice layout).

s7 -> 100 MB (this 100 MB is reserved for metadb creation; not mandatory)
s0 -> remaining space.
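
One common way to keep all three disks identical is to partition the first disk interactively with format and then copy its label to the others with prtvtoc/fmthard. A minimal sketch, assuming the disk names from the format output in step 4:

root@Node1 # format        # select the first new disk, create s0 and the 100 MB s7, then label it
root@Node1 # prtvtoc /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 | \
             fmthard -s - /dev/rdsk/c8t6005076305FFC08C0000000000000104d0s2
root@Node1 # prtvtoc /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 | \
             fmthard -s - /dev/rdsk/c8t6005076305FFC08C0000000000000120d0s2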


6. Create the corresponding cluster devices (global device paths) using the scgdevs command.
root@Node2 # scgdevs
Configuring DID devices
did instance 95 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 created for instance 95.
did instance 96 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 created for instance 96.
did instance 97 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 created for instance 97.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
(The above command created DID devices d95, d96, and d97.)
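
On Sun Cluster 3.2 the same step can also be run through the newer command set; cldevice populate is the documented equivalent of scgdevs:

root@Node2 # cldevice populate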

7. Confirm these devices are available on both nodes. The same DID devices must be listed against each hostname, as shown below.
root@Node2 # scdidadm -L|egrep 'd95|d96|d97'
95 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 /dev/did/rdsk/d95
95 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 /dev/did/rdsk/d95
96 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 /dev/did/rdsk/d96
96 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 /dev/did/rdsk/d96
97 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 /dev/did/rdsk/d97
97 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 /dev/did/rdsk/d97
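
If the newer command set is preferred, the same mapping can be checked with cldevice (an alternative to scdidadm -L):

root@Node2 # cldevice list -v | egrep 'd95|d96|d97'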



The following steps must be done on the node that owns the apps-ms diskset (run metaset -s apps-ms and confirm which node is the owner; see the sketch below).
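
A small sketch of the ownership check, assuming the device group is registered with the cluster under the same name (apps-ms); the switch command is only needed if the other node currently owns the set:

root@Node2 # metaset -s apps-ms                     # the host flagged as Owner holds the set
root@Node2 # cldevicegroup switch -n Node2 apps-ms  # move the device group to Node2 if required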


8. Add all three devices to the corresponding diskset (apps-ms).

root@Node2 # metaset -s apps-ms -a /dev/did/rdsk/d95 /dev/did/rdsk/d96 /dev/did/rdsk/d97
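Listing the set again should now show the three new DID drives in its drive list:

root@Node2 # metaset -s apps-ms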
9. Attach these devices to the specific metadevice (here it is d300) using the metattach command.

root@Node2 # metattach -s apps-ms d300 /dev/did/rdsk/d95s0 /dev/did/rdsk/d96s0 /dev/did/rdsk/d97s0
apps-ms/d300: components are attached


10. Confirm the devices are attached properly using the command below.

root@Node2 # metastat -s apps-ms -p d300
apps-ms/d300 2 3 d6s0 d7s0 d8s0 -i 32b \
         3 d95s0 d96s0 d97s0 -i 32b
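
In this -p output, d300 is now a concatenation of two stripes: the original three-way stripe (d6s0 d7s0 d8s0) and the newly attached three-way stripe (d95s0 d96s0 d97s0), each with a 32-block interlace.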
11. Once the above result is confirmed, the file system can be grown using the command below.

root@Node2 # growfs -M /apps/data /dev/md/apps-ms/rdsk/d300
/dev/md/apps-ms/rdsk/d300: 1257996288 sectors in 76782 cylinders of 64 tracks, 256 sectors
614256.0MB in 12797 cyl groups (6 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98592, 197152, 295712, 394272, 492832, 591392, 689952, 788512, 887072,
Initializing cylinder groups:
...............................................................................
...............................................................................
...............................................................................
..................
super-block backups for last 10 cylinder groups at:
1257026336, 1257124896, 1257223456, 1257322016, 1257420576, 1257519136,
1257617696, 1257716256, 1257814816, 1257913376,
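Note that growfs -M expands the file system while it stays mounted; write access to /apps/data is suspended for the duration of the expansion, so it is best run in a quiet window.
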
12. After successful execution of the above command, the file system has grown; it is now around 600 GB.

root@Node2 # df -h|grep d300
/dev/md/apps-ms/dsk/d300 591G 258G 330G 44% /apps/data

13. Below are the corresponding log entries generated in /var/adm/messages during the above activity.
System Logs:
Dec 21 10:07:21 Node1 Cluster.devices.did: [ID 287043 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000120d0s2 created for instance 95.
Dec 21 10:07:22 Node1 Cluster.devices.did: [ID 536626 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000104d0s2 created for instance 96.
Dec 21 10:07:22 Node1 Cluster.devices.did: [ID 624417 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 created for instance 97.
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d95s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d96s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d96s0 has changed to OK
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d95s0 has changed to OK
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d97s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d97s0 has changed to OK
Dec 21 10:07:39 Node1 Cluster.devices.did: [ID 466922 daemon.notice] obtaining access to all attached disks
