Wednesday, July 27, 2011

Configuration to HTML (cfg2html)

This is not a new tool, but I still want to say a little about it: it is a powerful and very useful tool that will make a system administrator's life a lot easier and better organized. If you have never tried it, I suggest you do. The information it gathers is complete and all on a single page. HP servers have their own tool called “Nickel”, but I found cfg2html to be faster and more complete. All you need to do is copy the script to your server and run it; after a few runs you can harvest the output. Below I attach a listing of what the script can collect.
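If you want to harvest the output regularly, you can schedule the script from cron. This is just a sketch: the install path, the output directory, and the -o flag are assumptions here, so check the usage of the version you downloaded before copying it.

```shell
# Hypothetical crontab entry: run cfg2html every Sunday at 06:00.
# /opt/cfg2html and the -o (output directory) flag are assumptions --
# adjust to where you copied the script and what its help output reports.
0 6 * * 0 /opt/cfg2html/cfg2html -o /var/adm/cfg2html >/dev/null 2>&1
```

With something like this in root's crontab, you always have a recent HTML snapshot of the box to compare against after changes or failures.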

Contents
System Hardware and Operating System Summary
Hardware and OS Information
showrev
Hardware Configuration (prtdiag)
Disk Device Listing
Disks
Solaris Volume Manager (SVM)
SVM Version
Status of SVM Meta Database
SVM Metadevice status
SVM Configuration (concise format)
SVM Configuration (md.tab format)
Local File Systems and Swap
Versions of /etc/vfstab
Contents of vfstab
Currently Mounted File Systems
Disk Utilization (GB)
Swap Device Listing
ZFS Configuration
ZFS Version
zpool list
zpool status
zfs list
zfs get all (defaults omitted)
NFS Configuration
Contents of dfstab
Remote file systems mounted via NFS
Local file systems shared via NFS
Local file systems mounted on remote hosts via NFS
Zone/Container Information
Zone Listing
Configuration for Zone global
Network Settings
ifconfig -a output
dladm show-dev output
Open Ports
Routing Table
nsswitch.conf
resolv.conf
Hosts file
Netmasks
NTP daemon configuration
EEPROM
EEPROM Settings
Versions of /etc/system
Contents of /etc/system
Cron
crontabs
cron.allow
cron.deny
System Log
syslog.conf
Password and Group files
/etc/passwd
/etc/group
Software
Packages Installed
Patches Installed
Resource Limits
sysdef
ulimit -a
Projects Listing (projects -l)
Contents of /etc/project
Services
Service Listing (svcs -a)
inittab
Start-Up Script Listing
/etc/rc1.d/S10lu
/etc/rc2.d/S10lu
/etc/rc2.d/S20sysetup
/etc/rc2.d/S40llc2
/etc/rc2.d/S42ncakmod
/etc/rc2.d/S47pppd
/etc/rc2.d/S70sckm
/etc/rc2.d/S70uucp
/etc/rc2.d/S72autoinstall
/etc/rc2.d/S73cachefs.daemon
/etc/rc2.d/S76ACT_dumpscript
/etc/rc2.d/S81dodatadm.udaplt
/etc/rc2.d/S89PRESERVE
/etc/rc2.d/S90loc.ja.cssd
/etc/rc2.d/S90wbem
/etc/rc2.d/S90webconsole
/etc/rc2.d/S91afbinit
/etc/rc2.d/S91gfbinit
/etc/rc2.d/S91ifbinit
/etc/rc2.d/S91jfbinit
/etc/rc2.d/S91zuluinit
/etc/rc2.d/S94Wnn6
/etc/rc2.d/S94atsv
/etc/rc2.d/S94ncalogd
/etc/rc2.d/S95IIim
/etc/rc2.d/S98deallocate
/etc/rc2.d/S99audit
/etc/rc2.d/S99dtlogin
/etc/rc2.d/S99sneep
/etc/rc3.d/S16boot.server
/etc/rc3.d/S50apache
/etc/rc3.d/S52imq
/etc/rc3.d/S75seaport
/etc/rc3.d/S76snmpdx
/etc/rc3.d/S77dmi
/etc/rc3.d/S80mipagent
/etc/rc3.d/S81volmgt
/etc/rc3.d/S82initsma
/etc/rc3.d/S84appserv
/etc/rc3.d/S90samba
/etc/rc3.d/S92route
/etc/rcS.d/S29wrsmcfg
/etc/rcS.d/S51installupdates
Oracle
Oracle Database Instances Running
Oracle Version

Wednesday, July 6, 2011

Growing a soft partition and resizing filesystem in Solaris Volume Manager

I need to increase the filesystem called /bkp

root@solaris:~ # df -h /bkp
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d51 44G 26G 18G 60% /bkp

It’s mounted on a soft partition

root@solaris:~ # metastat d51
d51: Soft Partition
Device: d5
State: Okay
Size: 93298688 blocks (44 GB)
Extent Start Block Block count
0 20981760 10485760
1 54536288 82812928

d5: Concat/Stripe
Size: 143349312 blocks (68 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t2d0s2 0 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
c1t2d0 Yes id1,sd@SSEAGATE_ST373307LSUN72G_3HZ9R8BN00007523GZY7
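The 44 GB figure in the metastat output comes straight from the block count: SVM reports sizes in 512-byte blocks. A quick sanity check of the conversion:

```shell
# 93298688 blocks x 512 bytes/block, converted to GB (1 GB = 1024^3 bytes)
echo $(( 93298688 * 512 / 1024 / 1024 / 1024 ))   # -> 44
```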

Here I attach a LUN to metadevice d5

root@solaris:~ # metattach d5 /dev/rdsk/emcpower33c
d5: component is attached

Now d5 has an internal disk and a LUN from the storage array

root@solaris:~ # metastat -p d5
d5 2 1 c1t2d0s2 \
1 /dev/dsk/emcpower33c

Here is the command to increase the soft partition

root@solaris:~ # metattach d51 10g
d51: Soft Partition has been grown
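The 10g argument is human-readable shorthand; internally SVM works in 512-byte blocks, so 10 GB corresponds to:

```shell
# 10 GB expressed in 512-byte blocks: 10 * 1024^3 bytes / 512 bytes per block
echo $(( 10 * 1024 * 1024 * 1024 / 512 ))   # -> 20971520
```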

After you increase the soft partition, you need to increase the filesystem with growfs

root@solaris:~ # growfs -M /bkp /dev/md/rdsk/d51
/dev/md/rdsk/d51: Unable to find Media type. Proceeding with system determined parameters.
Warning: 5376 sector(s) in last cylinder unallocated
/dev/md/rdsk/d51: 116367360 sectors in 11436 cylinders of 24 tracks, 424 sectors
56820,0MB in 1144 cyl groups (10 c/g, 49,69MB/g, 6016 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 102224, 204416, 306608, 408800, 510992, 613184, 715376, 817568, 919760,
Initializing cylinder groups:
………………….
super-block backups for last 10 cylinder groups at:
115401920, 115504112, 115606304, 115708496, 115810688, 115912880, 116015072,
116117264, 116219456, 116321648

root@solaris:~ # df -h /bkp
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d51 55G 26G 28G 48% /bkp

Growing Sun Cluster File System with new Disks.

Setup Details:
Number of Nodes: 2
Node Names: Node1 and Node2
Cluster: Sun Cluster 3.2
OS: Solaris 9/10


I want to add 300 GB (3 x 100 GB) of SAN LUNs to one of the cluster mount points (/apps/data).


root@Node2 # df -h|grep d300
/dev/md/apps-ms/dsk/d300 295G 258G 35G 89% /apps/data

1. Add the disks to both systems (shared) on the SAN.

2. Configure all the fibre channel controllers on both nodes with the steps below.

root@Node1 # cfgadm -al|grep fc
c4 fc-fabric connected configured unknown
c5 fc connected unconfigured unknown
c6 fc-fabric connected configured unknown
c7 fc connected unconfigured unknown
root@Node1 # cfgadm -c configure c4 c5 c6 c7


3. Run devfsadm to configure the new devices

root@Node1 # devfsadm -C

(Repeat steps 2 and 3 on all cluster nodes.)
4. Run the format command to list all the disks. The newly configured disks can be seen at the top of the format output, as below (if the disks are not already labeled).

root@Node1 # format
Searching for disks...done
c8t6005076305FFC08C0000000000000103d0: configured with capacity of 99.98GB
c8t6005076305FFC08C0000000000000104d0: configured with capacity of 99.98GB
c8t6005076305FFC08C0000000000000120d0: configured with capacity of 99.98GB


5. Use format to create partitions on each disk as below.

s7 -> 100 MB (this 100 MB is reserved for metadb creation; not mandatory)
s0 -> remaining space.


6. Create corresponding cluster devices (global device path) using scgdevs command.
root@Node2 # scgdevs
Configuring DID devices
did instance 95 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 created for instance 95.
did instance 96 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 created for instance 96.
did instance 97 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 created for instance 97.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
(the above command created DID devices d95, d96, and d97)

7. Confirm these devices are available on both nodes. The same devices must appear under each hostname, as shown below.
root@Node2 # scdidadm -L|egrep 'd95|d96|d97'
95 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 /dev/did/rdsk/d95
95 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 /dev/did/rdsk/d95
96 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 /dev/did/rdsk/d96
96 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 /dev/did/rdsk/d96
97 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 /dev/did/rdsk/d97
97 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 /dev/did/rdsk/d97



The following steps must be done on the node that owns the metaset (run metaset -s apps-ms and confirm who the owner is).


8. Add all three devices to the corresponding metaset (apps-ms).

root@Node2 # metaset -s apps-ms -a /dev/did/rdsk/d95 /dev/did/rdsk/d96 /dev/did/rdsk/d97
9. Attach these devices to the specific metadevice (here it is d300) using the metattach command.

root@Node2 # metattach -s apps-ms d300 /dev/did/rdsk/d95s0 /dev/did/rdsk/d96s0 /dev/did/rdsk/d97s0
apps-ms/d300: components are attached


10. Confirm the devices are attached properly using the command below.

root@Node2 # metastat -s apps-ms -p d300
apps-ms/d300 2 3 d6s0 d7s0 d8s0 -i 32b \
3 d95s0 d96s0 d97s0 -i 32b
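In the metastat output, the line describes a concat of two stripes (the old 3-wide stripe plus the new one), and -i 32b is the stripe interlace given in 512-byte blocks. In bytes that works out to:

```shell
# stripe interlace: 32 blocks x 512 bytes/block
echo $(( 32 * 512 ))   # -> 16384 bytes, i.e. 16 KB
```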
11. Once the above result is confirmed, the file system can be grown using the command below.

root@Node2 # growfs -M /apps/data /dev/md/apps-ms/rdsk/d300
/dev/md/apps-ms/rdsk/d300: 1257996288 sectors in 76782 cylinders of 64 tracks, 256 sectors
614256.0MB in 12797 cyl groups (6 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98592, 197152, 295712, 394272, 492832, 591392, 689952, 788512, 887072,
Initializing cylinder groups:
...............................................................................
...............................................................................
...............................................................................
..................
super-block backups for last 10 cylinder groups at:
1257026336, 1257124896, 1257223456, 1257322016, 1257420576, 1257519136,
1257617696, 1257716256, 1257814816, 1257913376,
12. After successful execution of the above command, the file system has been grown. It is now around 600 GB.

root@Node2 # df -h|grep d300
/dev/md/apps-ms/dsk/d300 591G 258G 330G 44% /apps/data
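As a sanity check, the sector count growfs reported converts as follows (sectors are 512 bytes; df shows a bit less, 591G, because filesystem metadata takes its share):

```shell
# 1257996288 sectors x 512 bytes/sector, converted to GB
echo $(( 1257996288 * 512 / 1024 / 1024 / 1024 ))   # -> 599
```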

13. Below are the corresponding logs generated in /var/adm/messages during the above activity.
System Logs:
Dec 21 10:07:21 Node1 Cluster.devices.did: [ID 287043 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000120d0s2 created for instance 95.
Dec 21 10:07:22 Node1 Cluster.devices.did: [ID 536626 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000104d0s2 created for instance 96.
Dec 21 10:07:22 Node1 Cluster.devices.did: [ID 624417 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 created for instance 97.
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d95s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d96s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d96s0 has changed to OK
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d95s0 has changed to OK
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d97s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d97s0 has changed to OK
Dec 21 10:07:39 Node1 Cluster.devices.did: [ID 466922 daemon.notice] obtaining access to all attached disks