Wednesday, July 27, 2011

Configuration to HTML (cfg2html)

This is not a new tool, but I still want to say a little about it: it is a powerful and very useful tool that makes a system administrator's life a lot easier and better organized. If you have never tried it, I suggest you do. The information it gathers is complete and all on one page. HP servers have their own tool called "Nickel", but I found cfg2html a lot faster and more complete than Nickel. All you need to do is copy the script to your server and run it; after a few runs you can harvest the output. Below I attach what the script can collect.
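A hedged sketch of a typical run (the script name, working directory and output file names vary between cfg2html versions and platforms, so treat the names below as illustrative only):

# cd /var/tmp/cfg2html
# ./cfg2html
# ls
server01.html  server01.txt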

Contents
System Hardware and Operating System Summary
Hardware and OS Information
showrev
Hardware Configuration (prtdiag)
Disk Device Listing
Disks
Solaris Volume Manager (SVM)
SVM Version
Status of SVM Meta Database
SVM Metadevice status
SVM Configuration (concise format)
SVM Configuration (md.tab format)
Local File Systems and Swap
Versions of /etc/vfstab
Contents of vfstab
Currently Mounted File Systems
Disk Utilization (GB)
Swap Device Listing
ZFS Configuration
ZFS Version
zpool list
zpool status
zfs list
zfs get all (defaults omitted)
NFS Configuration
Contents of dfstab
Remote file systems mounted via NFS
Local file systems shared via NFS
Local file systems mounted on remote hosts via NFS
Zone/Container Information
Zone Listing
Configuration for Zone global
Network Settings
ifconfig -a output
dladm show-dev output
Open Ports
Routing Table
nsswitch.conf
resolv.conf
Hosts file
Netmasks
NTP daemon configuration
EEPROM
EEPROM Settings
Versions of /etc/system
Contents of /etc/system
Cron
crontabs
cron.allow
cron.deny
System Log
syslog.conf
Password and Group files
/etc/passwd
/etc/group
Software
Packages Installed
Patches Installed
Resource Limits
sysdef
ulimit -a
Projects Listing (projects -l)
Contents of /etc/project
Services
Service Listing (svcs -a)
inittab
Start-Up Script Listing
/etc/rc1.d/S10lu
/etc/rc2.d/S10lu
/etc/rc2.d/S20sysetup
/etc/rc2.d/S40llc2
/etc/rc2.d/S42ncakmod
/etc/rc2.d/S47pppd
/etc/rc2.d/S70sckm
/etc/rc2.d/S70uucp
/etc/rc2.d/S72autoinstall
/etc/rc2.d/S73cachefs.daemon
/etc/rc2.d/S76ACT_dumpscript
/etc/rc2.d/S81dodatadm.udaplt
/etc/rc2.d/S89PRESERVE
/etc/rc2.d/S90loc.ja.cssd
/etc/rc2.d/S90wbem
/etc/rc2.d/S90webconsole
/etc/rc2.d/S91afbinit
/etc/rc2.d/S91gfbinit
/etc/rc2.d/S91ifbinit
/etc/rc2.d/S91jfbinit
/etc/rc2.d/S91zuluinit
/etc/rc2.d/S94Wnn6
/etc/rc2.d/S94atsv
/etc/rc2.d/S94ncalogd
/etc/rc2.d/S95IIim
/etc/rc2.d/S98deallocate
/etc/rc2.d/S99audit
/etc/rc2.d/S99dtlogin
/etc/rc2.d/S99sneep
/etc/rc3.d/S16boot.server
/etc/rc3.d/S50apache
/etc/rc3.d/S52imq
/etc/rc3.d/S75seaport
/etc/rc3.d/S76snmpdx
/etc/rc3.d/S77dmi
/etc/rc3.d/S80mipagent
/etc/rc3.d/S81volmgt
/etc/rc3.d/S82initsma
/etc/rc3.d/S84appserv
/etc/rc3.d/S90samba
/etc/rc3.d/S92route
/etc/rcS.d/S29wrsmcfg
/etc/rcS.d/S51installupdates
Oracle
Oracle Database Instances Running
Oracle Version

Wednesday, July 6, 2011

Growing a soft partition and resizing filesystem in Solaris Volume Manager

I need to increase the filesystem called /bkp

root@solaris:~ # df -h /bkp
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d51 44G 26G 18G 60% /bkp

It’s mounted on a soft partition

root@solaris:~ # metastat d51
d51: Soft Partition
Device: d5
State: Okay
Size: 93298688 blocks (44 GB)
Extent Start Block Block count
0 20981760 10485760
1 54536288 82812928

d5: Concat/Stripe
Size: 143349312 blocks (68 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t2d0s2 0 No Okay Yes

Device Relocation Information:
Device Reloc Device ID
c1t2d0 Yes id1,sd@SSEAGATE_ST373307LSUN72G_3HZ9R8BN00007523GZY7

Here I attach a LUN to metadevice d5

root@solaris:~ # metattach d5 /dev/rdsk/emcpower33c
d5: component is attached

Now d5 has an internal disk and a LUN from the storage array

root@solaris:~ # metastat -p d5
d5 2 1 c1t2d0s2 \
1 /dev/dsk/emcpower33c

Here is the command to increase the soft partition

root@solaris:~ # metattach d51 10g
d51: Soft Partition has been grown

After you increase the soft partition, you need to increase the filesystem with growfs

root@solaris:~ # growfs -M /bkp /dev/md/rdsk/d51
/dev/md/rdsk/d51: Unable to find Media type. Proceeding with system determined parameters.
Warning: 5376 sector(s) in last cylinder unallocated
/dev/md/rdsk/d51: 116367360 sectors in 11436 cylinders of 24 tracks, 424 sectors
56820,0MB in 1144 cyl groups (10 c/g, 49,69MB/g, 6016 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 102224, 204416, 306608, 408800, 510992, 613184, 715376, 817568, 919760,
Initializing cylinder groups:
………………….
super-block backups for last 10 cylinder groups at:
115401920, 115504112, 115606304, 115708496, 115810688, 115912880, 116015072,
116117264, 116219456, 116321648

root@solaris:~ # df -h /bkp
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d51 55G 26G 28G 48% /bkp

Growing Sun Cluster File System with new Disks.

Setup Details:
Number of Nodes: 2
Node Name: Node1 and Node2
Cluster: Sun Cluster 3.2
OS: Solaris 9/10


I want to add 300GB (3 x 100GB) of SAN LUNs to one of the cluster mount points (/apps/data).


root@Node2 # df -h|grep d300
/dev/md/apps-ms/dsk/d300 295G 258G 35G 89% /apps/data

1. Add the disks to both systems (shared) on the SAN.

2. Configure all the fibre channel controllers on both nodes with the steps below.

root@Node1 # cfgadm -al|grep fc
c4 fc-fabric connected configured unknown
c5 fc connected unconfigured unknown
c6 fc-fabric connected configured unknown
c7 fc connected unconfigured unknown
root@Node1 # cfgadm -c configure c4 c5 c6 c7


3. Run devfsadm to configure the new devices

root@Node1 # devfsadm -C

(Repeat steps 2 and 3 on all cluster nodes.)
4. Run the format command to list all the disks. The newly configured disks can be seen at the top of the format output, as below (if the disks are not already labeled).

root@Node1 # format
Searching for disks...done
c8t6005076305FFC08C0000000000000103d0: configured with capacity of 99.98GB
c8t6005076305FFC08C0000000000000104d0: configured with capacity of 99.98GB
c8t6005076305FFC08C0000000000000120d0: configured with capacity of 99.98GB


5. Format each disk to create partitions as below (see the sketch after this step for copying the layout to the remaining disks).

s7 -> 100MB (this 100MB is reserved for metadb creation; not mandatory)
s0 -> remaining space.
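As an optional shortcut that is not part of the original steps, once the first disk is partitioned with format, its VTOC can be copied to the other disks with prtvtoc and fmthard (device names are the ones from this example):

root@Node1 # prtvtoc /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 | \
fmthard -s - /dev/rdsk/c8t6005076305FFC08C0000000000000104d0s2
root@Node1 # prtvtoc /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 | \
fmthard -s - /dev/rdsk/c8t6005076305FFC08C0000000000000120d0s2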


6. Create the corresponding cluster devices (global device paths) using the scgdevs command.
root@Node2 # scgdevs
Configuring DID devices
did instance 95 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 created for instance 95.
did instance 96 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 created for instance 96.
did instance 97 created.
did subpath Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 created for instance 97.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
(The above command resulted in creating DID devices d95, d96, and d97.)

7. Confirm these devices are available on both nodes. The same DID devices must be listed for each hostname, as shown below.
root@Node2 # scdidadm -L|egrep 'd95|d96|d97'
95 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 /dev/did/rdsk/d95
95 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000120d0 /dev/did/rdsk/d95
96 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 /dev/did/rdsk/d96
96 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000104d0 /dev/did/rdsk/d96
97 Node2:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 /dev/did/rdsk/d97
97 Node1:/dev/rdsk/c8t6005076305FFC08C0000000000000103d0 /dev/did/rdsk/d97



The following steps must be done on the node that has ownership of this metaset (run metaset -s apps-ms and confirm who the owner is, as shown in the example below).
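A hedged example of checking ownership; the output below is illustrative rather than captured from this system, and the node flagged with "Yes" under Owner is the one on which to run the next steps:

root@Node2 # metaset -s apps-ms

Set name = apps-ms, Set number = 1

Host                Owner
  Node1
  Node2              Yes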


8. Add all three devices to the corresponding metaset (apps-ms).

root@Node2 # metaset -s apps-ms -a /dev/did/rdsk/d95 /dev/did/rdsk/d96 /dev/did/rdsk/d97
9. Attach these devices to the specific metadevice (here it is d300) using the metattach command.

root@Node2 # metattach -s apps-ms d300 /dev/did/rdsk/d95s0 /dev/did/rdsk/d96s0 /dev/did/rdsk/d97s0
apps-ms/d300: components are attached


10. Confirm the devices are attached properly using the command below.

root@Node2 # metastat -s apps-ms -p d300
apps-ms/d300 2 3 d6s0 d7s0 d8s0 -i 32b \
3 d95s0 d96s0 d97s0 -i 32b
11. Once the above result is confirmed, the file system can be grown using the command below.

root@Node2 # growfs -M /apps/data /dev/md/apps-ms/rdsk/d300
/dev/md/apps-ms/rdsk/d300: 1257996288 sectors in 76782 cylinders of 64 tracks, 256 sectors
614256.0MB in 12797 cyl groups (6 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98592, 197152, 295712, 394272, 492832, 591392, 689952, 788512, 887072,
Initializing cylinder groups:
...............................................................................
...............................................................................
...............................................................................
..................
super-block backups for last 10 cylinder groups at:
1257026336, 1257124896, 1257223456, 1257322016, 1257420576, 1257519136,
1257617696, 1257716256, 1257814816, 1257913376,
12. After successful execution of the above command, the file system has been grown. Now it is around 600GB.

root@Node2 # df -h|grep d300
/dev/md/apps-ms/dsk/d300 591G 258G 330G 44% /apps/data

13. Below are the corresponding logs generated in /var/adm/messages during the above activity.
System Logs:
Dec 21 10:07:21 Node1 Cluster.devices.did: [ID 287043 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000120d0s2 created for instance 95.
Dec 21 10:07:22 Node1 Cluster.devices.did: [ID 536626 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000104d0s2 created for instance 96.
Dec 21 10:07:22 Node1 Cluster.devices.did: [ID 624417 daemon.notice] did subpath /dev/rdsk/c8t6005076305FFC08C0000000000000103d0s2 created for instance 97.
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d95s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d96s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d96s0 has changed to OK
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d95s0 has changed to OK
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 922726 daemon.notice] The status of device: /dev/did/rdsk/d97s0 is set to MONITORED
Dec 21 10:07:22 Node1 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d97s0 has changed to OK
Dec 21 10:07:39 Node1 Cluster.devices.did: [ID 466922 daemon.notice] obtaining access to all attached disks

Wednesday, June 29, 2011

OS Watcher (OSW) and Lite Onboard Monitor (LTOM)

The following four new white papers have just been released by Oracle's Center of Expertise:

10g Upgrade Companion

Determining CPU Resource Usage for Linux and Unix

Measuring Memory Usage for Linux and Unix

Best Practices for Load Testing
I checked the second and third white papers, both of which are written by Roger Snyde from Oracle Support's Center of Expertise. These white papers describe a tool called OSW (OS Watcher). Oracle Support's Center of Expertise has developed OSWatcher, a script-based tool for Unix and Linux systems that runs and archives output from a number of operating system monitoring utilities, such as vmstat, top, iostat, mpstat and ps.

OSWatcher is available from Metalink as note 301137.1. It is a shell script tool and will run on Unix and Linux servers. It operates as a background process and runs the native operating system utilities at user-settable intervals, by default 30 seconds, and retains an archive of the output for a user settable period, defaulting to 48 hours. This value may be increased in order to retain more information when evaluating performance, and to capture baseline information during important cycle-end periods.
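In practice OSWatcher is usually started in the background with nohup so that it keeps running after you log out; for example, once the tool is unpacked, an invocation with a 60-second snapshot interval and a 10-hour archive might look like this (a sketch of the general form, arguments as described further down):

$ nohup ./startOSW.sh 60 10 &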

Oracle recommends customers download and install OSWatcher on all production and test servers that need to be monitored.

While going through 301137.1, I found the mention of another tool called LTOM(The embedded Lite Onboard Monitor): To collect database metrics in addition to OS metrics consider running LTOM. The Lite Onboard Monitor (LTOM) is a java program designed as a real-time diagnostic platform for deployment to a customer site. LTOM differs from other support tools, as it is proactive rather than reactive. LTOM provides real-time automatic problem detection and data collection. LTOM runs on the customer's UNIX server, is tightly integrated with the host operating system and provides an integrated solution for detecting and collecting trace files for system performance issues. The ability to detect problems and collect data in real-time will hopefully reduce the amount of time it takes to solve problems and reduce customer downtime.

Both OSW and LTOM now provide a graphing utility to graph the data collected. This greatly reduces the need to manually inspect all the output files.


OSWatcher:
I used OSWatcher to monitor CPU/memory/network to investigate problems on servers. I think it's easy to set up, but I had to download it from Metalink.

OS Watcher (OSW) is a collection of UNIX shell scripts intended to collect and archive operating system and network metrics to aid support in diagnosing performance issues. We can download it from metalink. OSW operates as a set of background processes on the server and gathers OS data on a regular basis, invoking such Unix utilities as vmstat, netstat and iostat.
More detail: metalink: 301137.1

After downloading it from Metalink, it's time to set it up:

$ ls osw212.tar
osw212.tar

$ tar xvf osw212.tar
./
./osw/
./osw/Exampleprivate.net
./osw/OSWatcher.sh
./osw/OSWatcherFM.sh
./osw/profile/
./osw/oswnet.sh
./osw/oswsub.sh
./osw/startOSW.sh
./osw/stopOSW.sh
./osw/tarupfiles.sh
./osw/topaix.sh
./osw/README
./osw/OSWgREADME
./osw/src/
./osw/src/OSW_profile.htm
./osw/src/coe_logo.gif
./osw/src/oswg_input.txt
./osw/src/missing_graphic.gif
./osw/src/tombody.gif
./osw/src/watch.gif
./osw/gif/
./osw/oswlnxtop.sh
./osw/private.net
./osw/oswlnxio.sh
./osw/oswg.jar
./osw/tmp/

$ cd osw


Just extract the tar file, then read the README file to get an idea of the utility commands:

startOSW.sh script:
needs 2 arguments, which control the frequency at which data is collected and the number of hours' worth of data to archive.
An optional 3rd argument allows the user to specify a zip utility name to compress the files after they have been created:

ARG1 = snapshot interval in seconds (default 30 seconds).
ARG2 = the number of hours of archive data to store (default 48 hours)
ARG3 (optional) = the name of the zip utility to run if the user wants to compress the files automatically after creation.

Example:

./startOSW.sh
Info...You did not enter a value for snapshotInterval.
Info...Using default value = 30
Info...You did not enter a value for archiveInterval.
Info...Using default value = 48
.
.

./startOSW.sh 60 10 gzip
Info...Zip option IS specified.
Info...OSW will use gzip to compress files.
.
.
Starting OSWatcher V2.1.2 on Tue Jul 21 11:16:40 ICT 2009
With SnapshotInterval = 60
With ArchiveInterval = 10
.
.


stopOSW.sh script:

Example:

./stopOSW.sh

Or use "OSWatcher.sh" run to test:


$ ./OSWatcher.sh

Info...You did not enter a value for snapshotInterval.
Info...Using default value = 30
Info...You did not enter a value for archiveInterval.
Info...Using default value = 48

Testing for discovery of OS Utilities...

VMSTAT found on your system.
IOSTAT found on your system.
MPSTAT found on your system.
NETSTAT found on your system.
TOP found on your system.

Discovery completed.

Starting OSWatcher V2.1.2 on Tue Jul 21 10:29:55 ICT 2009
With SnapshotInterval = 30
With ArchiveInterval = 48

OSWatcher - Written by Carl Davis, Center of Expertise, Oracle Corporation

Starting Data Collection...

osw heartbeat:Tue Jul 21 10:29:55 ICT 2009
.
.

CTRL+C


It's time to try it out (TEST): starting

$ ./startOSW.sh
Info...You did not enter a value for snapshotInterval.
Info...Using default value = 30
Info...You did not enter a value for archiveInterval.
Info...Using default value = 48

Testing for discovery of OS Utilities...

VMSTAT found on your system.
IOSTAT found on your system.
MPSTAT found on your system.
NETSTAT found on your system.
TOP found on your system.

Discovery completed.

Starting OSWatcher V2.1.2 on Tue Jul 21 10:34:24 ICT 2009
With SnapshotInterval = 30
With ArchiveInterval = 48

OSWatcher - Written by Carl Davis, Center of Expertise, Oracle Corporation

Starting Data Collection...

osw heartbeat:Tue Jul 21 10:34:24 ICT 2009
osw heartbeat:Tue Jul 21 10:34:55 ICT 2009
osw heartbeat:Tue Jul 21 10:35:25 ICT 2009
.
.
.
Now it is monitoring... and when you want to stop:

$ ./stopOSW.sh
Terminated


What do I see?

Archives are stored under the osw/archive/ path.

$ find ./archive/ -type f
./archive/oswiostat/oratest01_iostat_09.07.21.1000.dat
./archive/oswslabinfo/oratest01_slabinfo_09.07.21.1000.dat
./archive/oswprvtnet/oratest01_prvtnet_09.07.21.1000.dat
./archive/oswps/oratest01_ps_09.07.21.1000.dat
./archive/oswtop/oratest01_top_09.07.21.1000.dat
./archive/oswvmstat/oratest01_vmstat_09.07.21.1000.dat
./archive/oswmeminfo/oratest01_meminfo_09.07.21.1000.dat
./archive/oswnetstat/oratest01_netstat_09.07.21.1000.dat
./archive/oswmpstat/oratest01_mpstat_09.07.21.1000.dat
.
.
.


From the archive files you can see the stats, and you can also use the archives to make graphs:
use OSWg (more detail: Metalink note 461053.1) to generate the graphs. It requires at least Java version 1.4.2 and needs X-Windows.

Read the OSWgREADME file for help generating the graphs,
and test with some archives:

$ $ORACLE_HOME/jdk/bin/java -version
java version "1.4.2_14"

$ $ORACLE_HOME/jdk/bin/java -jar oswg.jar -i archive/

Starting OSWg V2.1.2
OSWatcher Graph Written by Oracle Center of Expertise
Copyright (c) 2008 by Oracle Corporation

Parsing Data. Please Wait...

Parsing file oratest01_iostat_09.07.21.1000.dat ...
Parsing file oratest01_iostat_09.07.21.1100.dat ...
.
.
.

Parsing Completed.

Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs

Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files

Enter L to Specify Alternate Location of Gif Directory
Enter T to Specify Different Time Scale
Enter D to Return to Default Time Scale
Enter R to Remove Currently Displayed Graphs
Enter P to Generate A Profile
Enter Q to Quit Program

Please Select an Option:5

The Following Devices and Average Service Times Are Ready to Display:

Device Name Average Service Times in Milliseconds

sda 2.0477464788732385
sdb 1.192676056338029

Specify A Case Sensitive Device Name to View (Q to exit): sda

Tuesday, June 28, 2011

Custom Logrotate in Solaris 10

Here I explain how to configure logadm to rotate any system-wide log files according to given criteria.
1. Add the corresponding entries in /etc/logadm.conf in the format below (a logadm -w alternative is sketched after the switch explanations).
root@server1 # tail -3 /etc/logadm.conf
/var/adm/wtmpx -A 1m -o adm -g adm -m 664 -p 1d -t '$file.old.%Y%m%d_%H%M' -z 1
/var/adm/wtmpx -A 1m -g adm -m 664 -o adm -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5
/var/adm/utmpx -A 1m -g adm -m 664 -o adm -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5
/var/adm/loginlog -A 1m -g sys -m 700 -o root -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5
Explanation for each switch:
-A -> Delete any versions that have not been modified for the amount of time specified by age. Specify age as a number followed by an h (hours), d (days), w (weeks), m (months), or y (years).
-o -> the owner of the newly created empty file
-g -> the group of the newly created file
-m -> mode of the new empty file (chmod xxx)
-p -> Rotate a log file after the specified time period (period as d, w, m, y)
-t -> Specify the template to use when renaming log files (here, wtmpx.old.20101225_0757) (see man logadm for more info)
-z -> How many copies of the rotated files to retain on the system.
-P -> Used by logadm to record the last time the log was rotated in /etc/logadm.conf (no need to set this manually)
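As an alternative to editing /etc/logadm.conf by hand, a similar entry can be appended with logadm -w, which validates the options for you. A hedged example for a hypothetical application log:

root@server1 # logadm -w /var/log/myapp.log -A 1m -o root -g sys -m 644 -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5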
2. Once the above entries are in place, execute the logadm -v command to run a log rotation now. logadm reads the /etc/logadm.conf file and, for every entry found in that file, checks the corresponding log file to see if it should be rotated.
root@server1 # logadm -v
# loading /etc/logadm.conf
# processing logname: /var/log/syslog
# using default rotate rules: -s1b -p1w
# using default template: $file.$n
# processing logname: /var/adm/messages
# using default rotate rules: -s1b -p1w
# using default template: $file.$n
# processing logname: /var/cron/log
# using default expire rule: -C10
# processing logname: /var/lp/logs/lpsched
# using default rotate rules: -s1b -p1w
# processing logname: /var/fm/fmd/errlog
# using default expire rule: -C10
# using default template: $file.$n
# processing logname: /var/fm/fmd/fltlog
# using default template: $file.$n
# processing logname: smf_logs
# using default template: $file.$n
# processing logname: /var/adm/pacct
# using default template: $file.$n
# processing logname: /var/log/pool/poold
# using default expire rule: -C10
# using default template: $file.$n
# processing logname: /var/svc/log/system-webconsole:console.log
# using default rotate rules: -s1b -p1w
# using default expire rule: -C10
# using default template: $file.$n
# processing logname: /var/opt/SUNWsasm/log/sasm.log
# using default template: $file.$n
# processing logname: /var/adm/wtmpx
mkdir -p /var/adm # verify directory exists
mv -f /var/adm/wtmpx /var/adm/wtmpx.old.20101225_1250 # rotate log file
touch /var/adm/wtmpx
chown adm:adm /var/adm/wtmpx
chmod 664 /var/adm/wtmpx
# recording rotation date Sat Dec 25 12:50:51 2010 for /var/adm/wtmpx
# processing logname: /var/adm/utmpx
mkdir -p /var/adm # verify directory exists
mv -f /var/adm/utmpx /var/adm/utmpx.old.20101225_1250 # rotate log file
touch /var/adm/utmpx
chown adm:adm /var/adm/utmpx
chmod 664 /var/adm/utmpx
# recording rotation date Sat Dec 25 12:50:51 2010 for /var/adm/utmpx
# processing logname: /var/adm/loginlog
mkdir -p /var/adm # verify directory exists
mv -f /var/adm/loginlog /var/adm/loginlog.old.20101225_1250 # rotate log file
touch /var/adm/loginlog
chown root:sys /var/adm/loginlog
chmod 700 /var/adm/loginlog
# recording rotation date Sat Dec 25 12:50:51 2010 for /var/adm/loginlog
# writing changes to /etc/logadm.conf
As you can see from the last line of the above output, once the logadm command has run successfully, it updates the /etc/logadm.conf file with the -P switch to record the last log rotation time.
root@server1 # tail -3 /etc/logadm.conf
/var/adm/wtmpx -A 1m -P 'Sat Dec 25 12:50:51 2010' -g adm -m 664 -o adm -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5
/var/adm/utmpx -A 1m -P 'Sat Dec 25 12:50:51 2010' -g adm -m 664 -o adm -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5
/var/adm/loginlog -A 1m -P 'Sat Dec 25 12:50:51 2010' -g sys -m 700 -o root -p 1w -t '$file.old.%Y%m%d_%H%M' -z 5
List of new files created in /var/adm
root@server1 # ls -ltr /var/adm/*.old*
-rwx------ 1 root sys 0 Dec 25 11:00 /var/adm/loginlog.old.20101225_1250
-rw-r--r-- 1 root bin 3720 Dec 25 15:49 /var/adm/utmpx.old.20101225_1250
-rw-rw-r-- 1 adm adm 8595060 Dec 25 15:51 /var/adm/wtmpx.old.20101225_1250

Sunday, March 13, 2011

Sun Cluster console

Sun Cluster Console - How it simplifies life even if you don't use Sun Cluster
Recently I have been working with about six Sun Fire V440 servers connected to several 3510 arrays, and one of the pains is doing repetitive steps on all six servers.

Enter Sun Cluster Console (part of Sun Java Enterprise System).

Basically what you need is the SUNWccon package, which after installation is available at /opt/SUNWcluster.

You have to try out /opt/SUNWcluster/bin/ctelnet.

After making sure I had the right display permissions/settings, I did

$ ctelnet v440-1 v440-2 v440-3 v440-4 v440-5 v440-6 &
It pops up 6 terminal screens and a small window containing a text box and a few menu options. When I type in the small box it gets typed in all 6 terminals. This way I can log into all the servers at the same time and do repetitive steps on all of them simultaneously, saving me a tremendous amount of time.

You can use the menu options to temporarily stop typing into any specific host and reset it back to normal later on. A tremendous time saver when you have to set up multiple boxes, say for GRID, CLUSTER, HPC or DB2 DPF.

crlogin and cconsole are also available.
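crlogin takes a list of hostnames just like ctelnet (cconsole typically also needs the console-access details, for example in /etc/clusters and /etc/serialports); a hedged example:

$ /opt/SUNWcluster/bin/crlogin v440-1 v440-2 v440-3 v440-4 v440-5 v440-6 &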

Wednesday, February 23, 2011

Performance Collection Script for Solaris 10

I had the need to collect a bunch of system statistics on Solaris 10 servers during a performance test. I wanted to get these statistics on a much more frequent basis than I have sar configured for, and I also wanted to include some scripts that I have found useful for collecting other performance statistics. So, I wrote a quick script to use during the test. One script that I plugged into mine is one written by Brendan Gregg. It's called "nicstat" - it collects performance statistics for network interfaces. It can be downloaded from http://www.brendangregg.com/Perf/network.html#nicstat

To use:
1) Download the script from here: http://sunblog.mbrannigan.com/collect.tgz
2) Unzip the collect.tgz archive with gtar.
3) Put a copy of Brendan Gregg's nicstat script into the collect subdirectory.
4) Run the collect.sh script.

Results:
When the script first starts up, it will create a subdirectory of the output directory named after the system you are on. After this, the script will loop, collecting various statistics during its execution and storing the results in the directory it created. Currently, the script will collect the following statistics:
• netstat -an
• nicstat
• A list of TCP sessions in the ESTABLISHED state
• A count of TCP sessions in the ESTABLISHED state (based on SRC and DEST IPs)
• A list of TCP sessions in the TIME_WAIT state
• A count of TCP sessions in the TIME_WAIT state (based on SRC and DEST IPs)
• netstat -i
• TCP statistics from netstat -s
• I/O statistics from iostat -xnz
• Memory / CPU statistics from vmstat
• System event activity from vmstat -s
• Paging activity from vmstat -p
• Swap activity from vmstat -S

How to stop it:
The script will sleep for 5 minutes and then append to the end of the various files that it creates. To stop collection, simply press Ctrl-C. The snooze time between collections can be changed by modifying the SNOOZE parameter. It is currently configured to snooze 300 seconds (5 minutes).
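For illustration, a minimal sketch of what such a collection loop might look like (this is not the actual collect.sh; the output file names and the subset of stat commands chosen here are examples):

#!/bin/ksh
# Minimal sketch of a periodic collection loop in the spirit of collect.sh.
SNOOZE=300                          # seconds between collection passes
OUTDIR=./output/`hostname`
mkdir -p $OUTDIR

while true
do
    STAMP=`date '+%Y-%m-%d %H:%M:%S'`
    echo "=== $STAMP" >> $OUTDIR/netstat_an.out
    netstat -an       >> $OUTDIR/netstat_an.out
    echo "=== $STAMP" >> $OUTDIR/iostat_xnz.out
    iostat -xnz 5 3   >> $OUTDIR/iostat_xnz.out
    echo "=== $STAMP" >> $OUTDIR/vmstat.out
    vmstat 5 3        >> $OUTDIR/vmstat.out
    sleep $SNOOZE
done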

Changing a disk label (EFI / SMI)

I had inserted a drive into a V440 and after running devfsadm, I ran format on the disk. I was presented with the following partition table:

partition> p
Current partition table (original):
Total disk sectors available: 143358320 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector
0 usr wm 34 68.36GB 143358320
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 143358321 8.00MB 143374704

This disk was used in a zfs pool and, as a result, uses an EFI label. The more familiar label that is used is an SMI label (8 slices; numbered 0-7 with slice 2 being the whole disk). The advantage of the EFI label is that it supports LUNs over 1TB in size and prevents overlapping partitions by providing a whole-disk device called cxtydz rather than using cxtydzs2.

However, I want to use this disk for UFS partitions, which means I need to put the SMI label back on the device. Here's how it's done:

# format -e
...
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.
Continue? y
Auto configuration via format.dat[no]?
Auto configuration via generic SCSI-2[no]?
partition> q
...
format> q
#

Running format again will show that the SMI label was placed back onto the disk:

partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 25 129.19MB (26/0/0) 264576
1 swap wu 26 - 51 129.19MB (26/0/0) 264576
2 backup wu 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 52 - 14086 68.10GB (14035/0/0) 142820160
7 unassigned wm 0 0 (0/0/0) 0

Monday, February 7, 2011

DMX Configuration Options

I've been looking at DMX configuration options this week. Essentially the question is how best to lay out a DMX-3 or DMX-4 array with a tiered configuration. For me there are two options and it's pretty clear which I prefer. First a little background. The following diagram shows the way DMX drives are deployed within a fully configured array. The array is divided into "quadrants", splitting each drive bay into two.

Back-end directors (BED) provide connectivity to the drives as represented by the colour scheme. There are up to 4 BED pairs available for a full configuration.
Option 1 - Dedicated Quadrants


One option is to dedicate quadrants to a workload or tier; for example, tier 1 storage is given quadrant 1. Theoretically this should provide that tier with uncontended back-end bandwidth, as all the tier 1 storage will reside in the same location. What it doesn't do is let tier 1 storage utilise unused bandwidth on the other BEDs, which, as the array scales, may prove to be a problem.

Option 2 - Mixed Workload

In this option, disks are spread across the whole array, perhaps placing tier 1 disks first followed by tier 2 devices. In this way, the I/O load is spread across the whole configuration. As new disks are added, they are distributed throughout the array, keeping performance even. The risk with this configuration lies in whether tier 2 storage will affect tier 1, as the array becomes busy. This can be mitigated with Cache partitioning and LUN prioritisation options.
I prefer the second option when designing arrays, unless there is a very good reason to segment workload. Distributing disks gives a better overall performance balance, reducing the risk of fragmenting (and consequently wasting) resources. I would also use the same methodology for other enterprise arrays too.

Bear in mind if you choose to use Enterprise Flash Drives (EFDs) that they can only be placed in the first storage bays either side of the controller bay and with a limit of 32 per quadrant. Mind you, if you can afford more than 32 drives then you've probably paid for your onsite EMC support already!!

There's also the question of physical space. As the drives are loaded into the array, if only a small number of them are tier 1, then potentially cabinet space is wasted. Either that or the configuration has to be built in an unbalanced fashion, perhaps placing more lower-tier storage to the right of the array, using the expansion BEDs.

The second diagram shows how an unbalanced array could look - tier 2 devices on the left and right are loaded at different quantities and so lead to an unbalanced layout.


How Many IOPS? Enterprise class arrays

"How many IOPS can my RAID group sustain?" in relation to Enterprise class arrays.

Obviously the first question is to determine what the data profile is, however if it isn't known, then assume the I/O will be 100% random. If all the I/O is random, then each I/O request will require a seek (move the head to the right cylinder on the disk) and the disk to rotate to the start of the area to read (latency) which for 15K drives is 2ms. Taking the latest Seagate Cheetah 15K fibre channel drives, each drive has an identical seek time of 3.4ms for reads. This is a total time of 5.4ms, or 185 IOPS (1000/5.4). The same calculation for a Seagate SATA drive gives a worst case throughput of 104 IOPS, approximately half the capacity of the fibre channel drive.

For a RAID group of RAID-5 3+1 fibre channel drives, data will be spread across all 4 drives, so this RAID group has a potential worst case I/O throughput of 740 IOPS.
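The arithmetic can be checked quickly with bc, using the 3.4ms seek and 2ms rotational latency figures quoted above and the four drives of the 3+1 group:

$ echo "1000 / (3.4 + 2.0)" | bc
185
$ echo "4 * 185" | bc
740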

Clearly this is a "rule of thumb" as in practical terms, not every I/O will be completely random and incur the seek/latency penalties. Also, enterprise arrays have cache (the drives themselves have cache) and plenty of clever algorithms to mask the issues of the moving technology.

There are also plenty of other points of contention within the host->array stack which makes this whole subject more complicated, however, when comparing different drive speeds, calculating a worst case scenario gives a good indication of how differing drives will perform.

Incidentally, as I just mentioned, the latest Seagate 15K drives (146GB, 300GB and 460GB) all have the same performance characteristics, so tiering based on drive size isn't that useful. The only exception to this is when a high I/O throughput is required. With smaller drives, data has to be spread across more spindles, increasing the available bandwidth. That's why I think tiering should be done on drive speed, not size.

Tuesday, January 25, 2011

Oracle 10g Installation on Solaris 10 Kernel Parameter

Setup the Solaris Kernel

In Solaris 10, you are not required to make changes to the /etc/system file to implement the System V IPC parameters. Solaris 10 uses the resource control facility for its implementation.
Parameter               Resource Control          Recommended Value
noexec_user_stack       NA                        1
semsys:seminfo_semmni   project.max-sem-ids       100
semsys:seminfo_semmsl   process.max-sem-nsems     256
shmsys:shminfo_shmmax   project.max-shm-memory    4294967295
shmsys:shminfo_shmmni   project.max-shm-ids       100

Many kernel parameters have been replaced by so called resource controls in Solaris 10. It is possible to change resource controls using the prctl command. All shared memory and semaphore settings are now handled via resource controls, so any entries regarding shared memory or semaphores (shm & sem) in /etc/system will be ignored.

Here is the procedure we followed to modify the kernel parameters on Solaris 10 / Oracle 10.2.0.3.

Unlike earlier releases of Solaris, most of the system parameters needed to run Oracle are already set properly, so the only one you need is the maximum shared memory parameter. In earlier versions this was called SHMMAX and was set by editing the /etc/system file and rebooting. With Solaris 10 you set this by modifying a «Resource Control Value». You can do this temporarily by using prctl, but that is lost at reboot so you will need to add the command to the oracle user's $HOME/.profile.
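A hedged example of the temporary approach, applied to the current shell's project (the 4gb value is illustrative, and the change is lost at reboot, which is why it would otherwise go into the oracle user's $HOME/.profile):

$ prctl -n project.max-shm-memory -v 4gb -r -i process $$
$ prctl -n project.max-shm-memory -i process $$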

The other option is to create a default project for the oracle user.

# projadd -U oracle -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle

What this does:

* Makes a project named "user.oracle" in /etc/project with the user oracle as its only member.

# cat /etc/project

system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
user.oracle:100::oracle::project.max-shm-memory=(priv,4294967296,deny)

* Because the name was of the form "user.username" it becomes the oracle user's default project.

* The value of the maximum shared memory is set to 4GB; you might want to use a larger value here if you have more memory and swap.

* No reboot is needed; the user will get the new value at their next login.

Now you can also modify the max-sem-ids Parameter:

# projmod -s -K "project.max-sem-ids=(priv,256,deny)" user.oracle

Check the Parameters as User oracle

$ prctl -i project user.oracle

project: 100: user.oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-contracts
privileged 10.0K - deny -
system 2.15G max deny -
project.max-device-locked-memory
privileged 125MB - deny -
system 16.0EB max deny -
project.max-port-ids
privileged 8.19K - deny -
system 65.5K max deny -
project.max-shm-memory
privileged 4.00GB - deny -
system 16.0EB max deny -
project.max-shm-ids
privileged 128 - deny -
system 16.8M max deny -
project.max-msg-ids
privileged 128 - deny -
system 16.8M max deny -
project.max-sem-ids
privileged 256 - deny -
system 16.8M max deny -
project.max-crypto-memory
privileged 498MB - deny -
system 16.0EB max deny -
project.max-tasks
system 2.15G max deny -
project.max-lwps
system 2.15G max deny -
project.cpu-shares
privileged 1 - none -
system 65.5K max none -
zone.max-lwps
system 2.15G max deny -
zone.cpu-shares
privileged 1 - none -

Create Unix Group «dba»

$ groupadd -g 400 dba
$ groupdel dba

Create Unix User «oracle»

$ useradd -u 400 -c "Oracle Owner" -d /export/home/oracle \
-g "dba" -m -s /bin/ksh oracle

Setup ORACLE environment ($HOME/.bash_profile) as follows

# Setup ORACLE environment

ORACLE_HOME=/opt/oracle/product/10.2.0; export ORACLE_HOME
ORACLE_SID=QUO1; export ORACLE_SID
TNS_ADMIN=/home/oracle/config/10.2.0; export TNS_ADMIN
ORA_NLS10=${ORACLE_HOME}/nls/data; export ORA_NLS10
CLASSPATH=${CLASSPATH}:${ORACLE_HOME}/jdbc/lib/classes12.zip
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_OWNER=oracle; export ORACLE_OWNER
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
LD_LIBRARY_PATH=/usr/lib:${ORACLE_HOME}/lib:${ORACLE_HOME}/lib32; export LD_LIBRARY_PATH

# Set up the search paths

PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/usr/sfw/sbin
PATH=$PATH:/usr/bin:/usr/ccs/bin:/usr/openwin/bin:/usr/sadm/bin
PATH=$PATH:/usr/sfw/bin:/usr/X11/bin:/usr/j2se/bin
PATH=$PATH:$ORACLE_HOME/bin

Install Oracle Software

To extract the installation archive files, perform the following steps:

$ gunzip filename.cpio.gz
$ cpio -idcmv < filename.cpio

Check oraInst.loc File

If you used Oracle before on your system, then you must edit the Oracle Inventory file, usually located in /var/opt/oracle/oraInst.loc.

Install with the Installer in interactive mode

Install Oracle 10g with the Oracle Installer:

$ DISPLAY=:0.0
$ export DISPLAY
$ ./runInstaller

Edit the Database Startup Script /var/opt/oracle/oratab

QUO1:/opt/oracle/product/10.2.0:Y

Create Password File

If the DBA wants to start up an Oracle instance, there must be a way for Oracle to authenticate this DBA, that is, to verify he is allowed to do so. Obviously, his password cannot be stored in the database, because Oracle cannot access the database if the instance has not been started up. Therefore, the authentication of the DBA must happen outside of the database. The init parameter remote_login_passwordfile specifies whether a password file is used to authenticate the DBA. If it is set to either shared or exclusive, a password file will be used.

Default location and file name: the default location for the password file is $ORACLE_HOME/dbs/orapw$ORACLE_SID.

Deleting a password file: if password file authentication is no longer needed, the password file can be deleted and the init parameter remote_login_passwordfile set to none.

Password file state: whether a password file is shared or exclusive is also stored in the password file. After its creation, the state is shared. The state can be changed by setting remote_login_passwordfile and starting the database; that is, the database overwrites the state in the password file when it is started up. A password file whose state is shared can only contain SYS.

Creating a password file: password files are created with the orapwd tool.

$ orapwd file=orapwQUO1 password=manager entries=5 force=y

Create a symbolic link from $ORACLE_HOME/dbs to the password file.

Create the Database

Edit the CREATE DATABASE file initQUO1.ora and create a symbolic link from $ORACLE_HOME/dbs to your location.

$ cd $ORACLE_HOME/dbs
$ ln -s /home/oracle/config/10.2.0/initQUO1.ora initQUO1.ora
$ ls -l
lrwxrwxrwx 1 oracle dba 39 Jun 5 12:55 initQUO1.ora -> /home/oracle/config/10.2.0/initQUO1.ora
lrwxrwxrwx 1 oracle dba 36 Jun 5 12:58 orapwQUO1 -> /home/oracle/config/10.2.0/orapwQUO1

First start the Instance, just to test your initQUO1.ora file for correct syntax and system resources.

$ cd /export/home/oracle/config/10.2.0/
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> shutdown immediate

Now you can create the Database

SQL> @initQUO1.sql
SQL> shutdown immediate
SQL> startup

Check the Logfile: initQUO1.log

Start Listener

$ lsnrctl start LSNRQUO1

Automatically Start / Stop the Database

Solaris 10 has introduced the Solaris Service Management Facility to start / stop Services.

Services that are started by traditional rc scripts (referred to as legacy services) will generally continue to work as they always have. They will show up in the output of svcs(1), with an FMRI based on the pathname of their rc script, but they can not be controlled by svcadm(1M). They should be stopped and started by running the rc script directly.

$ svcs | grep oracle

legacy_run 8:27:00 lrc:/etc/rc3_d/S99oracle

To start the database automatically at boot time, create or use our startup script oracle, which must be installed in /etc/init.d. Create symbolic links from the startup directories (a minimal sketch follows below).

lrwxrwxrwx 1 root root S99oracle -> ../init.d/oracle
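For reference, a minimal sketch of such an /etc/init.d/oracle script, assuming the standard dbstart/dbshut scripts shipped in $ORACLE_HOME/bin and the listener name used above (adjust paths, listener name and run-level link to your environment):

#!/sbin/sh
# Minimal sketch -- not a hardened production script.
ORACLE_HOME=/opt/oracle/product/10.2.0
ORACLE_OWNER=oracle

case "$1" in
start)
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl start LSNRQUO1"
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
        ;;
stop)
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl stop LSNRQUO1"
        ;;
*)
        echo "Usage: $0 { start | stop }"
        ;;
esac

Link it from the run-level directory as shown above, for example: ln -s /etc/init.d/oracle /etc/rc3.d/S99oracle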

Thursday, January 20, 2011

Solaris Cluster 3.2 Software and Oracle 10g Release 2 RAC Setup Procedure
Table of Contents
1 Introduction
2 Setup Descriptions
3 Hardware Stack
4 Software Stack
5 What Is Provided With This Installation?
5.1 Number of Oracle RAC Nodes
5.2 Clusterware
5.3 Redundancy for Private Interfaces
5.4 Redundancy for I/O Paths
5.5 Volume Manager
5.6 Cluster File System (QFS)
5.7 Latest Software Versions
6 Pre-installation Requirements
6.1 Firmware Update
6.2 Setup Information
6.3 Configuring the Storage
6.4 Documentation and Installation Software
6.4.1 Documents
6.4.2 Installation Software
7 Sun Software Installation
7.1 Installing the Solaris OS
7.2 Installing Cluster Control Panel in the Administrative Console
7.3 Installing Packages for Sun Cluster Framework and Data Service for Oracle RAC
7.5 Creating a Cluster
7.6 Preparing for Oracle UDLM Package Installation
7.7 Installing the UDLM Package
7.8 Configuring RAC Framework and Solaris Volume Manager Resource
7.9 Configuring a Solaris Volume Manager Metaset and Metadevices
7.10 Configuring QFS
7.11 UNIX Pre-installation Steps
7.12 Using the Oracle Universal Installer for Real Application Clusters
8 Preparing for Oracle RAC Installation
9 Installing Oracle Clusterware (CRS)
10 Installing the Oracle RDBMS Server
11 Creating a Database
12 Adding 3rd node to RAC
13 Removing a node from Sun Cluster 3.2
13.1 UNIX Command History executed during node removal
14 Appendix A: Server Information Table
15 Appendix B: Storage Information Table
1 Introduction
This document is a detailed step-by-step guide for installing the Solaris 10 10/08
(s10s_u6wos_07b SPARC) Operating System, Solaris Cluster 3.2 software, the QFS 4.6
cluster file system, and Oracle 10g Release 2 Real Application Clusters (Oracle 10gR2
RAC). This document also provides detailed instructions on how to configure QFS and
Solaris Volume Manager so they can be used with Oracle RAC.
This document uses a two-node setup to show the installation process, but the same
procedure can be used for setups with a different number of nodes and different hardware
components.
2 Setup Descriptions
The setup used in this installation was very simple: A Sun M5000 Server was subdomained
in two domains. Each domain was used as a node for the two-node Sun Cluster
software and Oracle RAC installation. SAN Storage 9900 was connected to a fibre
channel (FC) switch and used for shared storage.
Figure 1: Diagram of the pracdb Setup (two M5000 domains, pracdb01 (domain 0) and pracdb02 (domain 1), on the public network, with Gbit Ethernet and fibre channel connections through a fibre channel switch to a SAN 9990 presenting 3 LUNs of 33GB, plus an administrative console running the Solaris 10 OS)
3 Hardware Stack
Table 1 identifies the hardware stack used in this project.
Table 1: Hardware Stack
Attribute                    Node 1                                 Node 2
Server                       Sun M5000 Server (domain A)            Sun M5000 Server (domain B)
CPU                          SPARC64-VII 2.4GHz, 2 CPUs x 8 cores   SPARC64-VII 2.4GHz, 2 CPUs x 8 cores
RAM                          32 GB                                  32 GB
Host Channel Adaptor (HCA)   1 dual-port HCA, 2Gbps                 1 dual-port HCA, 2Gbps
Ethernet ports               bge0, bge1, nxge0, nxge1, nxge2        bge0, bge1, nxge0, nxge1, nxge2
Storage                      Sun StorEdge 9990                      Sun StorEdge 9990
4 Software Stack
Table 2 describes the software stack used in this project.
Table 2: Software Stack
Role                 Vendor              Product Version
Operating System     Sun Microsystems    Solaris 10 10/08 (s10s_u6wos_07b SPARC)
Database Server      Oracle              Oracle RAC RDBMS server 10.2.0.2, SPARC 64-bit
Clusterware          Sun Microsystems    Sun Cluster 3.2 software
Cluster File System  Sun Microsystems    QFS 4.6
Volume Manager       Sun Microsystems    Solaris Volume Manager (part of the Solaris 10 OS)
5 What Is Provided With This Installation?
The installation procedures described here achieve a complete Oracle RAC installation.
This installation addresses the needs of most Oracle RAC installations on the Solaris OS.
In addition, it leverages the Solaris OS stack using Sun Cluster 3.2 software, Solaris
Volume Manager, and QFS, eliminating the need for any third-party products. This
section discusses the different aspects of the installation, what is provided, and how to
modify the installation to obtain RAS (Reliability, Availability and Serviceability)
features.
5.1 Number of Oracle RAC Nodes
The current setup has only two nodes, but up to eight/sixteen nodes can be used without any
modification. Verification has been done by adding and removing a third node (pracdb03).
5.2 Clusterware
Sun Cluster 3.2 software is the cluster solution and it is integrated well with the Solaris
OS. Sun Cluster software can bring more robustness to Oracle RAC by providing many
advantages over other cluster solutions.
5.3 Redundancy for Private Interfaces
This setup provides redundancy for private interfaces because it uses Sun Cluster
software. Sun Cluster software requires at least two separate paths for the private
interface, and it automatically manages failover and load balancing across the different
paths. If more than two nodes are used, then two Ethernet switches are required to avoid a
single point of failure. The current setup uses GbE.
5.4 Redundancy for I/O Paths
The setup configuration presented here does not provide redundant paths to the storage.
To provide fully redundant paths, each node needs two HCA cards with each connected
to a different FC switch, and there needs to be connections from each storage array to
each FC switch through different RAID controllers. The procedures described here would
not change if any level of I/O redundancy is introduced in the setup because I/O
multipathing (MPxIO) is enabled through these procedures and the Solaris OS hides any
complexity introduced by redundancy in the I/O paths. Regardless of the level of
redundancy, the Solaris OS and the Sun Cluster software always present one device for
each shared device, and the failover mechanism is handled automatically by the Solaris
OS.
5.5 Volume Manager
In this setup, Solaris Volume Manager is configured so that raw devices in the shared
storage can be used to create metadevices, which can be used for data files. Even though
Solaris Volume Manager is part of the Solaris OS, it can be used for Oracle RAC only if
Sun Cluster software is managing the cluster.
5.6 Cluster File System (QFS)
These procedures also configure and provide a cluster file system that can be used to
store data files or any other files. QFS is a generic file system in which any kind of file
can be stored. QFS cannot be used for Oracle RAC unless Sun Cluster software is
installed and is managing the cluster.
5.7 Latest Software Versions
This setup delivers the latest software stack currently certified and publicly available: the
Solaris 10 10/08 (s10s_u6wos_07b SPARC) for SPARC platforms, Sun Cluster 3.2
software, Oracle 10g Release 2 Real Application Clusters, and QFS 4.6.
6 Pre-installation Requirements
Before starting the installation procedures, ensure you complete the steps outlined in this
section.
6.1 Firmware Update
Update the firmware version of all your hardware components for a Solaris 10 10/08
(s10s_u6wos_07b SPARC) OS installation, including storage arrays, FC switches, PCI
cards, and system controllers.
6.2 Setup Information
Create tables for your setup with similar information to the server information table and
storage information table presented in Appendix A and Appendix B. You will need all
this information during the installation and it is better to plan it all before starting your
installation.
6.3 Configuring the Storage
Map the logical unit numbers (LUNs) to the controllers according to the storage
information table you created. Make sure you can see all the LUNs in all the nodes. If
you see a LUN more than once in the same system, it is because you have redundant
paths and MPxIO is not yet enabled. This will be resolved later.
Try to leverage the RAID controllers by creating RAID0+1 or RAID5 LUNs. If you want
to eliminate single points of failure, map your LUNs to both controllers in the storage
array and connect each controller to a different FC switch. Also, connect to each FC
switch from a different host bus adapter (HBA) in each server, so that the HBA does not
become a single point of failure.
6.4 Documentation and Installation Software
This section lists the documentation that is referenced through this document. It also lists
all the software that you need to use during the installation process. Obtain all the
installation software listed here before starting the installation.
6.4.1 Documents
The following documents are referenced throughout the installation procedures:
• System Administration Guide: IP Services:
• Sun Cluster Software Installation Guide for Solaris OS:
• Sun JavaTM Enterprise System 5 Installation Guide for UNIX:
• Sun Cluster Data Service for Oracle RAC Guide for Solaris OS:
• Sun StorEdge QFS Installation and Upgrade Guide:
• Sun Cluster System Administration Guide for Solaris OS:
• http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/toc.htm
• Oracle® Database Oracle Clusterware and Oracle Real Application Clusters
Administration and Deployment Guide 10g Release 2 (10.2) Part Number
B14197-10
6.4.2 Installation Software
The following software is installed:
Solaris 10 10/08 (s10s_u6wos_07b SPARC)
Sun Java Availability suite for SPARC (Solaris Cluster)
(suncluster-3_2-ga-solaris-sparc.zip or suncluster-3_2-ga-solaris-x86.zip), available for
download at sun.com
Oracle patch 5389391 with the 3.3.4.9 UDLM for SPARC platforms
(p5389391_10202_SOLARIS64.zip), available for download at the Oracle Metalink web
site (https://metalink.oracle.com/) and provided in the installation kit
QFS packages for SPARC (StorEdge_QFS_4.6_sparc.iso), available for download at
sun.com
The following patches, available from the SunSolve web site, which are needed for
official support of QFS 4.6 on Sun Cluster 3.2 software:
122807-05, if you are installing on the Solaris OS for SPARC platforms
Oracle installation software, available at oracle.com:
(http://www.oracle.com/index.html)
7 Sun Software Installation
7.1 Installing the Solaris OS
Note: This section builds on procedures in the Sun Cluster Software Installation Guide
for Solaris OS.
1. Install Solaris 10 10/08 (s10s_u6wos_07b SPARC) (entire software group) on all the
servers. Make sure that you create slices in the boot disk, as described by the server
information table in Appendix A of this
document. These slices will be needed for Sun Cluster software and other
software components.
Do not install any Solaris patches at this point.
2. Enable MPxIO:
a) After the Solaris OS is installed on all nodes, execute as root the following
command on all nodes:
/usr/sbin/stmsboot -e
The nodes reboot.
b) On all nodes (one at a time), reboot using the following command:
boot -- -r
c) Verify that on each node you see one and only one path to each of the LUNs
presented by the storage arrays. If that is not the case, solve the problem before
going any further.
3. Configure /.rhosts so that all nodes and the administrative console (admin console) can
use rsh as root among themselves:
root@pracdb01 # hostname
pracdb01
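A minimal /.rhosts sketch, assuming the admin console host name is adminconsole (as in the examples later in this document). Create the same file as root on every node and on the admin console:
root@pracdb01 # cat /.rhosts
pracdb01 root
pracdb02 root
adminconsole root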
4. Configure shared memory.
On each node, add the following line to /etc/system and then reboot each node:
set shmsys:shminfo_shmmax=desired_SGA_size_in_bytes
For example:
root@pracdb01 # tail /etc/system
* http://www.sun.com/blueprints/0404/817-6925.pdf
* set ipge:ipge_taskq_disable=1
* set ce:ce_taskq_disable=1
* End of lines added by SUNWscr
* BEGIN SUNWsamfs file system addition
* DO NOT EDIT above line or the next 2 lines below...
forceload: fs/samfs
* END SUNWsamfs file system addition
set shmsys:shminfo_shmmax=10737418240
The shmmax setting provides a system-wide limit of 10GB shm segment size. The
Database Configuration Assistant (DBCA) reads this value to calculate the maximum
possible System Global Area (SGA) size.
5. Make the installation software available to all the nodes. Select one node on which to
place the software installation files in /oracleRac/software. Place all the installation
software in /oracleRac/software and share /oracleRac/software through NFS. Mount the
installation software in /oracleRac/software on all the other nodes.
6. Install the following patch, available from the SunSolve web site, for official support
of QFS 4.6 on Sun Cluster 3.2 software:
122807-05, if you are installing on the Solaris OS for SPARC platform
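A sketch of applying the patch, assuming it was downloaded as a zip file into the shared /oracleRac/software directory; run it on every node, since patchadd only patches the local system:
# cd /oracleRac/software
# unzip 122807-05.zip
# patchadd 122807-05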
7.2 Installing Cluster Control Panel in the Administrative Console
Note: This procedure follows information in the Sun Cluster Software Installation Guide
for Solaris OS.
This procedure describes how to install the Cluster Control Panel (CCP) software on an
admin console. The CCP provides a single interface from which to start the
cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each tool provides a multiple-window
connection to a set of nodes, as well as a common window. You can use the common
window to send input to all nodes at one time.
1. Become superuser on the admin console.
2. Unzip the Sun Java Availability Suite into /u01/sun3.2:
Unzip the file suncluster-3_2u1-ga-solaris-sparc.tar.bz2 into
/u01/sun3.2.
3. Change to the following directory:
/u01/sun3.2
4. Install the SUNWccon package:
adminconsole# pkgadd -d . SUNWccon
5. (Optional) Install the SUNWscman package:
adminconsole# pkgadd -d . SUNWscman
When you install the SUNWscman package on the admin console, you can view
Sun Cluster man pages from the admin console before you install Sun Cluster software
on the cluster nodes.
6. Create an /etc/clusters file on the admin console:
Add your cluster name and the physical node name of each cluster node to the
file:
adminconsole# vi /etc/clusters
clustername pracdb01 pracdb02
See the /opt/SUNWcluster/bin/clusters(4) man page for details.
7. Create an /etc/serialports file:
Add an entry for each node in the cluster to the file. Specify the physical node
name, the host name of the console-access device, and the port number. Examples of a
console-access device are a terminal concentrator (TC), a System Service Processor
(SSP), and a Sun Fire system controller. Make sure the consoles are configured for telnet
and not for ssh access.
adminconsole# vi /etc/serialports
pracdb01 ca-dev-hostname port
pracdb02 ca-dev-hostname port
pracdb01, pracdb02 (Physical names of the cluster nodes)
ca-dev-hostname (Host name of the console-access device)
port (Serial port number)
8. (Optional) For convenience, set the directory paths on the admin console:
a) Add the /opt/SUNWcluster/bin/ directory to the PATH.
b) Add the /opt/SUNWcluster/man/ directory to the MANPATH.
c) If you installed the SUNWscman package, also add the /usr/cluster/man/
directory to the MANPATH.
9. Start the CCP utility:
adminconsole# /opt/SUNWcluster/bin/ccp &
Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool.
Alternatively, you can start any of these tools directly. For example, to start ctelnet, type
the following command:
adminconsole# /opt/SUNWcluster/bin/ctelnet &
See the procedure "How to Log In to Sun Cluster Remotely" in the "Beginning to
Administer the Cluster" section of the Sun Cluster System Administration Guide for
Solaris OS for additional information about how to use the CCP utility. Also see the
ccp(1M) man page.
7.3 Installing Packages for Sun Cluster Framework and Data Service for Oracle
RAC
Repeat this procedure sequentially on each node. This procedure installs the packages for
Sun Cluster framework and the data service for Oracle RAC.
1. Ensure that the display environment of the cluster node is set to display the GUI on the
admin console:
a) On the admin console, execute:
# xhost +
b) On the cluster node, execute:
# setenv DISPLAY adminconsole:0.0
Note: If you do not make these settings, the installer program runs in text-based
mode.
2. Become superuser on the cluster node.
3. Start the installation wizard program:
# cd /u01/sun3.2/Solaris_sparc
# ./installer
4. Select only Sun Cluster 3.2 and the agent for Sun Cluster support for Oracle RAC.
5. Select all shared resources and accept all the defaults in the next screens of the
installer. The installer cannot configure Sun Cluster software, so it displays a message
about this. You can safely ignore this message.
6. Make sure the directory /usr/cluster exists after the installation is completed. If it does,
proceed with the next node. If it does not, the Sun Cluster software was not installed
correctly. In such a case, see the Sun Java Enterprise System 5 Installation Guide for
UNIX.
7.4 Installing QFS Packages
Note: This procedure follows information in the Sun StorEdge QFS Installation and
Upgrade Guide.
1. Place the QFS 4.6 installation software in /u01/app/StorageTek_QFS_4[1].6.iso
2. Become superuser on the first (or next) cluster node.
3. Mount the ISO image file as a CD-ROM device (see the sketch after this list) and change
to the directory that contains the QFS packages.
4. Use the pkgadd(1M) command to add the SUNWqfsr and SUNWqfsu packages.
# pkgadd -d . SUNWqfsr SUNWqfsu
5. Enter yes or y as the answer to each of the questions.
6. Repeat steps 2 through 5 for the next cluster node.
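A sketch of step 3, mounting the ISO through a loopback device (the /dev/lofi/1 device name and the /mnt mount point are illustrative; the quotes protect the brackets in the file name):
# lofiadm -a '/u01/app/StorageTek_QFS_4[1].6.iso'
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt
Then change to the directory on the mounted image that contains the SUNWqfsr and SUNWqfsu packages and run the pkgadd command from step 4.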
7.5 Creating a Cluster
Note: This procedure follows information in the Sun Cluster Software Installation Guide
for Solaris OS and the Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.
1. Unplumb all the communication devices (NICs) that the Sun Cluster software will use
for the private interface. For each device on each node, use the command ifconfig device-name
down unplumb (see the example after this list), and ensure the following:
Make sure that this is a dedicated subnet and that no network traffic is present.
Make sure the IP address and netmask 172.16.0.0/255.255.248.0 do not conflict
with the other networks in the lab. If they do, select another network that does not
conflict.
Make sure that no /etc/hostname.dev-name files exist for the communication
devices that will be used by the Sun Cluster software for the private interface. The
Sun Cluster software owns these devices and takes care of plumbing and
configuring them.
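For example, if nxge1 and nxge2 are the devices reserved for the private interconnect (as in this setup), run the following on each node:
# ifconfig nxge1 down unplumb
# ifconfig nxge2 down unplumb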
2. Run /usr/cluster/bin/scinstall from the node you want to be named last (pracdb02). By
running scinstall in pracdb02, you guarantee that pracdb02 will be recognized as node 2
in the cluster. This follows the same naming convention as the one used for the host
names assigned to the nodes.
3. Select the option to create a new cluster.
4. Select a custom installation.
5. When asked for the other nodes, provide them in the reverse order of that in which you
want them named. The last one you provide becomes node 1 in the cluster.
6. Provide the name of the communication devices to use in each node for private
interface, and provide the switch to which they are connected. (If there are only two
nodes, a point-to-point connection can be used instead of a switch.) This information
should be in the server information table you already filled out.
7. Allow for automatic quorum selection.
8. Provide the installer with the name of the raw device representing the slice in the boot
disk of each node where the global devices file system (/globaldevices) will be created.
This information should be in the server information table you already filled out.
After a successful installation, all nodes reboot and join the cluster after they boot up.
9. Determine which device was selected for quorum device using the following
command:
root@pracdb01 # /usr/cluster/bin/clquorum show
=== Cluster Nodes ===
Node Name: pracdb01
Node ID: 1
Quorum Vote Count: 1
Reservation Key: 0x4A6EC47800000001
Node Name: pracdb02
Node ID: 2
Quorum Vote Count: 1
Reservation Key: 0x4A6EC47800000002
=== Quorum Devices ===
Quorum Device Name: d5
Enabled: yes
Votes: 2
Global Name: /dev/did/rdsk/d5s2
Type: scsi
Access Mode: scsi3
Hosts (enabled): pracdb01, pracdb02
10. (Optional) Change the quorum device.
The device selected by the Sun Cluster software for quorum device can be used
for any other purpose, since Sun Cluster software does not use any of its
cylinders. If you want to move the quorum device to another shared device
(maybe for RAS or performance reasons), use the following commands:
# /usr/cluster/bin/clquorum add dx (where dx is the global device ID of the new quorum device)
# /usr/cluster/bin/clquorum remove dx (where dx is the global device currently reported as the
quorum device by the Sun Cluster software)
To find how shared storage maps to Sun Cluster global devices use the following
command:
root@pracdb01 # /usr/cluster/bin/scdidadm -l
1 pracdb01:/dev/rdsk/c5t60060E80042D0A0000002D0A00000084d0 /dev/did/rdsk/d1
2 pracdb01:/dev/rdsk/c0t3d0 /dev/did/rdsk/d2
3 pracdb01:/dev/rdsk/c5t5000C5000FCCB363d0 /dev/did/rdsk/d3
4 pracdb01:/dev/rdsk/c5t5000C5000FCE46FBd0 /dev/did/rdsk/d4
5 pracdb01:/dev/rdsk/c5t60060E80042D0A0000002D0A00000064d0 /dev/did/rdsk/d5
6 pracdb01:/dev/rdsk/c5t60060E80042D0A0000002D0A00000074d0 /dev/did/rdsk/d6
11. Verify the quorum list and status:
a) Run the following command, which should return a list with all the nodes and
the quorum device in it:
root@pracdb01 # /usr/cluster/bin/clquorum list
d5
pracdb01
pracdb02
b) Verify the status of the quorum device to make sure that the cluster has a
working quorum setup, where each node has a vote and the quorum has a vote
too.
# /usr/cluster/bin/clquorum status
=== Cluster Quorum ===
--- Quorum Votes Summary ---
Needed Present Possible
----------- ---------- -----------
2 3 3
--- Quorum Votes by Node ---
Node Name Present Possible Status
---------------- ------------ ----------- ---------
pracdb02 1 1 Online
pracdb01 1 1 Online
--- Quorum Votes by Device ---
Device Name Present Possible Status
------------------ ------------ ------------ ---------
d5 1 1 Online
12. To configure Network Time Protocol (NTP) to synchronize time among all cluster
nodes, do the following on all nodes and then reboot them:
# cp /etc/inet/ntp.conf /etc/inet/ntp.conf.orig
# cp /etc/inet/ntp.cluster /etc/inet/ntp.conf
13. Bypass Network Information Service (NIS) name service to allow proper operation of
the data service for Oracle RAC.
On each node, modify the following entries in the /etc/nsswitch.conf file:
passwd: files nis [TRYAGAIN=0]
group: files nis [TRYAGAIN=0]
publickey: files nis [TRYAGAIN=0]
project: files nis [TRYAGAIN=0]
7.6 Preparing for Oracle UDLM Package Installation
Now that the cluster has been created and the data service for Oracle RAC is in place, the
Oracle UDLM package needs to be installed. This package is the "connector" between
Oracle RAC and the Sun Cluster software. You will install only the UDLM package for
10.2.0.2. The UDLM package is usually included in the Oracle tarball for Oracle
Clusterware (formerly called CRS for "Cluster Ready Services"), for example under:
/oracleRac/software/cluster/racpatch
7.7 Installing the UDLM Package
The Oracle Unix Distributed Lock Manager (ORCLudlm also known as the Oracle Node
Monitor) must be installed. This may be referred to in the Oracle documentation as the
"Parallel Server Patch". To check version information on any previously installed dlm
package:
$ pkginfo -l ORCLudlm |grep PSTAMP
OR
$ pkginfo -l ORCLudlm |grep VERSION
Perform the following procedure on each node, one at a time.
Note: This procedure follows information in the UDLM readme file.
1. Unpack the file p7715304_10204_Solaris-64.zip into the
/oracleRac/software/patch directory.
2. Install the patch by adding the package as root:
# cd /oracleRac/software/patch
# pkgadd -d . ORCLudlm
7.8 Configuring RAC Framework and Solaris Volume Manager Resource
The RAC framework and Solaris Volume Manager can be configured with command line
interface (CLI) commands or with an interactive text menu using clsetup. This document
uses CLI commands, but if you want to use the menu driven process, you can follow the
directions in the Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.
Note: This procedure follows information in the Sun Cluster Data Service for Oracle
RAC Guide for Solaris OS.
1. Become superuser on any of the nodes.
2. Create a scalable resource group.
# clresourcegroup create -s prac-fmwk-rg
3. Register the SUNW.rac_framework resource type.
# clresourcetype register SUNW.rac_framework
4. Add an instance of the SUNW.rac_framework resource type to the resource group just
created.
# clresource create -g prac-fmwk-rg -t SUNW.rac_framework prac-fmwk-rs
5. Register the SUNW.rac_udlm resource type.
# clresourcetype register SUNW.rac_udlm
6. Add an instance of the SUNW.rac_udlm resource type to the resource group just
created.
# clresource create -g prac-fmwk-rg \
-t SUNW.rac_udlm \
-p resource_dependencies=prac-fmwk-rs prac-udlm-rs
7. Register and add an instance of the resource type that represents the Solaris Volume
Manager.
# clresourcetype register SUNW.rac_svm
8. Add an instance of the SUNW.rac_svm resource type to the resource group just
created.
# clresource create -g prac-fmwk-rg \
-t SUNW.rac_svm \
-p resource_dependencies=prac-fmwk-rs prac-svm-rs
9. Bring online and in a managed state the RAC framework resource group and its
resources.
# clresourcegroup online -emM prac-fmwk-rg
10. Verify that the resource group and the resources are online.
root@pracdb01 # scstat -g
-- Resource Groups and Resources --
Group Name Resources
---------- ---------
Resources: qfs-rg qfs-res
Resources: prac-fmwk-rg prac-fmwk-rs prac-udlm-rs
-- Resource Groups --
Group Name Node Name State Suspended
---------- --------- ----- ---------
Group: qfs-rg pracdb01 Online No
Group: qfs-rg pracdb02 Offline No
Group: prac-fmwk-rg pracdb01 Online No
Group: prac-fmwk-rg pracdb02 Online No
-- Resources --
Resource Name Node Name State Status Message
------------- --------- ----- --------------
Resource: qfs-res pracdb01 Online Online - Service is online.
Resource: qfs-res pracdb02 Offline Offline
Resource: prac-fmwk-rs pracdb01 Online Online
Resource: prac-fmwk-rs pracdb02 Online Online
Resource: prac-udlm-rs pracdb01 Online Online
Resource: prac-udlm-rs pracdb02 Online Online
7.9 Configuring a Solaris Volume Manager Metaset and Metadevices
Note: This procedure follows information in the Sun Cluster Data Service for Oracle
RAC Guide for Solaris OS.
1. Create a multi-owner disk set (metaset) from one node.
Solaris Volume Manager allows the creation of metadevices. To use these
metadevices for Oracle RAC, a metaset needs to be created first. You need to
include all the nodes in the cluster in this command and the name you want to
give to the metaset (prac in this example). For the pracdb setup, the command is:
# metaset -s prac -M -a -h pracdb01 pracdb02
2. Add raw devices to the metaset.
You can now add raw devices in the shared storage to this metaset, and then
create metadevices with them. Look at the server information table to identify the
devices you want to use with Solaris Volume Manager. Use the global device ID
(DID) instead of the raw device name since DIDs are invariant across the cluster.
For example, pracdb01 presents the following global devices:
# /usr/cluster/bin/scdidadm -l
3 pracdb01:/dev/rdsk/c1t50020F2300002B39d1 /dev/did/rdsk/d3
4 pracdb01:/dev/rdsk/c1t50020F2300002B39d0 /dev/did/rdsk/d4
5 pracdb01:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5
6 pracdb01:/dev/rdsk/c0t6d0 /dev/did/rdsk/d6
According to the storage information table in Appendix B, slice 4 of LUN2 can be
used for Solaris Volume Manager. To add it to the metaset, you would do the
following:
# metaset -s prac -a /dev/did/dsk/d1 /dev/did/dsk/d5 /dev/did/dsk/d6
You can add as many raw devices in shared storage as needed, and using the
metainit command, you can create metadevices with the raw devices in the
metaset.
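For example, a simple one-slice concatenation could be created from one of the DIDs added above (the metadevice name d100 and slice s0 are illustrative only):
# metainit -s prac d100 1 1 /dev/did/rdsk/d1s0
# metastat -s prac d100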
3. Verify the metaset.
To confirm that the metaset was correctly created and that it has all the raw
devices you gave it, you can issue the metaset command. Here is the output for
the pracdb setup:
root@pracdb01 # metaset -s prac
Set name = prac, Set number = 1
Host Owner
pracdb01
pracdb02
pracdb03
Driv Dbase
d1 Yes
d5 Yes
d6 Yes
The Solaris Volume Manager metaset is automatically registered with the Sun
Cluster software.
See Sun Cluster Data Service for Oracle RAC Guide for Solaris OS for details.
# scstat -D
-- Device Group Servers --
Device Group Primary Secondary
------------------ ---------- -------------
-- Device Group Status --
Device Group Status
------------------ --------
-- Multi-owner Device Groups --
Device Group Online Status
------------------- -------------
Multi-owner device group: prac pracdb01,pracdb02
7.10 Configuring QFS
QFS is a generic cluster file system that can hold any kind of files. You can install the
Oracle home in it if you like, use it only for database files, or store any other kind of
information in it. To configure QFS, follow these steps:
1. On all nodes, create a directory on which to mount the cluster file system.
# mkdir -p /s1/qfs/oracle
2. Ensure the directory is mounted after a reboot by adding these lines to /etc/vfstab on all
nodes.
# RAC on perf shared QFS
#
Data - /s1/qfs/oracle samfs - no shared,notrace
3. Create /etc/opt/SUNWsamfs/mcf on all nodes.
Here is a brief explanation of what the entries in this file mean:
• ma: Represents the cluster file system name. You need only one line with ma
and the name (Data in this example).
• mm: Represents the device or devices for storing QFS metadata. You need at
least one, but you should have more for redundancy. Use the DID provided by the
Sun Cluster software.
• mr: Represents the devices where you want QFS to store data. You should give QFS
all the devices you want it to use for data storage (look at the storage information
table to identify the devices for QFS). QFS puts all these devices in a pool and
stripes them to create the cluster file system on top. Here is an example of the mcf
file for the pracdb setup:
# hostname
pracdb01
# more /etc/opt/SUNWsamfs/mcf
Data 2 ma Data on shared
/dev/did/dsk/d6s0 20 mm Data on
/dev/did/dsk/d6s1 21 mr Data on
/dev/did/dsk/d1s0 22 mr Data on
# hostname
pracdb02
# more /etc/opt/SUNWsamfs/mcf
Data 2 ma Data on shared
/dev/did/dsk/d6s0 20 mm Data on
/dev/did/dsk/d6s1 21 mr Data on
/dev/did/dsk/d1s0 22 mr Data on
4. Create /etc/opt/SUNWsamfs/hosts.Data on all nodes.
In this file, you define which node is the QFS server and which nodes are backups
(all the rest). Usually Node1 is defined as the manager and the rest as backups.
# hostname
pracdb01
# more /etc/opt/SUNWsamfs/hosts.Data
#Host file for family set "Data"
pracdb01 clusternode1-priv 1 0 server
pracdb02 clusternode2-priv 2 0
# hostname
pracdb02
# more /etc/opt/SUNWsamfs/hosts.Data
#Host file for family set "Data"
pracdb01 clusternode1-priv 1 0 server
pracdb02 clusternode2-priv 2 0
5. Create /etc/opt/SUNWsamfs/samfs.cmd on all nodes.
Here is the file used in the pracdb setup. Copy this file onto all your nodes.
# hostname
pracdb02
# more /etc/opt/SUNWsamfs/samfs.cmd
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
nstreams=1024
notrace
rdlease=300
wrlease=300
aplease=300
6. Create the file system on the node you defined as server in Step 4 (pracdb01 in this
setup).
# /opt/SUNWsamfs/sbin/sammkfs -S -a 64 Data
# mount /s1/qfs/oracle
# chown oracle:dba /s1/qfs/oracle
7. Mount the file system on all other nodes:
# mount /s1/qfs/oracle
8. On all nodes, check the file system:
# hostname
pracdb02
root@pracdb02 # df -k Data
Filesystem kbytes used avail capacity Mounted on
Data 69081600 36145088 32936512 53% /s1/qfs/oracle
9. Create the QFS metadata server (MDS) resource group for high availability. From
pracdb01 do:
# clresourcetype register SUNW.qfs
# clresourcegroup create -p nodelist=pracdb01,pracdb02 qfs-rg
# clresource create -t SUNW.qfs -g qfs-rg -p QFSFileSystem=/s1/qfs/oracle qfs-res
# clresourcegroup online -emM qfs-rg
At this point the cluster resource group configuration looks like this:
# /usr/cluster/bin/scstat -g
-- Resource Groups and Resources --
Group Name Resources
---------- ---------
Resources: qfs-rg qfs-res
Resources: prac-fmwk-rg prac-fmwk-rs prac-udlm-rs
-- Resource Groups --
Group Name Node Name State Suspended
---------- --------- ----- ---------
Group: qfs-rg pracdb01 Online No
Group: qfs-rg pracdb02 Offline No
Group: prac-fmwk-rg pracdb01 Online No
Group: prac-fmwk-rg pracdb02 Online No
-- Resources --
Resource Name Node Name State Status Message
------------- --------- ----- --------------
Resource: qfs-res pracdb01 Online Online - Service is online.
Resource: qfs-res pracdb02 Offline Offline
Resource: prac-fmwk-rs pracdb01 Online Online
Resource: prac-fmwk-rs pracdb02 Online Online
Resource: prac-udlm-rs pracdb01 Online Online
Resource: prac-udlm-rs pracdb02 Online Online
7.11 UNIX Pre-installation Steps
After configuring the raw volumes, perform the following steps prior to installation as
root user:
The machines were Solaris 10 SPARC 64-bit (M5000 domain).
The shared storage was SAN 9990.
The group oinstall and user oracle were created on both nodes.
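A sketch of creating the group and user with the IDs shown later in this section (the home directory matches the Oracle user path used in the SSH setup; adjust as needed). Run the same commands on both nodes so that the UID and GIDs match everywhere:
# groupadd -g 102 oinstall
# groupadd -g 101 dba
# groupadd -g 103 oper
# useradd -u 100 -g oinstall -G dba,oper -m -d /oracleRac/oracle oracle
# passwd oracle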
A few parameters were tuned in /etc/rc2.d/S99nettune:
bash-3.00# more /etc/rc2.d/S99nettune
#!/bin/sh
ndd -set /dev/ip ip_forward_src_routed 0
ndd -set /dev/ip ip_forwarding 0
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_xmit_hiwat 400000
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_cwnd_max 2097152
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_maxpsz_multiplier 10
#Oracle Required
ndd -set /dev/udp udp_recv_hiwat 65535
ndd -set /dev/udp udp_xmit_hiwat 65535
Check that /etc/system is readable by the oracle user (otherwise the RDBMS installation will fail):
$ ls -tlr /etc/system
-rw-r--r-- 1 root root 3110 Sep 10 12:09 /etc/system
Check the system configuration on both nodes.
For memory
$ /usr/sbin/prtconf |grep "Memory size"
Memory size: 32768 Megabytes
For swap
$ /usr/sbin/swap -s
total: 5067632k bytes allocated + 483104k reserved = 5550736k used, 37220560k
available
For /tmp
df -h /tmp
Filesystem size used avail capacity Mounted on
swap 35G 128K 35G 1% /tmp
For OS
/bin/isainfo -kv
64-bit sparcv9 kernel modules
For user
id -a   # both the UID and GID of the oracle user should be the same on both nodes
uid=100(oracle) gid=102(oinstall) groups=102(oinstall),101(dba),103(oper)
User nobody should exist
id -a nobody
uid=60001(nobody) gid=60001(nobody) groups=60001(nobody)
Update /etc/hosts entries on both nodes
$ cat /etc/hosts
# Internet host table
::1 localhost
127.0.0.1 localhost
# Public IPs
10.1.18.97 pracdb01 tatasky.com loghost
10.1.18.98 pracdb02
10.1.18.218 pracdb03
10.1.18.108 rac-lh
#IPMP Test IPS
10.1.18.101 pracdb01-bge0-test
10.1.18.102 pracdb01-bge1-test
10.1.18.106 pracdb02-bge0-test
10.1.18.107 pracdb02-bge1-test
10.1.18.219 pracdb03-ce4-test
10.1.18.220 pracdb03-ce5-test
#Virtual IPs
10.1.18.99 pracdb01-vip
10.1.18.100 pracdb02-vip
Check for SSH and SCP in /usr/local/bin/
The cluster verification utility (runcluvfy.sh) checks for scp and ssh in /usr/local/bin/.
Create soft links for ssh and scp in /usr/local/bin/ if they are not there.
cd /usr/local/bin/
ls -l
lrwxrwxrwx 1 root root 12 Jul 15 22:16 /usr/local/bin/scp -> /usr/bin/scp
lrwxrwxrwx 1 root root 12 Jul 14 18:02 /usr/local/bin/ssh -> /usr/bin/ssh
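If the links are missing, they can be created as root like this:
# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp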
Create RSA and DSA keys on each node:
Complete the following steps on each node:
1. Log in as the oracle user.
2. If necessary, create the .ssh directory in the oracle user’s home directory and
set the correct permissions on it:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following commands to generate an RSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t rsa
At the prompts:
Accept the default location for the key file.
Enter and confirm a pass phrase that is different from the oracle user’s
password.
This command writes the public key to the ~/.ssh/id_rsa.pub file and the
private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.
4. Enter the following commands to generate a DSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t dsa
At the prompts:
Accept the default location for the key file
Enter and confirm a pass phrase that is different from the oracle user’s
password
This command writes the public key to the ~/.ssh/id_dsa.pub file and the
private key to the ~/.ssh/id_dsa file. Never distribute the private key to
anyone.
Add keys to an authorized key file: Complete the following steps:
1. On the local node, determine if you have an authorized key file
(~/.ssh/authorized_keys). If the authorized key file already exists, then
proceed to step 2. Otherwise, enter the following commands:
$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
$ ls
You should see the id_dsa.pub and id_rsa.pub keys that you have created.
2. Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and
~/.ssh/id_dsa.pub files to the file ~/.ssh/authorized_keys, and provide
the Oracle user password as prompted. This process is illustrated in the following
syntax example with a two-node cluster, with nodes pracdb01 and pracdb02, where the
Oracle user path is /oracleRac/oracle:
[oracle@pracdb01 .ssh]$ ssh pracdb01 cat /oracleRac/oracle/.ssh/id_rsa.pub >>
authorized_keys
oracle@pracdb01’s password:
[oracle@pracdb01 .ssh]$ ssh pracdb01 cat /oracleRac/oracle/.ssh/id_dsa.pub >>
authorized_keys
[oracle@pracdb01 .ssh$ ssh pracdb02 cat /oracleRac/oracle/.ssh/id_rsa.pub >>
authorized_keys
oracle@pracdb02’s password:
[oracle@pracdb01 .ssh$ ssh pracdb02 cat /oracleRac/oracle/.ssh/id_dsa.pub
>>authorized_keys
oracle@pracdb02’s password:
3. Use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file
to the Oracle user .ssh directory on a remote node. The following example is with
SCP, on a node called pracdb02, where the Oracle user path is /oracleRac/oracle:
[oracle@pracdb01 .ssh]scp authorized_keys pracdb02:/oracleRac/oracle/.ssh/
4. Repeat steps 2 and 3 for each cluster node member. When you have added keys
from each cluster node member to the authorized_keys file on the last node you
want to have as a cluster node member, use SCP to copy the complete
authorized_keys file back to each cluster node member.
5. Change the permissions on the Oracle user’s /.ssh/authorized_keys file on
all cluster nodes:
$ chmod 600 ~/.ssh/authorized_keys
At this point, if you use ssh to log in to or run a command on another node, you
are prompted for the pass phrase that you specified when you created the DSA
key.
Establish system environment variables
• As the oracle user, if you are prompted for a password when connecting to another
node, you have not given the oracle account the same attributes on all nodes. You
must correct this, because the Oracle Universal Installer cannot use the scp command
to copy Oracle products to the remote nodes' directories without user
equivalence.
• Set a local bin directory in the user's PATH, such as /usr/local/bin, or /opt/bin.
It is necessary to have execute permissions on this directory.
• Set the DISPLAY variable to point to the IP address or host name, X server, and
screen of the system from which you will run the OUI.
• Set a temporary directory path for TMPDIR with at least 20 MB of free space
to which the OUI has write permission.
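For example, in Bourne, Korn, or Bash shell syntax (the adminconsole display host and the /var/tmp/oracle temporary directory are illustrative):
$ export PATH=/usr/local/bin:$PATH
$ export DISPLAY=adminconsole:0.0
$ export TMPDIR=/var/tmp/oracle
$ mkdir -p $TMPDIR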
Establish Oracle environment variables: Set the following Oracle environment
variables:
Environment Variable    Suggested Value
ORACLE_BASE             /u01/app
ORACLE_HOME             /u01/app/oracle/oracle10
ORACLE_TERM             Xterm
PATH                    Should contain $ORACLE_HOME/bin
CLASSPATH               $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
• Create the directory /var/opt/oracle and set ownership to the oracle user.
• Verify the existence of the file /opt/SUNWcluster/bin/lkmgr. This is used by the
OUI to indicate that the installation is being performed on a cluster.
Finally, as the oracle user, run the Cluster Verification Utility from the Clusterware staging area to validate the setup before installing Clusterware:
$ ./runcluvfy.sh stage -pre crsinst -n pracdb01,pracdb02 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "pracdb01"
Destination Node Reachable?
------------------------------------ ------------------------
pracdb01 yes
pracdb02 yes
Result: Node reachability check passed from node "pracdb01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
pracdb02 passed
pracdb01 passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
Node Name User Exists Comment
------------ ------------------------ ------------------------
pracdb02 yes passed
pracdb01 yes passed
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
Node Name Status Group ID
------------ ------------------------ ------------------------
pracdb02 exists 102
pracdb01 exists 102
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
pracdb02 yes yes yes yes passed
pracdb01 yes yes yes yes passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "pracdb02"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
bge0 10.1.18.98 10.1.18.0
bge0 10.1.18.106 10.1.18.0
bge1 10.1.18.107 10.1.18.0
nxge0 192.168.1.7 192.168.1.0
nxge1 172.16.0.130 172.16.0.128
nxge2 172.16.1.2 172.16.1.0
clprivnet0 172.16.4.2 172.16.4.0
sppp0 192.168.1.4 192.168.1.0
Interface information for node "pracdb01"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
bge0 10.1.18.97 10.1.18.0
bge0 10.1.18.101 10.1.18.0
bge1 10.1.18.102 10.1.18.0
nxge0 192.168.1.8 192.168.1.0
nxge1 172.16.0.129 172.16.0.128
nxge2 172.16.1.1 172.16.1.0
clprivnet0 172.16.4.1 172.16.4.0
sppp0 192.168.1.3 192.168.1.0
Check: Node connectivity of subnet "10.1.18.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
pracdb02:bge0 pracdb02:bge0 yes
pracdb02:bge0 pracdb02:bge1 yes
pracdb02:bge0 pracdb01:bge0 yes
pracdb02:bge0 pracdb01:bge0 yes
pracdb02:bge0 pracdb01:bge1 yes
pracdb02:bge0 pracdb02:bge1 yes
pracdb02:bge0 pracdb01:bge0 yes
pracdb02:bge0 pracdb01:bge0 yes
pracdb02:bge0 pracdb01:bge1 yes
pracdb02:bge1 pracdb01:bge0 yes
pracdb02:bge1 pracdb01:bge0 yes
pracdb02:bge1 pracdb01:bge1 yes
pracdb01:bge0 pracdb01:bge0 yes
pracdb01:bge0 pracdb01:bge1 yes
pracdb01:bge0 pracdb01:bge1 yes
Result: Node connectivity check passed for subnet "10.1.18.0" with node(s) pracdb02,pracdb01.
Check: Node connectivity of subnet "192.168.1.0"
WARNING:
Make sure IP address "192.168.1.3" is up and is a valid IP address on node "pracdb01".
Source Destination Connected?
------------------------------ ------------------------------ ----------------
pracdb02:nxge0 pracdb02:sppp0 yes
pracdb02:nxge0 pracdb01:nxge0 yes
pracdb02:nxge0 pracdb01:sppp0 no
pracdb02:sppp0 pracdb01:nxge0 yes
pracdb02:sppp0 pracdb01:sppp0 no
pracdb01:nxge0 pracdb01:sppp0 no
Result: Node connectivity check failed for subnet "192.168.1.0".
Check: Node connectivity of subnet "172.16.0.128"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
pracdb02:nxge1 pracdb01:nxge1 yes
Result: Node connectivity check passed for subnet "172.16.0.128" with node(s) pracdb02,pracdb01.
Check: Node connectivity of subnet "172.16.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
pracdb02:nxge2 pracdb01:nxge2 yes
Result: Node connectivity check passed for subnet "172.16.1.0" with node(s) pracdb02,pracdb01.
Check: Node connectivity of subnet "172.16.4.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
pracdb02:clprivnet0 pracdb01:clprivnet0 yes
Result: Node connectivity check passed for subnet "172.16.4.0" with node(s) pracdb02,pracdb01.
Suitable interfaces for the private interconnect on subnet "10.1.18.0":
pracdb02 bge0:10.1.18.98 bge0:10.1.18.106
pracdb01 bge0:10.1.18.97 bge0:10.1.18.101
Suitable interfaces for the private interconnect on subnet "10.1.18.0":
pracdb02 bge1:10.1.18.107
pracdb01 bge1:10.1.18.102
Suitable interfaces for the private interconnect on subnet "172.16.0.128":
pracdb02 nxge1:172.16.0.130
pracdb01 nxge1:172.16.0.129
Suitable interfaces for the private interconnect on subnet "172.16.1.0":
pracdb02 nxge2:172.16.1.2
pracdb01 nxge2:172.16.1.1
Suitable interfaces for the private interconnect on subnet "172.16.4.0":
pracdb02 clprivnet0:172.16.4.2
pracdb01 clprivnet0:172.16.4.1
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
pracdb02 32GB (33554432KB) 512MB (524288KB) passed
pracdb01 32GB (33554432KB) 512MB (524288KB) passed
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
pracdb02 40.06GB (42007624KB) 400MB (409600KB) passed
pracdb01 40.41GB (42370952KB) 400MB (409600KB) passed
Result: Free disk space check passed.
Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
pracdb02 2.65GB (2782840KB) 512MB (524288KB) passed
pracdb01 2.65GB (2782840KB) 512MB (524288KB) passed
Result: Swap space check passed.
Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
pracdb02 64-bit 64-bit passed
pracdb01 64-bit 64-bit passed
Result: System architecture check passed.
Check: Operating system version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
pracdb02 SunOS 5.10 SunOS 5.10 passed
pracdb01 SunOS 5.10 SunOS 5.10 passed
Result: Operating system version check passed.
Check: Package existence for "SUNWarc"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWarc:11.10.0 passed
pracdb01 SUNWarc:11.10.0 passed
Result: Package existence check passed for "SUNWarc".
Check: Package existence for "SUNWbtool"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWbtool:11.10.0 passed
pracdb01 SUNWbtool:11.10.0 passed
Result: Package existence check passed for "SUNWbtool".
Check: Package existence for "SUNWhea"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWhea:11.10.0 passed
pracdb01 SUNWhea:11.10.0 passed
Result: Package existence check passed for "SUNWhea".
Check: Package existence for "SUNWlibm"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWlibm:5.10 passed
pracdb01 SUNWlibm:5.10 passed
Result: Package existence check passed for "SUNWlibm".
Check: Package existence for "SUNWlibms"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWlibms:5.10 passed
pracdb01 SUNWlibms:5.10 passed
Result: Package existence check passed for "SUNWlibms".
Check: Package existence for "SUNWsprot"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWsprot:5.10 passed
pracdb01 SUNWsprot:5.10 passed
Result: Package existence check passed for "SUNWsprot".
Check: Package existence for "SUNWsprox"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 ERROR: information for "SUNWsprox" was not found passed
pracdb01 ERROR: information for "SUNWsprox" was not found passed
Result: Package existence check passed for "SUNWsprox".
Check: Package existence for "SUNWtoo"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWtoo:11.10.0 passed
pracdb01 SUNWtoo:11.10.0 passed
Result: Package existence check passed for "SUNWtoo".
Check: Package existence for "SUNWi1of"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWi1of:6.6.2.7400 passed
pracdb01 SUNWi1of:6.6.2.7400 passed
Result: Package existence check passed for "SUNWi1of".
Check: Package existence for "SUNWi1cs"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWi1cs:2.0 passed
pracdb01 SUNWi1cs:2.0 passed
Result: Package existence check passed for "SUNWi1cs".
Check: Package existence for "SUNWi15cs"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWi15cs:2.0 passed
pracdb01 SUNWi15cs:2.0 passed
Result: Package existence check passed for "SUNWi15cs".
Check: Package existence for "SUNWxwfnt"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWxwfnt:6.6.2.7400 passed
pracdb01 SUNWxwfnt:6.6.2.7400 passed
Result: Package existence check passed for "SUNWxwfnt".
Check: Package existence for "SUNWlibC"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWlibC:5.10 passed
pracdb01 SUNWlibC:5.10 passed
Result: Package existence check passed for "SUNWlibC".
Check: Package existence for "SUNWscucm:3.1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWscucm:3.2.0 passed
pracdb01 SUNWscucm:3.2.0 passed
Result: Package existence check passed for "SUNWscucm:3.1".
Check: Package existence for "SUNWudlmr:3.1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWudlmr:3.2.0 passed
pracdb01 SUNWudlmr:3.2.0 passed
Result: Package existence check passed for "SUNWudlmr:3.1".
Check: Package existence for "SUNWudlm:3.1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWudlm:3.2.0 passed
pracdb01 SUNWudlm:3.2.0 passed
Result: Package existence check passed for "SUNWudlm:3.1".
Check: Package existence for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant,_async_libskgxn2.so
passed
pracdb01 ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant,_async_libskgxn2.so
passed
Result: Package existence check passed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".
Check: Package existence for "SUNWscr:3.1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWscr:3.2.0 passed
pracdb01 SUNWscr:3.2.0 passed
Result: Package existence check passed for "SUNWscr:3.1".
Check: Package existence for "SUNWscu:3.1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
pracdb02 SUNWscu:3.2.0 passed
pracdb01 SUNWscu:3.2.0 passed
Result: Package existence check passed for "SUNWscu:3.1".
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
pracdb02 exists passed
pracdb01 exists passed
Result: Group existence check passed for "dba".
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
pracdb02 exists passed
pracdb01 exists passed
Result: Group existence check passed for "oinstall".
Check: User existence for "oracle"
Node Name Status Comment
------------ ------------------------ ------------------------
pracdb02 exists passed
pracdb01 exists passed
Result: User existence check passed for "oracle".
Check: User existence for "nobody"
Node Name Status Comment
------------ ------------------------ ------------------------
pracdb02 exists passed
pracdb01 exists passed
Result: User existence check passed for "nobody".
System requirement passed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
7.12 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer to install the Oracle
Enterprise Edition and the Real Application Clusters software.
To install the Oracle software, perform the following:
10. Configure OCR and voting in QFS.
If you plan to place the Oracle OCR and voting devices inside the QFS file
system, follow this example on one of the nodes:
# mkdir /s1/qfs/oracle/crs
root@pracdb01 # cd /s1/qfs/oracle/crs
root@pracdb01 # touch ocr_disk1 ocr_disk2 vote_disk1 vote_disk2 vote_disk3
root@pracdb01 # ls -tlr
total 0
-rw-r--r-- 1 root root 0 Aug 11 15:49 ocr_disk1
-rw-r--r-- 1 root root 0 Aug 11 15:49 ocr_disk2
-rw-r--r-- 1 root root 0 Aug 11 15:49 vote_disk1
-rw-r--r-- 1 root root 0 Aug 11 15:49 vote_disk2
-rw-r--r-- 1 root root 0 Aug 11 15:49 vote_disk3
root@pracdb01 # chown root:oinstall ocr_disk1 ocr_disk2
root@pracdb01 # chown oracle:oinstall vote_disk1 vote_disk2 vote_disk3
root@pracdb01 # chmod 660 *
root@pracdb01 # ls -tlr
total 0
-rw-rw---- 1 root oinstall 0 Aug 11 15:49 ocr_disk1
-rw-rw---- 1 root oinstall 0 Aug 11 15:49 ocr_disk2
-rw-rw---- 1 oracle oinstall 0 Aug 11 15:49 vote_disk1
-rw-rw---- 1 oracle oinstall 0 Aug 11 15:49 vote_disk2
-rw-rw---- 1 oracle oinstall 0 Aug 11 15:49 vote_disk3
root@pracdb01 # dd if=/dev/zero of=/s1/qfs/oracle/crs/ocr_disk1 bs=268435456 count=1
1+0 records in
1+0 records out
root@pracdb01 # dd if=/dev/zero of=/s1/qfs/oracle/crs/ocr_disk2 bs=268435456 count=1
1+0 records in
1+0 records out
root@pracdb01 # dd if=/dev/zero of=/s1/qfs/oracle/crs/vote_disk1 bs=268435456 count=1
1+0 records in
1+0 records out
root@pracdb01 # dd if=/dev/zero of=/s1/qfs/oracle/crs/vote_disk2 bs=268435456 count=1
1+0 records in
1+0 records out
root@pracdb01 # dd if=/dev/zero of=/s1/qfs/oracle/crs/vote_disk3 bs=268435456 count=1
1+0 records in
1+0 records out
8 Preparing for Oracle RAC Installation
This section describes how to prepare a system for Oracle RAC installation with Oracle
Clusterware (formerly called CRS for "Cluster Ready Services"). Before proceeding with
the installation steps, make sure that the following hardware requirements are satisfied.
Unless otherwise stated, all the commands described here must be executed as root.
1. Clean up OCR and voting devices.
If you are not placing the OCR and voting devices in the QFS file system, do the
following in the raw devices that will be used for this purpose:
# dd if=/dev/zero of=ocr-dev bs=1024k count=120
# dd if=/dev/zero of=voting-dev bs=1024k count=120
# chown -R oracle:dba voting-dev
# chown -R root:dba ocr-dev
# chmod -R 660 voting-dev
# chmod -R 640 ocr-dev
2. Allow the oracle user to use ssh on all systems.
3. Confirm that you can use ssh as the oracle user.
For all nodes, execute the following command as the oracle user against all other
nodes and confirm that there are no problems reported:
$ ssh pracdb02
9 Installing Oracle Clusterware (CRS)
1. Obtain the Oracle Clusterware tarball (for SPARC systems) and place it in
/oracleRac/software/cluster.
2. Run the following command.
# su - oracle
3. Change to the /oracleRac/software/cluster directory.
4. Set the display to the system where you want to display the installer GUI.
5. Run the installer.
# ./runInstaller
6. Accept the path for the inventory directory and the group (oinstall), and click Next.
7. Set the name to crs and the path to /u01/app/oracle/product/crs, and click Next.
8. See if there are any failures, and then click Next.
9. Verify that public, virtual, and private names resolve in /etc/hosts, NIS, or Sun Cluster
database, and then click Next.
10. Set the public IP/device to public. Set the private IP/device to private. Select
clprivnet0 device for private and leave the ones used by the Sun Cluster software as "do
not use."
11. Set OCR to the device selected in the storage table using the DID identifier (or the
QFS file). Use external redundancy if possible.
12. Set voting to the device selected in the storage table using the DID identifier (or the
QFS file). Use external redundancy if possible.
13. Click Install.
14. Run scripts as root in the order and on the nodes indicated.
15. If VIP fails while running scripts on the last node (known bug), do the following:
a) Set the DISPLAY variable appropriately.
b) Execute # /u01/app/oracle/product/crs/bin/vipca.
c) Fill in the VIP information again, and vipca creates and starts the VIP, GSD
(Global Services Daemon), and ONS (Oracle Notification Service) resources under Oracle
Clusterware.
10 Installing the Oracle RDBMS Server
1. Obtain the Oracle database 10.2.0.1.
2. Create the directory /oracleRac/software on the database server and place the database
installation software there.
3. Change to the /oracleRac/software directory and unpackage the tarball.
4. Run the following command:
# su - oracle
5. Change to the /oracleRac/software/db directory.
6. Set the display to the system where you want to display the installer GUI.
7. Run the installer with the following command:
# ./runInstaller
8. Select Enterprise Edition, and click Next.
9. Define Oracle home as /u01/app/oracle/oracle10. Clear the check box for "create
starter database," and click Next.
10. Select all nodes, and click Next.
11. Leave the inventory in /var/opt/oracle/oraInventory and leave the operating system
group name as dba. Click Next.
12. Disregard the failure for nonexec_user_stack=1. If there are no other warnings, click
Next and click Yes in the warning popup dialog.
13. Select the option "install database software only" and click Next.
14. Click Install and wait until the installation finishes.
15. As root, execute the two commands presented by the installer, and click OK.
16. Exit the installer.
17. Now it is necessary to install the patch set. Obtain the tarball for this patch set. For
SPARC systems, get the Oracle 10.2.0.4 patch set.
18. Create the directory /oracleRac/oracle/patch and place the tarball in it.
19. Unpackage the tarball.
20. Run the following command:
# su - oracle
21. Set the display to the system where you want to display the installer GUI.
22. Change to the /oracleRac/software/patch_10.2.0.4.sunsolaris_sparc_64bit/Disk1
directory.
23. Install the patch on CRS home first, and then install it again on database home.
24. Execute the following command.
# ./runInstaller
25. Click Next.
26. Select CRS or db for the database (first install on Oracle Clusterware and then on
database).
27. Click Install.
28. Repeat installation of patch 10.2.0.4 on database home.
29. As Oracle user, add the following entries to the file /oracleRac/oracle/.profile.
export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/oracle/oracle10
export OH=$ORACLE_HOME
export ORA_CRS_HOME=$ORACLE_BASE/oracle/product/crs
export CH=$ORA_CRS_HOME
export ORACLE_SID=prac1
#export ORACLE_SID=ORCL
export NLS_LANG=AMERICAN_AMERICA.UTF8
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/sbin:/usr/bin:/usr/ccs/bin:/usr/ucb:/etc:/usr/X/bin:/usr/openwin/bin:/usr/local/bin:/usr/sbin
CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
EDITOR=/usr/bin/vi
set -o vi
#stty istrip
#stty erase ^H
umask 022
export umask
ulimit -n 8192 >/dev/null 2>&1
ulimit -s 32768 >/dev/null 2>&1
#PS1=`hostname`' PWD[$ORACLE_SID]$ '
ULIMIT=999999999999999
export ULIMIT
HISTSIZE=1000
export HISTSIZE
HISTFILE=/var/userlog/`who am i|cut -f1 -d' '`.log
export HISTFILE
alias h=history
setenv ORACLE_BASE $HOME
setenv ORACLE_HOME $ORACLE_BASE/db
setenv CRS_HOME $ORACLE_BASE/crs (if RAC present)
setenv PATH $ORACLE_HOME/bin:$PATH
setenv LD_LIBRARY_PATH $ORACLE_HOME/lib
setenv ORACLE_SID SID_for_your_database
30. (SPARC only) Install patch 5117016 after patch 10.2.0.2 on Oracle home only
(mandatory).
$ ls -tlr
total 53044
drwxr-xr-x 4 oracle oinstall 512 Feb 14 2009 5259835
drwxrwxr-x 5 oracle oinstall 512 Jun 17 12:41 8576156
-rw-r--r-- 1 oracle oinstall 136 Jul 14 15:08 local.cshrc
-rw-r--r-- 1 oracle oinstall 157 Jul 14 15:08 local.login
-rw-r--r-- 1 oracle oinstall 174 Jul 14 15:08 local.profile
drwxr-xr-x 2 oracle oinstall 512 Jul 16 10:49 null
-rw-r--r-- 1 oracle oinstall 235632 Aug 10 15:07 p5259835_10204_Solaris-64.zip
-rw-r--r-- 1 oracle oinstall 22344601 Aug 10 16:16 p8576156_10204_Solaris-64.zip
11 Creating a Database
Using DBCA, create a database. You can use raw devices, Solaris Volume Manager
metadevices, ASM, or the QFS cluster file system to store the database files. If you
decide to use ASM, configure it by providing the Sun Cluster DID instead of the
/dev/rdsk/* path, since that path is not always constant across nodes. The Sun Cluster
DID path is the same on all nodes.
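For example, the LUN that appears on pracdb01 as /dev/rdsk/c5t60060E80042D0A0000002D0A00000084d0 in the scdidadm -l output shown earlier would be handed to ASM or DBCA as a DID path such as /dev/did/rdsk/d1s0 (the slice number is illustrative), and that same path is valid on every node.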
12 Adding a Third Node to RAC
1. Install the OS and the recommended patches.
2. Configure IPMP and connect the private interconnect cables.
3. Update the /etc/hosts files on all the nodes.
4. Execute the EIS profile from the Sun EIS DVD.
5. Update the files on the new node by referring to the Sun Cluster 3.2 EIS checklist.
6. Install the Sun Cluster 3.2 software.
7. Enable node addition from node one.
8. From the new node, run scinstall and add the node to the existing cluster.
9. Install the SUNWi1cs and SUNWi15cs packages.
10. Install the QFS packages by referring to the QFS EIS checklist.
11. Install the UDLM package.
12. Copy the configuration files under /etc/opt/SUNWsamfs from pracdb01 to the new
server, and verify that the file sizes and permissions are the same on all servers.
13. Configure SSH on the cluster member nodes.
Complete the following steps:
Create RSA and DSA keys on each node: Complete the following steps on each
node:
1. Log in as the oracle user.
2. If necessary, create the .ssh directory in the oracle user’s home directory and
set the correct permissions on it:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following commands to generate an RSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t rsa
At the prompts:
Accept the default location for the key file.
Enter and confirm a pass phrase that is different from the oracle user’s
password.
This command writes the public key to the ~/.ssh/id_rsa.pub file and the
private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.
4. Enter the following commands to generate a DSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t dsa
At the prompts:
Accept the default location for the key file
Enter and confirm a pass phrase that is different from the oracle user’s
password
This command writes the public key to the ~/.ssh/id_dsa.pub file and the
private key to the ~/.ssh/id_dsa file. Never distribute the private key to
anyone.
Add keys to an authorized key file: Complete the following steps:
1. On the local node, determine if you have an authorized key file
(~/.ssh/authorized_keys). If the authorized key file already exists, then
proceed to step 2. Otherwise, enter the following commands:
$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
$ ls
You should see the id_dsa.pub and id_rsa.pub keys that you have created.
2. Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and
~/.ssh/id_dsa.pub files to the file ~/.ssh/authorized_keys, and provide
the Oracle user password as prompted. This process is illustrated in the following
syntax example with a two-node cluster, with nodes pracdb01 and pracdb02, where the
Oracle user path is /oracleRac/oracle:
[oracle@pracdb01 .ssh]$ ssh pracdb01 cat /oracleRac/oracle/.ssh/id_rsa.pub >>
authorized_keys
oracle@pracdb01’s password:
[oracle@pracdb01 .ssh]$ ssh pracdb01 cat /oracleRac/oracle/.ssh/id_dsa.pub >>
authorized_keys
[oracle@pracdb01 .ssh$ ssh pracdb02 cat /oracleRac/oracle/.ssh/id_rsa.pub >>
authorized_keys
oracle@pracdb02’s password:
[oracle@pracdb01 .ssh$ ssh pracdb02 cat /oracleRac/oracle/.ssh/id_dsa.pub
>>authorized_keys
oracle@pracdb02’s password:
3. Use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file
to the Oracle user's .ssh directory on a remote node. The following example uses
SCP, on a node called pracdb02, where the Oracle user path is /oracleRac/oracle:
[oracle@pracdb01 .ssh]$ scp authorized_keys pracdb02:/oracleRac/oracle/.ssh/
4. Repeat steps 2 and 3 for each cluster node member. When you have added keys
from each cluster node member to the authorized_keys file on the last node you
want to have as a cluster node member, use SCP to copy the complete
authorized_keys file back to each cluster node member.
5. Change the permissions on the Oracle user's ~/.ssh/authorized_keys file on
all cluster nodes:
$ chmod 600 ~/.ssh/authorized_keys
At this point, if you use ssh to log in to or run a command on another node, you
are prompted for the pass phrase that you specified when you created the DSA
key.
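As a shortcut, the same key collection and distribution can be done in one pass from pracdb01 (a sketch only, assuming the three-node layout and the /oracleRac/oracle home directory used in this procedure, and that password authentication is still enabled; you will be prompted for the oracle password of each node):

$ cd ~/.ssh
$ for node in pracdb01 pracdb02 pracdb03; do ssh $node cat /oracleRac/oracle/.ssh/id_rsa.pub /oracleRac/oracle/.ssh/id_dsa.pub >> authorized_keys; done
$ for node in pracdb02 pracdb03; do scp authorized_keys $node:/oracleRac/oracle/.ssh/; done
$ chmod 600 ~/.ssh/authorized_keys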
Enabling SSH User Equivalency on Cluster Member Nodes
To enable Oracle Universal Installer to use the ssh and scp commands without being
prompted for a pass phrase, follow these steps:
1. On the system where you want to run Oracle Universal Installer, log in as the
oracle user.
2. Enter the following commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
3. At the prompts, enter the pass phrase for each key that you generated.
If you have configured SSH correctly, then you can now use the ssh or scp
commands without being prompted for a password or a pass phrase.
4. If you are on a remote terminal, and the local node has only one visual (which is
typical), then use the following syntax to set the DISPLAY environment variable:
Bourne, Korn, and Bash shells:
$ export DISPLAY=hostname:0
C shell:
$ setenv DISPLAY hostname:0
For example, if you are using the Bash shell, and if your hostname is pracdb01, then
enter the following command:
$ export DISPLAY=pracdb01:0
5. To test the SSH configuration, enter the following commands from the same
terminal session, testing the configuration of each cluster node, where
nodename1, nodename2, and so on, are the names of nodes in the cluster:
$ ssh pracdb01 date
Tue Sep 15 11:20:56 IST 2009
$ ssh pracdb02 date
Tue Sep 15 11:21:01 IST 2009
Note: The Oracle user's ~/.ssh/authorized_keys file on every
node must contain the contents from all of the ~/.ssh/id_rsa.pub
and ~/.ssh/id_dsa.pub files that you generated on all cluster
nodes.
These commands should display the date set on each node.
If any node prompts for a password or pass phrase, then verify that the
~/.ssh/authorized_keys file on that node contains the correct public keys.
If you are using a remote client to connect to the local node, and you see a message
similar to "Warning: No xauth data; using fake authentication data for X11
forwarding," then this means that your authorized keys file is configured correctly,
but your ssh configuration has X11 forwarding enabled. To correct this, proceed to
step 6.
6. To ensure that X11 forwarding will not cause the installation to fail, create a
user-level SSH client configuration file for the Oracle software owner user, as
follows:
a. Using any text editor, edit or create the ~oracle/.ssh/config file.
b. Make sure that the ForwardX11 attribute is set to no. For example:
Host *
ForwardX11 no
7. You must run Oracle Universal Installer from this session or remember to repeat
steps 2 and 3 before you start Oracle Universal Installer from a different terminal
session.
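With the keys loaded into the agent, a quick way to confirm user equivalency to all three nodes from the same session (a sketch using the hostnames from this procedure):

$ for node in pracdb01 pracdb02 pracdb03; do ssh $node "hostname; date"; done

Each node should print its hostname and the date without prompting for a password or pass phrase.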
14. Test the new node by executing ./runcluvfy.sh stage -pre crsinst -n
pracdb01,pracdb02,pracdb03 -verbose. If this test passes, the server is ready for
installing CRS and the Oracle software.
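For example, assuming the Oracle Clusterware media has been staged under /oracleRac/stage/clusterware (a hypothetical path; runcluvfy.sh ships in the top-level directory of the Clusterware media):

$ cd /oracleRac/stage/clusterware
$ ./runcluvfy.sh stage -pre crsinst -n pracdb01,pracdb02,pracdb03 -verbose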
15. Now run scsetup and follow the instructions to add the node.
>>> Sponsoring Node <<<

For any machine to join a cluster, it must identify a node in that cluster
willing to "sponsor" its membership in the cluster. When configuring a new
cluster, this "sponsor" node is typically the first node used to build the
new cluster. However, if the cluster is already established, the "sponsoring"
node can be any node in that cluster.

Already established clusters can keep a list of hosts which are able to
configure themselves as new cluster members. This machine should be in the
join list of any cluster which it tries to join. If the list does not include
this machine, you may need to add it by using claccess(1CL) or other tools.

And, if the target cluster uses DES to authenticate new machines attempting
to configure themselves as new cluster members, the necessary encryption keys
must be configured before any attempt to join.

What is the name of the sponsoring node [pracdb02]? pracdb01

>>> Cluster Name <<<

Each cluster has a name assigned to it. When adding a node to the cluster,
you must identify the name of the cluster you are attempting to join. A
sanity check is performed to verify that the "sponsoring" node is a member
of that cluster.

What is the name of the cluster you want to join [praccluster]?

Attempting to contact "pracdb01" ... done
Cluster name "praccluster" is correct.

Press Enter to continue:

>>> Check <<<

This step allows you to run sccheck(1M) to verify that certain basic hardware
and software pre-configuration requirements have been met. If sccheck(1M)
detects potential problems with configuring this machine as a cluster node,
a report of failed checks is prepared and available for display on the
screen.

Data gathering and report generation can take several minutes, depending on
system configuration.

Do you want to run sccheck (yes/no) [yes]? no

>>> Autodiscovery of Cluster Transport <<<

If you are using Ethernet or Infiniband adapters as the cluster transport
adapters, autodiscovery is the best method for configuring the cluster
transport.

However, it appears that scinstall has already been run at least once before
on this machine. You can either attempt to autodiscover or continue with the
answers that you gave the last time you ran scinstall.

Do you want to use autodiscovery anyway (yes/no) [no]? yes

Autodiscovery can only be used with Ethernet and Infiniband adapter types.
"pracdb01" appears to be configured with an unrecognized adapter.

Press Enter to continue:

>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected
interconnect. That is, no cluster switches are configured. However, when
there are greater than two nodes, this interactive form of scinstall assumes
that there will be exactly one switch for each private network.

Is this a two-node cluster (yes/no) [no]?

Since this is not a two-node cluster, you will be asked to configure one
switch for each private network.

Press Enter to continue:

>>> Cluster Switches <<<

All cluster transport adapters in this cluster must be cabled to a "switch".
And, each adapter on a given node must be cabled to a different switch.
Interactive scinstall requires that you identify one switch for each private
network in the cluster.

What is the name of the first switch in the cluster [switch1]?
What is the name of the second switch in the cluster [switch2]?
>>> Cluster Transport Adapters and Cables <<<

You must configure the cluster transport adapters for each node in the
cluster. These are the adapters which attach to the private cluster
interconnect.

What is the name of the first cluster transport adapter (help) [ce1]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?

Adapter "ce1" is an Ethernet adapter.

The "dlpi" transport type will be set for this cluster.

Name of the switch to which "ce1" is connected [switch1]?

Each adapter is cabled to a particular port on a switch. And, each port is
assigned a name. You can explicitly assign a name to each port. Or, for
Ethernet and Infiniband switches, you can choose to allow scinstall to assign
a default name for you. The default port name assignment sets the name to the
node number of the node hosting the transport adapter at the other end of the
cable.

Use the default port name for the "ce1" connection (yes/no) [yes]?

What is the name of the second cluster transport adapter (help) [ce2]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?

Adapter "ce2" is an Ethernet adapter.

Name of the switch to which "ce2" is connected [switch2]?
Use the default port name for the "ce2" connection (yes/no) [yes]?

>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate as a
cluster member. Since the "nodeID" is not assigned until scinstall is run,
scinstall will set this up for you.
You must supply the name of either an already-mounted file system or raw disk
partition which scinstall can use to create the global devices file system. This file system
or partition should be at least 512 MB in size.
If an already-mounted file system is used, the file system must be empty. If a raw disk
partition is used, a new file system will be created for you.
The default is to use /globaldevices.
Is it okay to use this default (yes/no) [yes]?
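For reference, a /globaldevices entry in /etc/vfstab looks something like the following before scinstall remounts it as /global/.devices/node@3 (a sketch; the device name is only an example, taken from the scinstall -r output later in this procedure, and your slice will differ):

/dev/dsk/c6t500000E0123A9060d0s6 /dev/rdsk/c6t500000E0123A9060d0s6 /globaldevices ufs 2 yes -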
>>> Automatic Reboot <<<

Once scinstall has successfully initialized the Sun Cluster software for this
machine, the machine must be rebooted. The reboot will cause this machine to
join the cluster for the first time.

Do you want scinstall to reboot for you (yes/no) [yes]?

>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -i \
-C praccluster \
-N pracdb01 \
-A trtype=dlpi,name=ce1 -A trtype=dlpi,name=ce2 \
-m endpoint=:ce1,endpoint=switch1 \
-m endpoint=:ce2,endpoint=switch2

Are these the options you want to use (yes/no) [yes]?
Do you want to continue with this configuration step (yes/no) [yes]?

Checking device to use for global devices file system ... done

Adding node "pracdb03" to the cluster configuration ... skipped
Skipped node "pracdb03" - already configured
Adding adapter "ce1" to the cluster configuration ... skipped
Skipped adapter "ce1" - already configured
Adding adapter "ce2" to the cluster configuration ... skipped
Skipped adapter "ce2" - already configured
Adding cable to the cluster configuration ... skipped
Skipped cable - already configured
Adding cable to the cluster configuration ... skipped
Skipped cable - already configured

Copying the config from "pracdb01" ... done
Copying the postconfig file from "pracdb01" if it exists ... done
No postconfig file found on "pracdb01", continuing
done

Setting the node ID for "pracdb03" ... done (id=3)

Verifying the major number for the "did" driver with "pracdb01" ... done
Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.090409113717
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Updating file ("ntp.conf.cluster") on node pracdb01 ... done
Updating file ("hosts") on node pracdb01 ... done
Updating file ("ntp.conf.cluster") on node pracdb02 ... done
Updating file ("hosts") on node pracdb02 ... done

Log file - /var/cluster/logs/install/scinstall.log.1879

Rebooting ...

updating /platform/sun4u/boot_archive
Connection to pracdb03 closed.
you have mail
root@PBAKB034 #

root@pracdb03 # scstat -g

-- Resource Groups and Resources --

            Group Name     Resources
            ----------     ---------
 Resources: qfs-rg         qfs-res
 Resources: prac-fmwk-rg   prac-fmwk-rs prac-udlm-rs

-- Resource Groups --

            Group Name     Node Name   State     Suspended
            ----------     ---------   -----     ---------
     Group: qfs-rg         pracdb01    Online    No
     Group: qfs-rg         pracdb02    Offline   No
     Group: prac-fmwk-rg   pracdb01    Online    No
     Group: prac-fmwk-rg   pracdb02    Online    No

-- Resources --

            Resource Name   Node Name   State     Status Message
            -------------   ---------   -----     --------------
  Resource: qfs-res         pracdb01    Online    Online - Service is online.
  Resource: qfs-res         pracdb02    Offline   Offline
  Resource: prac-fmwk-rs    pracdb01    Online    Online
  Resource: prac-fmwk-rs    pracdb02    Online    Online
  Resource: prac-udlm-rs    pracdb01    Online    Online
  Resource: prac-udlm-rs    pracdb02    Online    Online

root@pracdb03 # scsetup

*** Main Menu ***

Please select from one of the following options:

1) Quorum
2) Resource groups
3) Data Services
4) Cluster interconnect
5) Device groups and volumes
6) Private hostnames
7) New nodes
8) Other cluster tasks

?) Help with menu options
q) Quit

Option: 3

*** Data Services Menu ***

Please select from one of the following options:

1) Sun Cluster support for Oracle RAC

?) Help
q) Return to the Main Menu

Option: 1

*** Sun Cluster Support for Oracle RAC ***

Sun Cluster provides a support layer for running Oracle Real Application
Clusters (RAC) database instances. This option enables you to create and
modify the RAC framework resource group for managing the Sun Cluster support
for RAC. After the RAC framework resource group has been created, you can
use the Sun Cluster system administration tools to administer the RAC
framework resource group.

Is it okay to continue (yes/no) [yes]?

Please select from one of the following options:

1) Create the RAC framework resource group
2) Remove the RAC framework resource group
3) Add nodes to the RAC framework resource group
4) Remove nodes from the RAC framework resource group
s) Show the status of RAC framework resource group

q) Return to the Data Services Menu

Option: 3

Select the nodes to add to the RAC framework resource group

1) pracdb03

q) Done

Option: 1

Here is the new list of nodes for "prac-fmwk-rg" resource group:

pracdb01 pracdb02 pracdb03

Are you ready to update the list of nodes now (yes/no) [yes]?

scrgadm -c -g prac-fmwk-rg -y maximum_primaries=3 -y desired_primaries=3 -y nodelist=pracdb01,pracdb02,pracdb03

Command completed successfully.

Press Enter to continue:

=========================================================
Check the status:

root@pracdb03 # scstat -g

-- Resource Groups and Resources --

            Group Name     Resources
            ----------     ---------
 Resources: qfs-rg         qfs-res
 Resources: prac-fmwk-rg   prac-fmwk-rs prac-udlm-rs

-- Resource Groups --

            Group Name     Node Name   State     Suspended
            ----------     ---------   -----     ---------
     Group: qfs-rg         pracdb01    Online    No
     Group: qfs-rg         pracdb02    Offline   No
     Group: prac-fmwk-rg   pracdb01    Online    No
     Group: prac-fmwk-rg   pracdb02    Online    No
     Group: prac-fmwk-rg   pracdb03    Online    No

-- Resources --

            Resource Name   Node Name   State     Status Message
            -------------   ---------   -----     --------------
  Resource: qfs-res         pracdb01    Online    Online - Service is online.
  Resource: qfs-res         pracdb02    Offline   Offline
  Resource: prac-fmwk-rs    pracdb01    Online    Online
  Resource: prac-fmwk-rs    pracdb02    Online    Online
  Resource: prac-fmwk-rs    pracdb03    Online    Online
  Resource: prac-udlm-rs    pracdb01    Online    Online
  Resource: prac-udlm-rs    pracdb02    Online    Online
  Resource: prac-udlm-rs    pracdb03    Online    Online

13. Removing a node from Sun Cluster 3.2

1. Migrate resource groups and device groups off the node to the other nodes.
# scswitch -S -h node2

2. Delete node2 instances from all resource groups.
* Start with scalable resource groups, followed by failover resource groups
* Gather configuration information by running the following commands
# scrgadm -pv | grep "Res Group Nodelist"
# scconf -pv | grep "Node ID"
# scrgadm -pvv | grep "NetIfList.*value"
* Scalable Resource Group(s)
- Set maximum and desired primaries to the appropriate number
# scrgadm -c -g apache-rg -y maximum_primaries="2" \
-y desired_primaries="2"
- Set the remaining nodenames on the scalable resource group
# scrgadm -c -g apache-rg -h node1,node3
- Remove the node from the node list of the failover resource group with the shared address
# scrgadm -c -g shareaddr-rg -h node1,node3
* Failover Resource Group(s)
- Set the remaining nodenames on the failover resource group
# scrgadm -c -g logical-rg -h node1,node3
# scrgadm -c -g dg1-rg -h node1,node3
- Check for IPMP groups affected
# scrgadm -pvv -g logical-rg | grep -i netiflist
# scrgadm -pvv -g shareaddr-rg | grep -i netiflist
- Update the IPMP groups affected
# scrgadm -c -j logicalhost \
-x netiflist=sc_ipmp0@1,sc_ipmp0@3
# scrgadm -c -j shared-address \
-x netiflist=sc_ipmp0@1,sc_ipmp0@3
* Verify changes to resource groups
# scrgadm -pvv -g apache-rg | grep -i nodelist
# scrgadm -pvv -g apache-rg | grep -i netiflist
# scrgadm -pvv -g shareaddr-rg | grep -i nodelist
# scrgadm -pvv -g shareaddr-rg | grep -i netiflist
# scrgadm -pvv -g logical-rg | grep -i nodelist
# scrgadm -pvv -g logical-rg | grep -i netiflist

3. Delete node instances from all disk device groups
* Solaris Volume Manager
- Check for disksets affected
# scconf -pv | grep -i "Device group" | grep node2
# scstat -D
- Remove the node from the diskset nodelist
# metaset -s setname -d -h nodelist (use -f if needed)
* VERITAS Volume Manager
- Check for disk groups affected
# scconf -pv | grep -i "Device group" | grep node2
# scstat -D
- Remove the node from the disk group nodelist
# scconf -r -D name=dg1,nodelist=node2
* Raw Disk Device Group
- Remember to change desired secondaries to 1
- On any active remaining node(s), identify the device groups connected
# scconf -pvv | grep node2 | grep "Device group node list"
- Determine the raw device
# scconf -pvv | grep Disk
- Disable the localonly property of each Local_Disk
# scconf -c -D name=rawdisk-device-group,localonly=false
- Verify that the localonly property is disabled
# scconf -pvv | grep "Disk"
- Remove the node from the raw disk device group
# scconf -r -D name=rawdisk-device-group,nodelist=node2
Note: Steps 4-6 are not applicable to two-node clusters.
4. Remove all fully connected quorum devices.
- Check quorum disk information
# scconf -pv | grep Quorum
- Remove the quorum disk
# scconf -r -q globaldev=dN
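The same check and removal can also be done with the newer object-oriented CLI (a sketch; d1 is the quorum device used elsewhere in this procedure):

# clquorum list
# clquorum remove d1
# clquorum status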
5. Remove all fully connected storage devices from node2. Use any method that will block access from
node2 to shared storage:
- vxdiskadm to suppress access from VxVM
- cfgadm -c unconfigure (see the example after this list)
- LUN masking/mapping methods if applicable
- physical cable removal if allowed
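For example, with cfgadm (a sketch; the attachment point c2::dsk/c2t1d0 is hypothetical and must be replaced with what cfgadm reports on node2):

# cfgadm -al
# cfgadm -c unconfigure c2::dsk/c2t1d0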
6. Add back the quorum devices
# scconf -a -q globaldev=dN,node=node1,node=node3
7. Place the node being removed into maintenance state.
* Shutdown node2
# shutdown -g0 -y -i0
* On remaining node
# scconf -c -q node=node2,maintstate
* Verify quorum status
# scstat -q
8. Remove all logical transport connections from the node being removed
* Check for interconnect configuration
# scstat -W
# scconf -pv | grep cable
# scconf -pv | grep adapter
* Remove cables configuration
# scconf -r -m endpoint=node2:qfe0
# scconf -r -m endpoint=node2:qfe1
* Remove adapter configuration
# scconf -r -A name=qfe0,node=node2
# scconf -r -A name=qfe1,node=node2
9. For two-node clusters only, remove the quorum disk.
* If not already done, shut down the node to be uninstalled.
# shutdown -y -g 0
* On the remaining node, put the node to be removed into maintenance mode
# scconf -c -q node=node2,maintstate
* Place cluster in installmode
# scconf -c -q installmode
* Remove quorum disk
# scconf -r -q globaldev=dN
* Verify quorum status
# scstat -q
10. Remove the node from the cluster software configuration.
* # scconf -r -h node=node2
* # scstat -n
11. Remove the cluster software
* If not already done, shut down the node to be uninstalled.
# shutdown -g0 -y -i0
* Reboot the node into non-cluster mode.
ok> boot -x
* Remove all global file systems except /global/.devices from /etc/vfstab
* Uninstall Sun Cluster software from the node
# scinstall -r
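After scinstall -r completes, a quick sanity check (a sketch) is to confirm that no Sun Cluster packages remain on the node:

# pkginfo | grep -i "sun cluster"
# pkginfo -x | grep SUNWsc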
If you need to remove the last node of the cluster, you must first completely remove all resource and
device groups. Follow the procedure below:
1. Offline all resource groups (RGs):
# scswitch -F -g rg-name[,...]
2. Disable all configured resources:
# scswitch -n -j resource[,...]
3. Remove all resources from the resource group:
# scrgadm -r -j resource
4. Remove the now empty resource groups:
# scrgadm -r -g rg-name
5. Remove global mounts from the /etc/vfstab file and the "node@nodeid" mount entries.
6. Remove all device groups:
# scstat -D (to get a list of device groups)
# scswitch -F -D device-group-name (to take the device group offline)
# scconf -r -D name=device-group-name (to remove/unregister the device group)
NOTE: If there are any "rmt" devices, they must be removed with the command:
# /usr/cluster/dtk/bin/dcs_config -c remove -s rmt/1
This assumes that you have the package "SUNWscdtk". If you do not, you will need to install it in order to
remove the rmt/XX entries, or the "scinstall -r" will fail.
The SUNWscdtk package is the diagnostic toolkit for the cluster and is not available on the Cluster CD;
you need to get it from the following URL:
http://suncluster.eng/service/tools.html
Uninstall the Sun Cluster 3.X software:
* If not already done so, shutdown node.
# shutdown -g0 -y -i0
* Reboot the node into non-cluster mode.
ok> boot -x
* Finally, remove the Sun Cluster 3.x software using:
# scinstall -r
root@pracdb03 # metaset -s prac
Set name = prac, Set number = 1
Host Owner
pracdb01
pracdb02
pracdb03
Driv Dbase
d1 Yes
d5 Yes
d6 Yes
Take ownership of the diskset:
root@pracdb01 # metaset -s prac -t
root@pracdb01 # metaset -s prac
Set name = prac, Set number = 1
Host Owner
pracdb01 Yes
pracdb02
pracdb03
Driv Dbase
d1 Yes
d5 Yes
d6 Yes
root@pracdb01 # metaset -s prac -d -h pracdb03   --- this succeeds
root@pracdb01 # metaset -s prac
Set name = prac, Set number = 1
Host Owner
pracdb01 Yes
pracdb02
Driv Dbase
d1 Yes
d5 Yes
d6 Yes
root@pracdb01 # cldevicegroup list -v prac
Device group Type Node list
------------ ---- ---------
prac SVM pracdb01, pracdb02
Remove a cluster transport path
root@pracdb01 # clinterconnect status
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
pracdb01:nxge2 pracdb02:nxge2 Path online
pracdb01:nxge1 pracdb02:nxge1 Path online
pracdb01:nxge2 pracdb03:ce2 Path online
pracdb01:nxge1 pracdb03:ce1 Path online
pracdb02:nxge2 pracdb03:ce2 Path online
pracdb02:nxge1 pracdb03:ce1 Path online
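If the transport cables and adapters for pracdb03 have not been removed yet, the new-style CLI equivalents of the step 8 scconf commands look like this (a sketch, assuming ce1/ce2 are cabled to switch1/switch2 as configured during scinstall; run from a remaining cluster node):

# clinterconnect remove pracdb03:ce1,switch1
# clinterconnect remove pracdb03:ce2,switch2
# clinterconnect remove pracdb03:ce1
# clinterconnect remove pracdb03:ce2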
root@pracdb03 # /usr/cluster/bin/scinstall -r
Verifying that no unexpected global mounts remain in /etc/vfstab ... done
Verifying that no device services still reference this node ... done
Archiving the following to /var/cluster/uninstall/uninstall.1397/archive:
/etc/cluster ...
/etc/path_to_inst ...
/etc/vfstab ...
/etc/nsswitch.conf ...
Updating vfstab ... done
The /etc/vfstab file was updated successfully.
The original entry for /global/.devices/node@3 has been commented out.
And, a new entry has been added for /globaldevices.
Mounting /dev/dsk/c6t500000E0123A9060d0s6 on /globaldevices ... done
Cleaning up /globaldevices ... done
updating /platform/sun4u/boot_archive
Attempting to contact the cluster ...
Trying "pracdb01" ... okay
Trying "pracdb02" ... okay
Attempting to unconfigure pracdb03 from the cluster ... failed
Please consider the following warnings:
scrconf: Failed to remove node (pracdb03) - node is in use.
scrconf: Node "pracdb03" is still cabled.
Additional housekeeping may be required to unconfigure
pracdb03 from the active cluster.
Removing the "cluster" switch from "hosts" in /etc/nsswitch.conf ... done
Removing the "cluster" switch from "netmasks" in /etc/nsswitch.conf ... done
** Removing Sun Cluster data services packages **
Removing SUNWscgrepavs..done
Removing SUNWscgrepsrdf..done
Removing SUNWscgreptc..done
Removing SUNWscghb...done
Removing SUNWscgctl..done
Removing SUNWscims...done
Removing SUNWscics...done
Removing SUNWscapc...done
Removing SUNWscdns...done
Removing SUNWschadb..done
Removing SUNWschtt...done
Removing SUNWscs1as..done
Removing SUNWsckrb5..done
Removing SUNWscnfs...done
Removing SUNWscor....done
Removing SUNWscs1mq..done
Removing SUNWscsap...done
Removing SUNWsclc....done
Removing SUNWscsapdb..done
Removing SUNWscsapenq..done
Removing SUNWscsaprepl..Sep 8 12:10:44 pracdb03 genunix: WARNING: ce0: fault detected
external to device;
service degraded
Sep 8 12:10:44 pracdb03 genunix: WARNING: ce0: xcvr addr:0x01 - link down
Sep 8 12:10:44 pracdb03 in.mpathd[240]: The link has gone down on ce0
Sep 8 12:10:44 pracdb03 in.mpathd[240]: NIC failure detected on ce0 of group sc_ipmp1
done
Removing SUNWscsapscs..Sep 8 12:10:47 pracdb03 genunix: WARNING: ce2: fault detected external
to device;
service degraded
Sep 8 12:10:47 pracdb03 genunix: WARNING: ce2: xcvr addr:0x01 - link down
done
Removing SUNWscsapwebas..done
Removing SUNWscsbl...done
Removing SUNWscsyb...done
Removing SUNWscwls...done
Removing SUNWsc9ias..done
Removing SUNWscPostgreSQL..done
Removing SUNWsczone..done
Removing SUNWscdhc...done
Removing SUNWscebs...done
Removing SUNWscmqi...done
Removing SUNWscmqs...done
Removing SUNWscmys...done
Removing SUNWscsge...done
Removing SUNWscsaa...done
Removing SUNWscsag...done
Removing SUNWscsmb...done
Removing SUNWscsps...done
Removing SUNWsctomcat..done
Removing SUNWcscgctl..done
Removing SUNWjscgctl..done
Removing SUNWkscgctl..done
Removing SUNWscucm...done
Removing SUNWudlm....done
Removing SUNWudlmr...done
Removing SUNWcvmr....done
Removing SUNWcvm.....done
Removing SUNWscmd....done
** Removing Sun Cluster framework packages **
Removing SUNWkscspmu..done
Removing SUNWksc.....done
Removing SUNWjscspmu..done
Removing SUNWjscman..done
Removing SUNWjsc.....done
Removing SUNWhscspmu..done
Removing SUNWhsc.....done
Removing SUNWfscspmu..done
Removing SUNWfsc.....done
Removing SUNWescspmu..done
Removing SUNWesc.....done
Removing SUNWdscspmu..done
Removing SUNWdsc.....done
Removing SUNWcscspmu..done
Removing SUNWcsc.....done
Removing SUNWsctelemetry..done
Removing SUNWscderby..done
Removing SUNWscspmu..done
Removing SUNWscspmr..done
Removing SUNWjfreechart..done
Removing SUNWscmautilr..done
Removing SUNWscmautil..done
Removing SUNWscmasau..done
Removing SUNWscmasasen..done
Removing SUNWscmasar..done
Removing SUNWscmasa..done
Removing SUNWmdmu....done
Removing SUNWmdmr....done
Removing SUNWscvm....done
Removing SUNWscsam...done
Removing SUNWscsal...done
Removing SUNWscman...done
Removing SUNWscsmf...done
Removing SUNWscgds...done
Removing SUNWscdev...done
Removing SUNWscnmu...done
Removing SUNWscnmr...done
Removing SUNWscrtlh..done
Removing SUNWscr.....done
Removing SUNWscscku..done
Removing SUNWscsckr..done
Removing SUNWsczu....done
Removing SUNWsccomzu..done
Removing SUNWsczr....done
Removing SUNWsccomu..done
Removing SUNWscu.....done
Removing the following:
/etc/cluster ...
/dev/global ...
/dev/md/shared ...
/dev/did ...
/devices/pseudo/did@0:* ...
The /etc/inet/ntp.conf file has not been updated.
You may want to remove it or update it after uninstall has completed.
The /var/cluster directory has not been removed.
Among other things, this directory contains
uninstall logs and the uninstall archive.
You may remove this directory once you are satisfied
that the logs and archive are no longer needed.
Log file - /var/cluster/uninstall/uninstall.1397/log
13.1 UNIX Command History executed during node removal
su - oracle
bash
exit
vi /etc/hosts
clresourcegroup list
clresourcegroup prac-fmwk-rg
clresourcegroup show qfs-rg
clresourcegroup show prac-fmwk-rg
scsetup
more /etc/hosts
rsh pracdb03
scsetup
scsetup
exit
samsharefs -R samfs
samsharefs -R Data
samsharefs -s pracdb01 Data
samsharefs -s pracdb03 Data
samsharefs -s pracdb02 Data
ls Data
pwd
clear
scrgadm -p | egrep "SUNW.qfs"
scrgadm -a -g qfs-rg -h pracdb01,pracdb02,pracdb03
scrgadm
scrgadm -a -g qfs-rg -h pracdb03
scdidadm -L
scstat -g
clsetup
vi sun.history
rsh pracdb02
samfsconfig
samfsconfig /dev/did/*
df -h
more /etc/opt/SUNWsamfs/mcf
samfsconfig /dev/dsk/*
ps -ef | grep sam-sharefsd
rsh pracdb02
exit
scstat -g
scsetup
scsetup
scstat -g
scstat -i
scconf -pv
scstat -g
exit
clquorum enable d1
clquorum status
scstat -g
scconf -pvv | grep node2 | grep "Device group node list"
scconf -pvv | grep pracdb03 | grep "Device group node list"
scconf -pvv | grep Disk
scconf -C -D name=d11,localonly=false
scconf -C -D name=pracdb03,localonly=false
scconf -c -D name=pracdb03,localonly=false
scconf -pvv | grep group-type
scconf -c -D name=d11,localonly=false
scconf -c -D name=dsk/d11,localonly=false
scconf -c -D name=dsk/d10,localonly=false
scconf -c -D name=dsk/d9,localonly=false
scconf -pvv |grep Disk
scconf -r -D name=dsk/d11,nodelist=pracdb03
scconf -r -D name=dsk/d10,nodelist=pracdb03
scconf -r -D name=dsk/d9,nodelist=pracdb03
scconf -pvv |grep Disk
scconf -pvv | grep node2 | grep "Device group node list"
scconf -pvv | grep pracdb03 | grep "Device group node list"
scconf -pv | grep -i "Device group" | grep pracdb03
scconf -pv |grep Quorum
scconf -r -q globaldev=d1
scconf -pv |grep Quorum
scdidadm -L
man scdidadm
scdidadm -r
scdidadm -l
scdidadm -L
scdidadm -c
devfsadm -Cv
scstat -g
scgdevs
devfsadm -Cv
rsh pracdb02
scdidadm -C
scdidadm -r
scdidadm -R
scdidadm -L
clq status
cldev status -s Unmonitored
cldg status
cldg show prac
cldev
cldev clear
scdidadm -L
cldev
cldev status
cldev monitor all
cldev status | more
cldev remove -n pracdb03
cldev unmonitor -n pracdb03
cldev unmonitor -n pracdb03
scconf -pvv |grep pcl3-ipp2 |grep 3
/usr/cluster/bin/scstat -g | grep Online
scrgadm -c -g qfs-rg -y RG_system=false
scstat -g
scrgadm -c -g qfs-rg -y Nodelist=pracdb03
scconf -pvv |grep -i "device" |grep Dev
scstat -q | grep "Device votes"
scconf -c -q node=pracdb03,maintstate
scconf -pvv |grep pracdb03|grep Transport
scconf -r -D name=dsk/d1,nodelist=pracdb03
scstat -q | grep "Device votes"
scconf -pvv |grep -i "device" |grep Dev
Device group name: prac
(prac) Device group type: SVM
(prac) Device group failback enabled: no
(prac) Device group node list: pracdb01, pracdb02
(prac) Device group ordered node list: yes
(prac) Device group desired number of secondaries: 1
(prac) Device group diskset name: prac
Device group name: dsk/d8
(dsk/d8) Device group type: Disk
(dsk/d8) Device group failback enabled: no
(dsk/d8) Device group node list: pracdb02
(dsk/d8) Device group ordered node list: no
(dsk/d8) Device group desired number of secondaries: 1
(dsk/d8) Device group device names: /dev/did/rdsk/d8s2
Device group name: dsk/d7
(dsk/d7) Device group type: Disk
(dsk/d7) Device group failback enabled: no
(dsk/d7) Device group node list: pracdb02
(dsk/d7) Device group ordered node list: no
(dsk/d7) Device group desired number of secondaries: 1
(dsk/d7) Device group device names: /dev/did/rdsk/d7s2
Device group name: dsk/d6
(dsk/d6) Device group type: Disk
(dsk/d6) Device group failback enabled: no
(dsk/d6) Device group node list: pracdb02, pracdb01, pracdb03
(dsk/d6) Device group ordered node list: no
(dsk/d6) Device group desired number of secondaries: 1
(dsk/d6) Device group device names: /dev/did/rdsk/d6s2
Device group name: dsk/d4
(dsk/d4) Device group type: Disk
(dsk/d4) Device group failback enabled: no
(dsk/d4) Device group node list: pracdb01
(dsk/d4) Device group ordered node list: no
(dsk/d4) Device group desired number of secondaries: 1
(dsk/d4) Device group device names: /dev/did/rdsk/d4s2
scconf -pvv |grep -i "device" |grep Dev
Device group name: prac
(prac) Device group type: SVM
(prac) Device group failback enabled: no
(prac) Device group node list: pracdb01, pracdb02
(prac) Device group ordered node list: yes
(prac) Device group desired number of secondaries: 1
(prac) Device group diskset name: prac
Device group name: dsk/d8
(dsk/d8) Device group type: Disk
(dsk/d8) Device group failback enabled: no
(dsk/d8) Device group node list: pracdb02
(dsk/d8) Device group ordered node list: no
(dsk/d8) Device group desired number of secondaries: 1
(dsk/d8) Device group device names: /dev/did/rdsk/d8s2
Device group name: dsk/d7
(dsk/d7) Device group type: Disk
(dsk/d7) Device group failback enabled: no
(dsk/d7) Device group node list: pracdb02
(dsk/d7) Device group ordered node list: no
(dsk/d7) Device group desired number of secondaries: 1
(dsk/d7) Device group device names: /dev/did/rdsk/d7s2
Device group name: dsk/d6
(dsk/d6) Device group type: Disk
(dsk/d6) Device group failback enabled: no
(dsk/d6) Device group node list: pracdb02, pracdb01, pracdb03
(dsk/d6) Device group ordered node list: no
(dsk/d6) Device group desired number of secondaries: 1
(dsk/d6) Device group device names: /dev/did/rdsk/d6s2
Device group name: dsk/d4
(dsk/d4) Device group type: Disk
(dsk/d4) Device group failback enabled: no
(dsk/d4) Device group node list: pracdb01
(dsk/d4) Device group ordered node list: no
(dsk/d4) Device group desired number of secondaries: 1
(dsk/d4) Device group device names: /dev/did/rdsk/d4s2
cear
clear
scconf -pvv |grep -i "device" |grep Dev
scconf -c -q installmode
scconf -r -q globaldev=d1
scconf -a -q globaldev=d1,node=pracdb01,node=pracdb02
scconf -r -T node=pracdb03
scconf -r -h node=pracdb03
scconf -pvv |grep pracdb03
scdidadm -L
scsetup
scstat -q
scstat -q
ps -ef |grep pmon
clear
scstat -g
scstat -g
clear
scstat -g
scstat -q
dmesg
scconf -pv | more
scstat -i
clear
dmesg
exit
scstat -g
scstat -g
df -h
rsh pracdb02
clquorum list
clquorum status
scsta t-q
scstat -q
df -h
dmesg
clnode remove pracdb03
clnode remove pracdb03
scrgadm -pv |grep "qfs-rg Nodelist"
scstat -g
scrgadm -pv |grep "prac-fmwk-rg"
scrgadm -pv |grep "qfs-rg"
scstat -g
scrgadm -C -g qfs-rg -y maximum_primaries="2" -y desired_primaries="2"
scrgadm -pvv -g qfs-rg |grep -i netiflist
scconf -pv |Grep -i "prac" |grep pracdb03
scconf -pv |grep -i "prac" |grep pracdb03
scstat -D
metaset -s prac
scconf -r -D name=prac,nodelist=pracdb03
scconf -pv |grep -i "prac" |grep pracdb03
scstat -D
metaset -s prac
metaset -s prac -t
metaset -s prac
scstat -D
scconf -pv |grep -i "prac" |grep pracdb03
scconf -r -D name=prac,nodelist=pracdb03
scconf -pvv |grep pracdb03 |grep "prac"
scconf -r -D name=prac,nodelist=pracdb03
scconf -pvv | grep Disk
scconf -C -D name=d11,localonly=false
scconf -pvv |grep "Disk"
scconf -C -D name=Local_Disk,localonly=false
scconf -C -D name=dsk/d11,localonly=false
scconf -H
scconf -r -q d11
scconf -pv |grep Quorum
scstat -q
scsetup
scstat -q
scconf -a -q globaldev=d1,node=pracdb01,node=pracdb02
scsetup
scconf -a -q name=d1,node=pracdb01,node=pracdb02
scconf -a -q name=d1s2,node=pracdb01,node=pracdb03
scconf -a -q globaldev=d1s2,node=pracdb01,node=pracdb02
scstat -W
scconf -pv | grep cable
scconf -pv | grep adapter
scconf -r -A name=ce2,node=pracdb03
scconf -pv | grep adapter
scconf -r -A name=ce1,node=pracdb03
scconf -pv | grep adapter
scconf -r -A name=nxge2,node=pracdb03
scconf -pv | grep adapter
scconf -r -A name=NULL,node=pracdb03
scrgadm -C -g qfs-rg -h pracdb01,pracdb02
scrgadm -pvv -g qfs-rg |grep -i netiflist
scconf -pv |grep -i "prac
scconf -pv |grep -i "prac" |grep pracdb03
metaset -s prac
scconf -r -D name=prac,nodelist=pracdb03
man scconf
scconf -P
man scconf
scconf -r -T node=pracdb03
man scconf
scconf -r -D name=prac,nodelist=pracdb03
scconf -pvv | grep "Disk"
scconf -r -D name=/dsk/d11,nodelist=pracdb03
scconf -pv |grep Quorum
scconf -r -T node=pracdb03
scconf -c -q node=pracdb03,maintstate
scconf -pv |grep Quorum
scconf -c -q node=pracdb03,maintstat
scconf -c -q node=pracdb03,maintstate
scstat -q
scconf -c -q node=pracdb03,reset
scconf -r -q globaldev=d1
scstat -q
scconf -c -q installmode
scconf -c -q reset
scdidadm -L
scdidadm -Cv
scgdevs -Cv
scdidadm -c
scstat -q
scstat -c -q globaldev=d1,maintstate
scconf -c -q globaldev=d1,maintstate
scsetup
scinstall
scsetup
scstat -q
clquorum status
clquorum enable d1
scstat -g
scstat -q
scconf -c -q reset
scstat -q
scdidadm -r
14 Appendix A: Server Information Table
Attribute                    Node 1               Node 2               Comments
Server type                  M5000 domain 0       M5000 domain 1
Host name                    pracdb01             pracdb02
CPU
Host IP address              10.1.18.97           10.1.18.98
Memory                       32 GB                32 GB
Shared memory                10 GB                10 GB                At least 5 GB less than total RAM
VIP alias                    pracdb01-vip         pracdb02-vip
VIP IP address               10.1.18.99           10.1.18.100
OS version                   Solaris 10 10/08     Solaris 10 10/08
Console IP                   10.1.18.221          10.1.18.221
Console passwd
HCAs                         2 dual-port HCA
NIC for public and VIP       bge0                 bge0
NICs for private interface   nxge0,nxge1,nxge2    nxge0,nxge1,nxge2
QFS metadata server          pracdb01             pracdb01
Boot disk name               Disk0                Disk0
Boot disk Slice 0            /                    /
Boot disk Slice 1            Swap                 Swap
Boot disk Slice 3            /export/home         /export/home
Boot disk Slice 4            /oracleRac           /oracleRac
Boot disk Slice 5            /u01                 /u01
Boot disk Slice 6            Globaldevices        Globaldevices
Boot disk Slice 7            Metadb               Metadb
15 Appendix B: Storage Information Table
Attribute             Value         Comments
Storage type          9990
Console IP address    10.1.18.87
Disk size             33 GB
LUN type              RAID 5
LUN1, slice 0         33 GB
LUN2, slice 1         33 GB
LUN3, slice 2         33 GB
LUN4, slice 3
LUN5, slice 4
LUN6, slice 5