Monday, December 20, 2010

Backup commands – usage and examples

Backup commands – ufsdump, tar , cpio
Unix backup and restore can be done using the commands ufsdump, tar and cpio. Though these commands may be sufficient for small setups, for enterprise backups you will have to go in for a dedicated backup and restore solution such as Symantec NetBackup, EMC NetWorker or Amanda.
Any backup scheme built on these commands depends on the type of backup you
are taking and on the capability of the command to fulfill the requirement. The following
paragraphs give you an idea of the commands, their syntax and examples.

Features of ufsdump , tar , cpio

ufsdump
1. Used for complete file system backups.
2. It copies everything from regular files in a file system to special character and block device files.
3. It can work on mounted or unmounted file systems.

tar:
1. Used for single or multiple file backups.
2. Can't back up special character & block device files (0 byte files).
3. Works only on mounted file systems.

cpio:
1. Used for single or multiple file backups.
2. Can back up special character & block device files.
3. Works only on mounted file systems.
4. Needs a list of files to be backed up.
5. Preserves hard links and time stamps of the files.

Identifying the tape device in Solaris

dmesg | grep st

Checking the status of the tape drive

mt -f /dev/rmt/0 status
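
A few other mt operations come in handy when several dump images share one tape. This is just a hedged sketch; the drive name and the number of images to skip are assumptions:

mt -f /dev/rmt/0n rewind (rewind the tape; the "n" suffix selects the no-rewind device)
mt -f /dev/rmt/0n fsf 2 (skip forward past the first two images on the tape)
mt -f /dev/rmt/0 offline (rewind and take the drive offline / eject the tape)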

Backup, restore and disk copy with ufsdump:

Backup file system using ufsdump
ufsdump 0cvf /dev/rmt/0 /dev/rdsk/c0t0d0s0
or
ufsdump 0cvf /dev/rmt/0 /usr
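
The first character is the dump level: 0 means a full backup, while levels 1-9 dump only what has changed since the last lower-level dump. As a hedged example, a level 1 incremental of the same file system could look like this (the no-rewind device /dev/rmt/0n is an assumption, used so the image is appended after the previous one):

ufsdump 1cvf /dev/rmt/0n /usr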

To restore a dump with ufsrestore

ufsrestore rvf /dev/rmt/0
ufsrestore can also run in interactive mode, allowing selection of individual files and
directories using the add, ls, cd, pwd and extract commands:
ufsrestore if /dev/rmt/0
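
A minimal interactive session might look like the sketch below; the file names are only illustrative. When extract runs, ufsrestore prompts for the volume number (usually 1) and whether to set owner/modes on '.'.

ufsrestore if /dev/rmt/0
ufsrestore > ls
ufsrestore > cd etc
ufsrestore > add hosts
ufsrestore > extract
ufsrestore > quit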

Making a copy of a disk slice using ufsdump


ufsdump 0f - /dev/rdsk/c0t0d0s7 | (cd /mnt/backup; ufsrestore xf -)

Backup, restore and disk copy with tar:


Backing up all files in a directory, including subdirectories, to a tape device (/dev/rmt/0):

tar cvf /dev/rmt/0 *

Viewing a tar backup on a tape

tar tvf /dev/rmt/0

Extracting tar backup from the tape

tar xvf /dev/rmt/0
(Restoration will go to the present directory or to the original backup path, depending on
whether relative or absolute path names were used for the backup.)
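
For example, assuming the archive was written with relative path names as above, a single file can be pulled out into a scratch directory instead of on top of the original copy (the file name is only an example; check the exact name with tar tvf first):

cd /var/tmp/restore
tar xvf /dev/rmt/0 etc/hosts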

Backup, restore and disk copy with cpio:

Back up all the files in the current directory to tape.

find . -depth -print | cpio -ovcB > /dev/rmt/0
cpio expects a list of files and the find command provides that list; cpio then has
to write the archive to some destination, and the > sign redirects it to the tape. The destination can be an ordinary file as well.

Viewing cpio files on a tape

cpio -ivtB < /dev/rmt/0

Restoring a cpio backup

cpio -ivcB < /dev/rmt/0
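
cpio can also restore selected files only. The patterns must match the names exactly as they appear in the -t listing (with the find command above they start with ./), and -d tells cpio to create any missing directories. The pattern below is just an example:

cpio -ivcdB "./etc/hosts" < /dev/rmt/0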

Compress/uncompress files :

You may have to compress files before or after the backup, and this can be done with the following commands.
Compressing a file

compress -v file_name
gzip filename

To uncompress a file

uncompress file_name.Z
or
gunzip filename
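
Compression can also be applied on the fly by writing the dump to standard output and piping it through gzip. A hedged sketch, where the file system and output file names are assumptions:

ufsdump 0f - /export/home | gzip > /backup/home.dump.gz
gunzip -c /backup/home.dump.gz | ufsrestore rvf - (restore it later from the compressed file)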

What is a sticky bit

In Unix, the sticky bit is a permission bit that protects the files within a directory. If the directory has the sticky bit set, a file in it can be deleted only by the owner of the file, the owner of the directory, or the superuser. This prevents users from deleting other users' files from public directories. A t or T in the access permissions column of a directory listing indicates that the sticky bit has been set, as shown here:

drwxrwxrwt 5 root sys 458 Oct 21 17:04 /public

The sticky bit can be set with the chmod command. You need to assign the octal value 1 as the first number in a series of four octal values.

# chmod 1777 public
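
The same thing can be done symbolically, and the result checked with ls -ld:

# chmod +t public
# ls -ld public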

Solaris Volume Manager (SVM) – Creating Disk Mirrors

One great thing about Solaris (x86 and Sparc) is that some really cool disk management software is built right in, and it’s called SVM, or Solaris Volume Manager. In previous versions of Solaris it was called Solstice Disksuite, or just Disksuite for short, and it’s still referred to by that name sometimes by people who have been doing this for a long time and therefore worked with that first. The point is that they are the same thing, except SVM is the new version of the tool. Today, we are going to look at what we need to create a mirror out of two disks. Actually, we’ll be creating a mirror between two slices (partitions) of two disks. You can, for example, create a mirror between the root file system slices if you want. Or, if you follow old school rules and break out /var, /usr, etc., you can mirror those as well. You can even mirror your swap slices if you don’t mind the performance hit and need that extra uptime assurance, but we’ll talk about swap in another article. For now, let’s talk about SVM and mirrors.
For the purposes of this article, I am going to assume I have a server with two SCSI hard drives, this is the same process for IDE drives, but the drive device names will be different. The device names I am going to use are /dev/dsk/c0t0d0 and /dev/dsk/c0t1d0, notice that they are the same except for the target (t) number changes, indicating the next disk on the bus. For the slices to use, let’s mirror the root file system on slice 0 and swap on slice 1, sound good? Good.
In order to use SVM, we have to setup what are called “meta databases”. These small databases hold all of the information pertaining to the mirrors that we create, and without them, the machine won’t start. It’s important to note here that it’s not just that the server won’t start without them, the server won’t start (i.e. It goes into single user mode) if you have SVM setup and it can’t find 50% or more of these meta databases. This means that you need to put SVM on your main two drives, or even distribute copies on all local drives if you want, but don’t, for any reason, put any meta databases on removable, external or SAN drives! If you do, and you ever try to start your machine with those drives gone, it won’t start! So keep it on the local drives to make your life easier later.
The disk mirroring is done after the Solaris OS (operating system) has been installed, and therefore we can be sure that the main drive is partitioned correctly since we had to do that as part of the install. However, we need to partition the second disk the same way, the disk label (partition structure) needs to be the same on both disks in the mirror.
We need to pick what partition will hold the meta databases, we already know where / and swap are going to go, and don’t forget that slice 2 is the whole disk or backup partition, so we don’t want to use that for anything. I normally put the meta databases on slice 7. I create a partition of 256MB, which is more than you need, you can use probably 10 if you want, I just like to have some room to grow in the future. It’s important to make sure you get all the slices setup before you do the install! Now that we have determined where all the slices are going to be and what they will hold (slice 0 is / or root, slice 1 is swap, and slice 7 holds the meta information), let’s copy the partition table from disk 0 to disk 1. Luckily, you can accomplish this in one easy step, like this:

#prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

Do you understand what we are doing here? We are using the prtvtoc (print vtoc, or disklabel) command to print the current partition structure, and piping it into the fmthard (format hard) command to essentially push the partition table from one drive to the other. Be sure you get the drive names absolutely correct, or you WILL destroy data! This will NOT ask you if you are sure, and there is NO WAY to undo this if you get it backwards, or wrong! Ok, the two disks now have matching labels, awesome! Next we need to create the meta databases, which will live on slice 7.

The command will look like this:
#metadb -a -c 3 -f c0t0d0s7 c0t1d0s7
See what we are doing here? We are issuing the metadb command, the -a says to add databases, the -c 3 says to add three copies (in case one gets corrupted), and the -f option is used to create the initial state database. It is also used to force the deletion of replicas below the minimum of one. (The -a and -f options should be used together only when no state databases exist). Lastly on the line we have the disks we want to setup the databases on. Note that we didn’t have to give the absolute of full device path (no /dev/dsk), and we added an s7 to indicate slice 7. Sweet, isn’t it?! Now we have our meta databases setup, so next we need to initialize the root slice on the primary disk. Don’t worry, even though we say initialize, it isn’t destructive. Basically, we tell the SVM software to create a meta device using that root partition, which will then be paired up with another meta device that represents the root partition of the other disk to make the mirror. The only thing here that you have to think about, is what you want to call the meta device. It will be a “d” with a number, and you will have a meta device for each partition, that will be mirrored to create another meta device that is the mirror. Got that? I normally name them all close to each other, something along the lines of d11 for the root slice of disk 1, d12 for the root slice of disk 2, and then d10 for the mirror itself that is made up of disks 1 and 2. That make sense? You can name it anything you want, and some folk use complicated naming schemes that involve disk ids and parts of the serial number, but I really don’t see the point in all that. The commands to initialize the root slices for both disks are as follows:

#metainit -f d11 1 1 c0t0d0s0
#metainit -f d12 1 1 c0t1d0s0
See how easy that is? We run the metainit command, using the -f again since we already have an operating system in place, we specify d11 and d12 respectively, and we want 1 physical device in the meta device (the 1 1 tells metainit to create a one to one concatenation of the disk). Again, like before, we specify the target disk, and again with no absolute device name. Take a look though and notice that we did change from s7 to s0, since we are trying to mirror slice 0 which is our root slice. Now that we have initialized the root slices of both disks, and created the two meta devices, we want to create the meta device that will be the mirror. This command will look like this:

#metainit d10 -m d11
Again, we use the metainit command, this time using -m to indicate we are creating a mirror called d10, and attaching d11. Whoah! Wait a minute pardner! Where’s d12 at you are asking? I know you are, admit it, you’re that good! I am glad you noticed. We actually will add that to the mirror (d10) later, after we do a couple other things and reboot the machine. This is a good spot to mention the metastat command. This command will show you the current status of all of your meta devices, like the mirror itself, and all of the disks in the mirror. It’s a good idea to run this once in awhile to make sure that you don’t have a failed disk that you don’t know about. For my systems, I have a script that runs from cron to check at regular intervals and email me when it sees a problem. Before we can reboot and attach d12, we have to issue the metaroot command that will setup d10 as our boot device (essentially it goes and changes the /etc/vfstab for you). Remember that this is only for a boot device. If you were mirroring two other drives (like in a server that has four disks) that you aren’t booting off of, you don’t metaroot those. The command looks like so:

#metaroot d10
How simple. That's it! Well, that's it for the root slice anyway. We'll run through those same commands to mirror the swap devices, which I will put down for you here with some notes, but without all the explanation. We'll be using numbers in the 20s for our devices: d20, d21 and d22. See if you can follow along:
(*Note: At this point, we already have the label and meta databases in place, so the prtvtoc and metadb steps aren’t needed.)

Initialize the swap slices:

#metainit d21 1 1 c0t0d0s1
#metainit d22 1 1 c0t1d0s1
(Notice we changed to slice 1 (s1) for swap.)

Now, initialize the mirror:
#metainit d20 -m d21
And there you go, at least for the meta device part. One thing to remember though, whether you are doing swap, or a separate set of disks, if you don’t run that metaroot command (like if it’s not the boot disk), you have to change the /etc/vfstab yourself or it won’t work. Here is where we point out a device name difference for meta devices. Instead of /dev/dsk for your mirror, the meta device is now located at /dev/md/dsk/ and then the meta device name. So, our root mirror is /dev/md/dsk/d10 and our swap mirror is /dev/md/dsk/d20. Simple huh? So for your swap mirror, you would edit /etc/vfstab and change the swap device from whatever it is now, to your meta device, which is /dev/md/dsk/d20 in this example. The rest of the entry stays the same, it’s just a different device name. Lastly, in order to make all this magic work, you have to restart the machine. Once it comes back up, you can attach the second drives of the mirror with this command:

For the root mirror
#metattach d10 d12
For the swap mirror
#metattach d20 d22

Once this is done, you should be able to see the mirrors re-syncing when you run the metastat command. Just run metastat, and for each mirror meta device, you should see the re-syncing status for a while. Once the sync is done, it should change to OK.

Example metastat output for d10 after the attachment:

d10: Mirror
Submirror 0: d11
State: Okay
Submirror 1: d12
State: Resyncing
Resync in progress: 0 % done
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 279860352 blocks (133 GB)

d11: Submirror of d10
State: Okay
Size: 279860352 blocks (133 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t0d0s0 0 No Okay Yes

d12: Submirror of d10
State: Resyncing
Size: 279860352 blocks (133 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t1d0s0 0 No Okay Yes

There you have it, the output from the metastat command shows the meta device that is the mirror, d10, and the meta devices that make up the mirror. In addition, it shows the status of the mirror and devices which is real handy. For example, in the script that I use to monitor my disks, I use the following command to tell me if any meta devices have any status other than Okay. Check it out:

#metastat | grep State | egrep -v Okay

If I get any information back from that command, I just have the script email it to me so I know what is going on. Cool, huh?
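
If you want something similar, here is a minimal sketch of such a check script; the script path and mail recipient are assumptions, not anything from my actual setup:

#!/sbin/sh
# /usr/local/bin/check_metastat.sh - mail any metastat State line that is not Okay
PROBLEMS=`/usr/sbin/metastat | grep State | egrep -v Okay`
if [ -n "$PROBLEMS" ]; then
    echo "$PROBLEMS" | mailx -s "SVM problem on `hostname`" root
fi

A crontab entry such as 0 * * * * /usr/local/bin/check_metastat.sh would then run it hourly.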

We just had the long version, so here I am going to put the commands together, so you can simply see them all at once, and even use this as a reference. See what you think:

#prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

#metadb -a -c 3 -f c0t0d0s7 c0t1d0s7
#metainit -f d11 1 1 c0t0d0s0
#metainit -f d12 1 1 c0t1d0s0
#metainit d10 -m d11
#metaroot d10
#metainit d21 1 1 c0t0d0s1
#metainit d22 1 1 c0t1d0s1
#metainit d20 -m d21
>REBOOT<
#metattach d10 d12
#metattach d20 d22

There you have it! That’s how easy it is to create disk mirrors and protect your data with SVM. I hope you enjoyed this article and found it useful!

HOWTO: Mirrored root disk on Solaris

0. Partition the first disk
# format c0t0d0
Use the partition tool inside format (enter "p" for the partition menu, then "p" again to print the table) to set up the slices. We assume the following slice setup afterwards:
# Tag Flag Cylinders Size Blocks
- ---------- ---- ------------- -------- --------------------
0 root wm 0 - 812 400.15MB (813/0/0) 819504
1 swap wu 813 - 1333 256.43MB (521/0/0) 525168
2 backup wm 0 - 17659 8.49GB (17660/0/0) 17801280
3 unassigned wm 1334 - 1354 10.34MB (21/0/0) 21168
4 var wm 1355 - 8522 3.45GB (7168/0/0) 7225344
5 usr wm 8523 - 14764 3.00GB (6242/0/0) 6291936
6 unassigned wm 14765 - 16845 1.00GB (2081/0/0) 2097648
7 home wm 16846 - 17659 400.15MB (813/0/0) 819504
1. Copy the partition table of the first disk to its future mirror disk
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
2. Create at least two state database replicas on each disk
# metadb -a -f -c 2 c0t0d0s3 c0t1d0s3
Check the state of all replicas with metadb:
# metadb
Notes:
A state database replica contains configuration and state information about the meta devices. Make sure that at least 50% of the replicas are always active!

3. Create the root slice mirror and its first submirror
# metainit -f d10 1 1 c0t0d0s0
# metainit -f d20 1 1 c0t1d0s0
# metainit d30 -m d10
Run metaroot to prepare /etc/vfstab and /etc/system (do this only for the root slice!):
# metaroot d30
4. Create the swap slice mirror and its first submirror
# metainit -f d11 1 1 c0t0d0s1
# metainit -f d21 1 1 c0t1d0s1
# metainit d31 -m d11
5. Create the var slice mirror and its first submirror
# metainit -f d14 1 1 c0t0d0s4
# metainit -f d24 1 1 c0t1d0s4
# metainit d34 -m d14
6. Create the usr slice mirror and its first submirror
# metainit -f d15 1 1 c0t0d0s5
# metainit -f d25 1 1 c0t1d0s5
# metainit d35 -m d15
7. Create the unassigned slice mirror and its first submirror
# metainit -f d16 1 1 c0t0d0s6
# metainit -f d26 1 1 c0t1d0s6
# metainit d36 -m d16
8. Create the home slice mirror and its first submirror
# metainit -f d17 1 1 c0t0d0s7
# metainit -f d27 1 1 c0t1d0s7
# metainit d37 -m d17
9. Edit /etc/vfstab to mount all mirrors after boot, including mirrored swap

/etc/vfstab before changes:
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/md/dsk/d30 /dev/md/rdsk/d30 / ufs 1 no logging
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /usr ufs 1 no ro,logging
/dev/dsk/c0t0d0s4 /dev/rdsk/c0t0d0s4 /var ufs 1 no nosuid,logging
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /home ufs 2 yes nosuid,logging
/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /opt ufs 2 yes nosuid,logging
swap - /tmp tmpfs - yes -
/etc/vfstab after changes:
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d31 - - swap - no -
/dev/md/dsk/d30 /dev/md/rdsk/d30 / ufs 1 no logging
/dev/md/dsk/d35 /dev/md/rdsk/d35 /usr ufs 1 no ro,logging
/dev/md/dsk/d34 /dev/md/rdsk/d34 /var ufs 1 no nosuid,logging
/dev/md/dsk/d37 /dev/md/rdsk/d37 /home ufs 2 yes nosuid,logging
/dev/md/dsk/d36 /dev/md/rdsk/d36 /opt ufs 2 yes nosuid,logging
swap - /tmp tmpfs - yes -
Notes:
The entry for the root device (/) has already been altered by the metaroot command we executed before.

10. Reboot the system
# lockfs -fa && init 6
11. Attach the second submirrors to all mirrors
# metattach d30 d20
# metattach d31 d21
# metattach d34 d24
# metattach d35 d25
# metattach d36 d26
# metattach d37 d27
Notes:
This will finally cause the data from the boot disk to be synchronized with the mirror drive.
You can use metastat to track the mirroring progress.
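For example, to watch just the resync lines rather than the full output (a small convenience, not part of the original procedure):
# metastat | grep -i resync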

12. Change the crash dump device to the swap metadevice
# dumpadm -d `swap -l | tail -1 | awk '{print $1}'`
13. Make the mirror disk bootable
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
Notes:
This will install a boot block to the second disk.

14. Determine the physical device path of the mirror disk
# ls -l /dev/dsk/c0t1d0s0
... /dev/dsk/c0t1d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:a
15. Create a device alias for the mirror disk
# eeprom "nvramrc=devalias mirror /pci@1f,4000/scsi@3/disk@1,0"
# eeprom "use-nvramrc?=true"
Add the mirror device alias to the Open Boot parameter boot-device to prepare for the case of a problem with the primary boot device.
# eeprom "boot-device=disk mirror cdrom net"
You can also configure the device alias and boot-device list from the Open Boot Prompt (OBP a.k.a. ok prompt):
ok nvalias mirror /pci@1f,4000/scsi@3/disk@1,0
ok setenv use-nvramrc? true
ok setenv boot-device disk mirror cdrom net
Notes:
From the OBP, you can use boot mirror to boot from the mirror disk.
On my test system, I had to replace sd@1,0:a with disk@1,0. Use devalias on the OBP prompt to determine the correct device path.

Monday, November 15, 2010

A quick guide to setting up imap on solaris

A quick guide to setting up imap on solaris


Installing packages

Get the following packages from www.sunfreeware.com:

openssl-0.9.8e-sol10-sparc-local
imap-2006e-sol10-sparc-local

and install both of them

/etc/services configuration

Ensure the following /etc/services entries are present
pop2 109/tcp pop pop-2 # Post Office Protocol - V2
pop3 110/tcp # Post Office Protocol - Version 3
imap 143/tcp imap2 # Internet Mail Access Protocol v2
imaps 993/tcp

inetd configuration

The inetd configuration on Solaris 10 is a pain to set up now that you can't just edit inetd.conf; however, you can use inetd.conf as an input to inetconv.

This is the easiest way !

Add in the following to inetd.conf

pop stream tcp nowait root /usr/local/sbin/ipop2d ipop2d
pop3 stream tcp nowait root /usr/local/sbin/ipop3d ipop3d
imap stream tcp nowait root /usr/local/sbin/imapd imapd
pop3s stream tcp nowait root /usr/local/sbin/ipop3d ipop3d
imaps stream tcp nowait root /usr/local/sbin/imapd imapd

Then run
#inetconv -f

to create the service entries. Then use inetadm to check they are ok.

root@host: inetadm | egrep "pop|imap"


enabled online svc:/network/pop3/tcp:default
enabled online svc:/network/imap/tcp:default
enabled online svc:/network/pop3s/tcp:default
enabled online svc:/network/imaps/tcp:default
enabled online svc:/network/pop/tcp:default

SSL configuration

Then you need to create an SSL certificate, as imapd will not accept plain text authentication otherwise.

If you don't, you will see the following type of error in syslog when you try to connect with a plain text password.

Mar 29 09:56:58 myserver imapd[6959]: [ID 210418 auth.notice] Login disabled user=user1 auth=user1 host=myotherserver.example.com [10.11.12.13]



Use openssl to create certificate for imap.



cd /usr/local/ssl/certs
/usr/local/ssl/bin/openssl req -new -x509 -nodes -out imapd.pem \
-keyout imapd.pem -days 365

This should create an imapd.pem certificate file in the cert directory
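
You can sanity-check the resulting certificate before pointing any clients at it (an optional step):

/usr/local/ssl/bin/openssl x509 -in imapd.pem -noout -subject -dates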

Client configuration

Then, in the account options of your mail client (Netscape, Outlook, etc.), choose the option to authenticate using SSL.

Thursday, November 11, 2010

Recovering a System to a Different Machine Using ufsrestore and the Solaris 9 or 10 OS

Here is a procedure for recovering a failed server to another server on a different platform. This failed server is running the Solaris 9 or 10 Operating System for SPARC platforms and Solaris Volume Manager. The procedure could be modified to work for a system that runs the Solaris OS for x86 platforms.




Scenario: A couple of servers share one tape drive connecting to server Tapehost. The servers are backed up to tapes using ufsdump. One old server, Myhost, which runs Solaris Volume Manager, fails -- and you want to restore Myhost to a new server on a different platform.



Part A: Restore From Remote Tape to the New Machine

If there is more than one ufsdump image on a tape, you must write down which image is for which file system backup of Myhost right after the backup occurs.



Here, I assume that the root file system's full backup of Myhost is the third image on the tape.



1. Position the tape in Tapehost (10.1.1.47) for the root file system's full backup image of Myhost:



root@Tapehost# mt -f /dev/rmt/0n fsf 3

2. On Myhost (10.1.1.46), boot into single-user mode from a CD-ROM of the same OS version:



ok boot cdrom -s

3. Enable the network port, for example, bge0:



# ifconfig bge0 10.1.1.46 up

4. Using the format command, prepare partitions for file systems. The basic procedure is to format the disk, select the disk, create partitions, and label the disk.



5. Create a new root file system on a partition, for example, /dev/rdsk/c1t0d0s0, and mount it to /mnt:



# newfs /dev/rdsk/c1t0d0s0

# mount /dev/dsk/c1t0d0s0 /mnt

6. Restore the full backup of the root file system from tape:



# cd /mnt

# ufsrestore rf 10.1.1.47:/dev/rmt/0n

7. If you want to restore the incremental backup, re-position the remote tape and use the ufsrestore command again. After restoring, remove the restoresymtable file.



# rm restoresymtable
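
For example, if the level 1 incremental of the root file system is also on the tape, the sequence might look like this sketch (the image number used with fsf is only an assumption; check your own backup notes):

root@Tapehost# mt -f /dev/rmt/0n rewind
root@Tapehost# mt -f /dev/rmt/0n fsf 5

Then, on Myhost, still in /mnt:

# ufsrestore rf 10.1.1.47:/dev/rmt/0n
# rm restoresymtable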

8. Install boot block in the root disk:



# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0

9. Unmount the root file system:



# cd /

# umount /mnt

10. Repeat steps 1, 5, 6, 7, and 9 to restore other file systems.



11. Mount the root file system, /dev/dsk/c1t0d0s0 to /mnt, and edit /mnt/etc/vfstab so that each mount point mounts in the correct partition.



For example, change the following line from this:



/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -

To this:



/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -

Part B: Remove Solaris Volume Manager Information

Use the procedure below to remove the Solaris Volume Manager information.



Note: Another way to clear out Solaris Volume Manager is to reboot into single-user mode and use metaclear and metadb -d. But with the Solaris 10 OS, the mdmonitor service will complain when the system first reboots. However, the complaints will be gone after the Solaris Volume Manager information is cleared out.



1. If Myhost had a mirrored root file system, there is an entry similar to rootdev:/pseudo/md@0:0,0,blk in the /etc/system file. After performing the procedure in Part A, remove this entry from /mnt/etc/system. Do not just comment it out.



2. All of the Solaris Volume Manager information is stored in three files: /kernel/drv/md.conf, /etc/lvm/mddb.cf, and /etc/lvm/md.cf. So to clear out Solaris Volume Manager, overwrite these files with the files from a system without Solaris Volume Manager.



Note: If you intend to configure the meta devices the same way they were, configuration information is in the /etc/lvm/md.cf file. So take notes before this file is overwritten.



# cp /kernel/drv/md.conf /mnt/kernel/drv/md.conf

# cp /etc/lvm/mddb.cf /mnt/etc/lvm/mddb.cf

# cp /etc/lvm/md.cf /mnt/etc/lvm/md.cf



Part C: Reconfigure /devices, /dev, and /etc/path_to_inst

1. Because the new server has different hardware than the old server, the device trees will change too. Update the /etc/path_to_inst file to reflect this change.



# rm -r /mnt/dev/*

# rm -r /mnt/devices/*

# devfsadm -C -r /mnt -p /mnt/etc/path_to_inst

2. Reboot the system from the root disk:



# init 6

If it does not reboot, you can use setenv boot-device from OpenBoot PROM or eeprom boot-device from the OS to set up the root disk as boot disk.
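
For example (the disk alias is an assumption; use devalias at the ok prompt to see the aliases your hardware provides):

ok setenv boot-device disk1

or, from the running OS:

# eeprom boot-device=disk1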

Monday, September 20, 2010

Simple Guide on Installing 2 Nodes Oracle 10g RAC on Solaris 10 64bit

Introduction

Network Configuration (Hostname and IP address)
Create Oracle groups and Oracle user
Prepare disk for Oracle binaries (Local disk)
iSCSI Configuration
Prepare disk for OCR, Voting and ASM
Setting Kernel Parameters
Check and install required package
Installing Oracle Clusterware
Installing Oracle Database 10g Software
Create ASM instance and ASM diskgroup


Introduction

This article is intended for people who have a basic knowledge of Oracle RAC. It does not detail everything required to configure a RAC database; please refer to the Oracle documentation for full explanations.

This article, however, focuses on putting together your own Oracle RAC 10g environment for development and testing by using Solaris servers and a low-cost shared disk solution: iSCSI served by Openfiler (Openfiler installation and disk management are not covered in this article).

The two Oracle RAC nodes will be configured as follows:

Oracle Database Files
RAC Node Name  Instance Name  Database Name  $ORACLE_BASE  File System for DB Files
soladb1        sola1          sola           /oracle       ASM
soladb2        sola2          sola           /oracle       ASM

Oracle Clusterware Shared Files
File Type                File Name           iSCSI Volume Name  Mount Point  File System
Oracle Cluster Registry  /dev/rdsk/c2t3d0s2  ocr                -            RAW
CRS Voting Disk          /dev/rdsk/c2t4d0s2  vot                -            RAW

The Oracle Clusterware software will be installed to /oracle/product/10.2.0/crs_1 on both the nodes that make up the RAC cluster. All of the Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed to shared volumes being managed by Automatic Storage Management (ASM).

1. Network Configuration (Hostname and IP address)

Perform the following network configuration on both Oracle RAC nodes in the cluster

Both of the Oracle RAC nodes should have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should only be used by Oracle to transfer Cluster Manager and Cache Fusion related data along with data for the network storage server (Openfiler). Although it is possible to use the public network for the interconnect, this is not recommended as it may cause degraded database performance (reducing the amount of bandwidth for Cache Fusion and Cluster Manager traffic). For a production RAC implementation, the interconnect should be at least gigabit (or more) and only be used by Oracle as well as having the network storage server on a separate gigabit network.

The following example is from soladb1:

i. Update entry of /etc/hosts

# cat /etc/hosts

127.0.0.1 localhost


# Public Network (e1000g0)
192.168.2.100 soladb1 loghost
192.168.2.101 soladb2

# Public Virtual IP (VIP) addresses
192.168.2.104 soladb1-vip
192.168.2.105 soladb2-vip

# Private Interconnect (e1000g1)
10.0.0.100 soladb1-priv
10.0.0.101 soladb2-priv

ii. Edit the server hostname by updating the /etc/nodename file
# cat /etc/nodename
soladb1
iii. Update/add the /etc/hostname.<interface> files:
# cat hostname.e1000g0
soladb1

# cat hostname.e1000g1
soladb1-priv

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from soladb1:

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 192.168.2.100 netmask ffffff00 broadcast 192.168.2.255
    ether 0:50:56:99:45:20
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 10.0.0.100 netmask ff000000 broadcast 10.255.255.255
    ether 0:50:56:99:4f:a1


Adjusting Network Settings
The UDP (User Datagram Protocol) settings affect cluster interconnect transmissions. If the buffers set by these parameters are too small, then incoming UDP datagrams can be dropped due to insufficient space, which requires send-side retransmission. This can result in poor cluster performance.

On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. The default values for these parameters on Solaris 10 are 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.

To see what these parameters are currently set to, enter the following commands:
# ndd /dev/udp udp_xmit_hiwat
# ndd /dev/udp udp_recv_hiwat

To set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536

We need to write a startup script udp_rac in /etc/init.d with the following contents to set these values when the system boots.

#!/sbin/sh
case "$1" in
'start')
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
;;
'state')
ndd /dev/udp udp_xmit_hiwat
ndd /dev/udp udp_recv_hiwat
;;
*)
echo "Usage: $0 { start | state }"
exit 1
;;
esac

We now need to create a link to this script in the /etc/rc3.d directory.

# ln -s /etc/init.d/udp_rac /etc/rc3.d/S86udp_rac
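
The script also needs to be executable before it will run at boot, and it does not hurt to test it by hand; the ownership shown is just the usual convention for init scripts:

# chmod 744 /etc/init.d/udp_rac
# chown root:sys /etc/init.d/udp_rac
# /etc/init.d/udp_rac state (should print 65536 twice once the values have been set)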


2. Create Oracle groups and Oracle user
Perform the following task on all Oracle RAC nodes in the cluster
We will create the oinstall and dba groups and the oracle user account, along with all appropriate directories.

# mkdir -p /oracle
# groupadd -g 501 oinstall
# groupadd -g 502 dba

# useradd -s /usr/bin/bash -u 500 -g 501 -G 502 -d /oracle -c "Oracle Software Owner" oracle
# chown -R oracle:dba /oracle
# passwd oracle


Modify Oracle user environment variable
Perform the following task on all Oracle RAC nodes in the cluster


After creating the oracle user account on both nodes, ensure that the environment is setup correctly by using the following .bash_profile (Please note that the .bash_profile will not exist on Solaris; you will have to create it).

The following example is from soladb1:

# su - oracle
$ cat .bash_profile
PATH=/usr/sbin:/usr/bin
export ORACLE_SID=sola1
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin






3. Prepare disk for Oracle binaries (Local disk)

Perform the following task on all Oracle RAC nodes in the cluster


1. Format the disk

# format
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): 1

format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
Y

format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> p (print - display the current table)
Current partition table (original):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0

partition> label
Ready to label disk, continue? Y

2. Create solaris file system
# newfs /dev/dsk/c1t1d0s2

3. Add entry to /etc/vfstab
# cat /etc/vfstab
/dev/dsk/c1t1d0s2 /dev/rdsk/c1t1d0s2 /oracle ufs - yes -

4. mount the filesystem
# mkdir /oracle
# mount /oracle

5.Change Owner of /oracle
# chown -R oracle:oinstall /oracle


4. iSCSI Configuration

Perform the following task on all Oracle RAC nodes in the cluster


In this article, we will be using the Static Config method. We first need to verify that the iSCSI software packages are installed on our servers before we can proceed further.

# pkginfo SUNWiscsiu SUNWiscsir
system SUNWiscsir Sun iSCSI Device Driver (root)
system SUNWiscsiu Sun iSCSI Management Utilities (usr)

After verifying that the iSCSI software packages are installed on the client machines (soladb1, soladb2) and that the iSCSI Target (Openfiler) is configured, run the following from the client machine to discover all available iSCSI LUNs. Note that the Openfiler network storage server is accessed through the private network, at the address 10.0.0.108.

Configure the iSCSI target device to be discovered statically by specifying the IQN, IP address and port number:

# iscsiadm add static-config iqn.2006-01.com.openfiler:tsn.2fc90b6b9c73,10.0.0.108:3260

Listing Current Discovery Settings
# iscsiadm list discovery
Discovery:
Static: disabled
Send Targets: disabled
iSNS: disabled

The iSCSI connection is not initiated until the discovery method is enabled. This is enabled using the following command:

# iscsiadm modify discovery --static enable
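
At this point, a quick check that the target was actually picked up (the output will list the Openfiler IQN and its LUNs):

# iscsiadm list target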

Create the iSCSI device links for the local system. The following command can be used to do this:

# devfsadm -i iscsi

To verify that the iSCSI devices are available on the node, we will use the format command. The output of the format command should look like the following:

# format
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c2t3d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,0
3. c2t4d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,1
4. c2t5d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,2
5. c2t6d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,3
6. c2t7d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,4
Specify disk (enter its number):




5. Prepare disk for OCR, Voting and ASM

Perform the following task on one(1) of the Oracle RAC nodes in the cluster


Now, we need to create partitions on the iSCSI volumes. The main point is that when formatting the devices to be used for the OCR and the Voting Disk files, the disk slices to be used must skip the first cylinder (cylinder 0) to avoid overwriting the disk VTOC (Volume Table of Contents). The VTOC is a special area of the disk set aside for storing information about the disk's controller, geometry and slices.

Oracle Shared Drive Configuration
File System Type  iSCSI Target (short) Name  Size    Device Name         ASM Dg Name  File Types
RAW               ocr                        300 MB  /dev/rdsk/c2t3d0s2  -            Oracle Cluster Registry (OCR) File
RAW               vot                        300 MB  /dev/rdsk/c2t4d0s2  -            Voting Disk
RAW               asmspfile                  30 MB   /dev/rdsk/c2t7d0s2  -            ASM SPFILE
ASM               asm1                       14 GB   /dev/rdsk/c2t5d0s2  DATA         Oracle Database Files
ASM               asm2                       14 GB   /dev/rdsk/c2t6d0s2  ARCH         Oracle Database Files

Perform the operation below for all of the iSCSI disks, from the soladb1 node only, using the format command.


# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c2t3d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,0
3. c2t4d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,1
4. c2t5d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,2
5. c2t6d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,3
6. c2t7d0
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,4
Specify disk (enter its number): 2
selecting c2t3d0
[disk formatted]

FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit

format> partition
Please run fdisk first
format> fdisk
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS system" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> partition
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> print
Current partition table (unnamed):
Total disk cylinders available: 508 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 507 508.00MB (508/0/0) 1040384
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 unassigned wm 0 0 (0/0/0) 0

partition> 2
Part Tag Flag Cylinders Size Blocks
2 unassigned wm 0 - 507 508.00MB (508/0/0) 1040384

Enter partition id tag[backup]:

Enter partition permission flags[wm]:
Enter new starting cyl[0]: 5
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: $
partition> label
Ready to label disk, continue? y

partition> quit

Repeat this operation for all the iSCSI disks.

Setting Device Permissions

The devices we will be using for the various components of this article (e.g. the OCR and the voting disk) must have the appropriate ownership and permissions set on them before we can proceed to the installation stage. We will set the permissions and ownership using the chown and chmod commands as follows (this must be done as the root user):

# chown root:oinstall /dev/rdsk/c2t3d0s2
# chmod 660 /dev/rdsk/c2t3d0s2
# chown oracle:oinstall /dev/rdsk/c2t4d0s2
# chmod 660 /dev/rdsk/c2t4d0s2
# chown oracle:oinstall /dev/rdsk/c2t7d0s2
# chown oracle:oinstall /dev/rdsk/c2t5d0s2
# chown oracle:oinstall /dev/rdsk/c2t6d0s2

These permissions will be persistent across reboots; no further configuration of the permissions is needed.
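
You can confirm that the ownership took effect on the underlying device nodes with ls -lL; the -L option follows the /dev symbolic links to the real devices:

# ls -lL /dev/rdsk/c2t3d0s2 /dev/rdsk/c2t4d0s2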


6. Setting Kernel Parameters
In Solaris 10 there is a new way of setting kernel parameters. The old Solaris 8 and 9 approach of editing the /etc/system file is deprecated; Solaris 10 instead uses the resource control facility, and this method does not require the system to be rebooted for the change to take effect.

Create a default project for the oracle user.
# projadd -U oracle -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle


Modify the max-shm-memory Parameter
# projmod -s -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle


Modify the max-sem-ids Parameter
# projmod -s -K "project.max-sem-ids=(priv,256,deny)" user.oracle

Check the Parameters as User oracle
$ prctl -i project user.oracle
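
To confirm that the oracle user actually lands in the new project at login, you can also run id as the oracle user; the projid column should show user.oracle:

$ id -p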

Configure RAC Nodes for Remote Access
Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

Before you can install and use Oracle RAC, you must configure either secure shell (SSH) or remote shell (RSH) for the oracle user account on both of the Oracle RAC nodes in the cluster. The goal here is to set up user equivalence for the oracle user account. User equivalence enables the oracle user account to access all other nodes in the cluster without the need for a password. This can be configured using either SSH or RSH, where SSH is the preferred method.
Perform the operation below as the oracle user to set up RSH between all nodes.

# su - oracle
$ cd
$ vi .rhosts
+
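
With the .rhosts file in place (and the rsh/rlogin services enabled), user equivalence can be tested from each node with a simple remote command, which must complete without a password prompt:

$ rsh soladb2 date
$ rsh soladb1 date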

7. Check and install required package

Perform the following checks on all Oracle RAC nodes in the cluster


The following packages must be installed on each server before you can continue. To check whether any of these required packages are installed on your system, use the pkginfo -i command as follows:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibmr SUNWlibm SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWxwplt SUNWmfrun SUNWxwplr SUNWxwdv SUNWbinutils SUNWgcc SUNWuiu8

If you need to install any of the above packages, use the pkgadd -d command, e.g.:
# pkgadd -d /cdrom/sol_10_1009_x86/Solaris_10/Product -s /var/spool/pkg SUNWi15cs
# pkgadd SUNWi15cs

8. Installing Oracle Clusterware
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (soladb1). The Oracle Clusterware software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

Using xstart or any xterm client, login as Oracle user and start the installation.

$ ./runInstaller

Screen Name Response
Welcome Screen Click Next
Specify Inventory directory and credentials Accept the default values:
Inventory directory: /oracle/oraInventory
Operating System group name: oinstall
Specify Home Details Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
Name: OraCrs10g_home
Path: /oracle/product/10.2.0/crs_1
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Specify Cluster Configuration Cluster Name: crs

Public Node Name Private Node Name Virtual Node Name
soladb1 soladb1-priv soladb1-vip
soladb2 soladb2-priv soladb2-vip

Specify Network Interface Usage Interface Name Subnet Interface Type
e1000g0 192.168.2.0 Public
e1000g1 10.0.0.0 Private

Specify OCR Location Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored OCR file, enhancing cluster reliability. For the purpose of this example, I did not choose to mirror the OCR file by using the option of “External Redundancy”:

Specify OCR Location: /dev/rdsk/c2t3d0s2

Specify Voting Disk Location For the purpose of this example, I did not choose to mirror the voting disk by using the option of “External Redundancy”:

Voting Disk Location: /dev/rdsk/c2t4d0s2

Summary Click Install to start the installation!
Execute Configuration Scripts After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh script. Open a new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), as the “root” user account.

Navigate to the /oracle/oraInventory directory and run orainstRoot.sh ON ALL NODES in the RAC cluster.


--------------------------------------------------------------------------------
Within the same new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), stay logged in as the “root” user account.

Navigate to the /oracle/product/10.2.0/crs_1 directory and locate the root.sh file for each node in the cluster – (starting with the node you are performing the install from). Run the root.sh file ON ALL NODES in the RAC cluster ONE AT A TIME.

You will receive several warnings while running the root.sh script on all nodes. These warnings can be safely ignored.

The root.sh script may take a while to run.

Go back to the OUI and acknowledge the “Execute Configuration scripts” dialog window after running the root.sh script on both nodes.

End of installation At the end of the installation, exit from the OUI.

After successfully installing Oracle 10g Clusterware (10.2.0.1), start the OUI again to patch the clusterware with the latest patch set available (10.2.0.5). You can refer back to the steps above for the patching activity.

Verify Oracle Clusterware Installation
After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC Cluster

$ /oracle/product/10.2.0/crs_1/bin/olsnodes
soladb1
soladb2


$ /oracle/product/10.2.0/crs_1/bin/crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....db1.gsd application ONLINE ONLINE soladb1
ora....db1.ons application ONLINE ONLINE soladb1
ora....db1.vip application ONLINE ONLINE soladb1
ora....db2.gsd application ONLINE ONLINE soladb2
ora....db2.ons application ONLINE ONLINE soladb2
ora....db2.vip application ONLINE ONLINE soladb2

9. Installing Oracle Database 10g Software
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (soladb1). The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

Using xstart or any xterm client, login as Oracle user and start the installation.

$ ./runInstaller

Screen Name Response
Welcome Screen Click Next
Select Installation Type Select the Enterprise Edition option.
Specify Home Details Set the Name and Path for the ORACLE_HOME as follows:
Name: OraDb10g_home1
Path: /oracle/product/10.2.0/db_1
Specify Hardware Cluster Installation Mode Select the Cluster Installation option then select all nodes available. Click Select All to select all servers: soladb1 and soladb2.

If the installation stops here and the status of any of the RAC nodes is “Node not reachable”, perform the following checks:

Ensure Oracle Clusterware is running on the node in question. (crs_stat -t)
Ensure you are able to reach the node in question from the node you are performing the installation from.

Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox.

If you did not run the OUI with the -ignoreSysPrereqs option, then the kernel parameters prerequisite check will fail. This is because the OUI is looking at the /etc/system file to check the kernel parameters. As we discussed earlier, this file is not used by default in Solaris 10. This is documented in Metalink Note 363436.1.

Click Next to continue.

Select Database Configuration Select the option to “Install database software only.”

Remember that we will create the clustered database as a separate step using DBCA.

Summary Click on Install to start the installation!
Root Script Window – Run root.sh After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run on all nodes in the RAC cluster one at a time starting with the node you are running the database installation from.

First, open a new console window on the node you are installing the Oracle 10g database software from as the root user account. For me, this was soladb1.

Navigate to the /oracle/product/10.2.0/db_1 directory and run root.sh.

After running the root.sh script on all nodes in the cluster, go back to the OUI and acknowledge the “Execute Configuration scripts” dialog window.

End of installation At the end of the installation, exit from the OUI.

After successfully installing Oracle Database 10g (10.2.0.1), start the OUI again to patch the database software with the latest patch set available (10.2.0.5). You can refer back to the steps above for the patching activity.

Run the Network Configuration Assistant
To start NETCA, run the following:
$ netca

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name Response
Select the Type of Oracle
Net Services Configuration Select Cluster Configuration
Select the nodes to configure Select all of the nodes: soladb1 and soladb2.
Type of Configuration Select Listener configuration.
Listener Configuration – Next 6 Screens The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:
What do you want to do: Add
Listener name: LISTENER
Selected protocols: TCP
Port number: 1521
Configure another listener: No
Listener configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Select Naming Methods configuration.
Naming Methods Configuration The following screens are:
Selected Naming Methods: Local Naming
Naming Methods configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Click Finish to exit the NETCA.

The Oracle TNS listener process should now be running on all nodes in the RAC cluster.

$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.lsnr application ONLINE ONLINE soladb1
ora....db1.gsd application ONLINE ONLINE soladb1
ora....db1.ons application ONLINE ONLINE soladb1
ora....db1.vip application ONLINE ONLINE soladb1
ora....B2.lsnr application ONLINE ONLINE soladb2
ora....db2.gsd application ONLINE ONLINE soladb2
ora....db2.ons application ONLINE ONLINE soladb2
ora....db2.vip application ONLINE ONLINE soladb2

10. Create ASM instance and ASM diskgroup

To start the ASM instance creation process, run the following command on any node of the Oracle 10g RAC cluster as the oracle user.

$ dbca

Screen Name Response
Welcome Screen Select “Oracle Real Application Clusters database.”
Operations Select Configure Automatic Storage Management
Node Selection Click on the Select All button to select all servers: soladb1 and soladb2.
Create ASM Instance Supply the SYS password to use for the new ASM instance.

Also, starting with Oracle 10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to modify the default entry for “Create server parameter file (SPFILE)” to reside on the RAW partition as follows: /dev/rdsk/c2t7d0s2. All other options can stay at their defaults.

You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Select the OK button to acknowledge this dialog.

The OUI will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups To start, click the Create New button. This will bring up the “Create Disk Group” window with three of the partitions we created earlier. If you don't see any disks, click the Change Disk Discovery Path button and enter /dev/rdsk/*

For the first “Disk Group Name”, I used the string “DATA”. Select the first RAW partition (in my case /dev/rdsk/c2t5d0s2) in the “Select Member Disks” window. Keep the “Redundancy” setting at “External”.

After verifying all values in this window are correct, click the [OK] button. This will present the “ASM Disk Group Creation” dialog. When the ASM Disk Group Creation process is finished, you will be returned to the “ASM Disk Groups” windows.

Click the Create New button again. For the second “Disk Group Name”, I used the string “ARCH”. Select the last RAW partition (/dev/rdsk/c2t6d0s2) in the “Select Member Disks” window. Keep the “Redundancy” setting to “External”.

After verifying all values in this window are correct, click the [OK] button. This will present the “ASM Disk Group Creation” dialog.

When the ASM Disk Group Creation process is finished, you will be returned to the “ASM Disk Groups” window with two disk groups created and selected.

End of ASM Instance creation Click the Finish button to complete the ASM instance creation.

The Oracle ASM instance process should now be running on all nodes in the RAC cluster.

$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE soladb1
ora....B1.lsnr application ONLINE ONLINE soladb1
ora....db1.gsd application ONLINE ONLINE soladb1
ora....db1.ons application ONLINE ONLINE soladb1
ora....db1.vip application ONLINE ONLINE soladb1
ora....SM2.asm application ONLINE ONLINE soladb2
ora....B2.lsnr application ONLINE ONLINE soladb2
ora....db2.gsd application ONLINE ONLINE soladb2
ora....db2.ons application ONLINE ONLINE soladb2
ora....db2.vip application ONLINE ONLINE soladb2

The last step is to create Oracle 10g Database using dbca.