Thursday, November 11, 2010

Recovering a System to a Different Machine Using ufsrestore and the Solaris 9 or 10 OS

Here is a procedure for recovering a failed server to another server on a different platform. The failed server runs the Solaris 9 or 10 Operating System for SPARC platforms with Solaris Volume Manager. The procedure could be adapted for a system that runs the Solaris OS for x86 platforms.




Scenario: A couple of servers share one tape drive connected to server Tapehost. The servers are backed up to tape using ufsdump. One old server, Myhost, which runs Solaris Volume Manager, fails, and you want to restore Myhost to a new server on a different platform.



Part A: Restore From Remote Tape to the New Machine

If there is more than one ufsdump image on a tape, record which image corresponds to which file system backup of Myhost as soon as each backup completes.



Here, I assume that the root file system's full backup of Myhost is the third image on the tape.



1. Position the tape on Tapehost (10.1.1.47) at the root file system's full backup image of Myhost. (mt fsf n skips forward past n end-of-file marks from the current tape position, so verify that the count matches where the image actually sits on the tape.)



root@Tapehost# mt -f /dev/rmt/0n fsf 3

2. On Myhost (10.1.1.46), boot into single-user mode from a CD-ROM of the same OS version:



ok boot cdrom -s

3. Enable the network interface, for example, bge0:



# ifconfig bge0 10.1.1.46 up

4. Using the format command, prepare partitions for the file systems. The basic procedure is to run format, select the disk, define the partitions, and label the disk.



5. Create a new root file system on a partition, for example, /dev/rdsk/c1t0d0s0, and mount it on /mnt:



# newfs /dev/rdsk/c1t0d0s0

# mount /dev/dsk/c1t0d0s0 /mnt

6. Restore the full backup of the root file system from the remote tape. (ufsrestore reaches the remote drive through the rmt protocol over rsh, so Tapehost must typically allow root rsh access from Myhost, for example via an entry in its /.rhosts file.)



# cd /mnt

# ufsrestore rf 10.1.1.47:/dev/rmt/0n

7. To restore an incremental backup as well, reposition the remote tape and run the ufsrestore command again. After restoring, remove the restoresymtable file:



# rm restoresymtable

8. Install the boot block on the root disk:



# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0

9. Unmount the root file system:



# cd /

# umount /mnt

10. Repeat steps 1, 5, 6, 7, and 9 to restore other file systems.



11. Mount the root file system, /dev/dsk/c1t0d0s0, on /mnt, and edit /mnt/etc/vfstab so that each mount point maps to the correct partition.



For example, change the following line from this:



/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -

To this:



/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -
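If only one or two lines need changing, editing vfstab by hand is simplest, but the same substitution can be scripted with sed. A minimal sketch, run here against a sample line rather than the real /mnt/etc/vfstab; the metadevice d0 and slice c1t0d0s0 are the example names from above, so adjust them to your layout:

```shell
# Sample vfstab root entry that still points at the old metadevice
# (on the real system, feed /mnt/etc/vfstab to sed instead).
vfstab_line='/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -'

# Map the metadevice to the physical slice in both the block-device
# and raw-device columns.
fixed=$(printf '%s\n' "$vfstab_line" |
  sed -e 's|/dev/md/dsk/d0|/dev/dsk/c1t0d0s0|' \
      -e 's|/dev/md/rdsk/d0|/dev/rdsk/c1t0d0s0|')

printf '%s\n' "$fixed"
# -> /dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -
```

Inspect the rewritten file before moving it into place; a wrong root entry in vfstab leaves the system unbootable.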

Part B: Remove Solaris Volume Manager Information

Use the procedure below to remove the Solaris Volume Manager information.



Note: Another way to clear out Solaris Volume Manager is to reboot into single-user mode and use metaclear and metadb -d. With the Solaris 10 OS, however, the mdmonitor service will complain when the system first reboots; the complaints stop once the Solaris Volume Manager information is cleared out.



1. If Myhost had a mirrored root file system, there is an entry similar to rootdev:/pseudo/md@0:0,0,blk in the /etc/system file. After performing the procedure in Part A, remove this entry from /mnt/etc/system. Do not just comment it out.
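Deleting the line can also be done non-interactively with sed. A minimal sketch, shown here against a throwaway sample file rather than the real /mnt/etc/system; the "Begin/End MDD root info" comment lines are the pattern Solaris Volume Manager writes around its rootdev entry:

```shell
# Build a sample /etc/system fragment containing the SVM root entry.
sysfile=$(mktemp)
cat > "$sysfile" <<'EOF'
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
EOF

# Delete the rootdev line entirely -- do not just comment it out.
sed '/^rootdev:\/pseudo\/md@/d' "$sysfile" > "$sysfile.new"

remaining=$(grep -c '^rootdev:' "$sysfile.new" || true)
echo "rootdev lines left: $remaining"
rm -f "$sysfile" "$sysfile.new"
```

On the restored system the target file is /mnt/etc/system, and the edited copy must replace the original before rebooting.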



2. All of the Solaris Volume Manager state is stored in three files: /kernel/drv/md.conf, /etc/lvm/mddb.cf, and /etc/lvm/md.cf. To clear out Solaris Volume Manager, overwrite these files with copies from a system that does not run Solaris Volume Manager.



Note: If you intend to configure the metadevices the same way they were, the configuration is recorded in the /etc/lvm/md.cf file, so take notes before this file is overwritten.



# cp /kernel/drv/md.conf /mnt/kernel/drv/md.conf

# cp /etc/lvm/mddb.cf /mnt/etc/lvm/mddb.cf

# cp /etc/lvm/md.cf /mnt/etc/lvm/md.cf
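The three copies can be wrapped in a small helper so that none of the files is missed. A minimal sketch; clear_svm_info is a hypothetical name, and the source and destination roots are parameters so the helper can be exercised on any pair of directory trees:

```shell
# Copy clean (non-SVM) versions of the three SVM state files from a
# reference root tree ($1) into the restored root tree ($2).
clear_svm_info() {
  src=$1 dst=$2
  for f in kernel/drv/md.conf etc/lvm/mddb.cf etc/lvm/md.cf; do
    cp "$src/$f" "$dst/$f" || return 1
  done
}

# Exercise the helper on two throwaway directory trees.
src=$(mktemp -d); dst=$(mktemp -d)
for d in kernel/drv etc/lvm; do mkdir -p "$src/$d" "$dst/$d"; done
for f in kernel/drv/md.conf etc/lvm/mddb.cf etc/lvm/md.cf; do
  echo clean > "$src/$f"
  echo has-svm-state > "$dst/$f"
done
clear_svm_info "$src" "$dst"
result=$(cat "$dst/etc/lvm/md.cf")
rm -rf "$src" "$dst"
```

On the CD-booted system the call would be `clear_svm_info / /mnt`, with the clean copies taken from whichever non-SVM root is convenient, matching the three cp commands above.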



Part C: Reconfigure /devices, /dev, and /etc/path_to_inst

1. Because the new server has different hardware than the old server, the device tree changes too. Remove the restored contents of /dev and /devices and rebuild /etc/path_to_inst to reflect the new hardware:



# rm -r /mnt/dev/*

# rm -r /mnt/devices/*

# devfsadm -C -r /mnt -p /mnt/etc/path_to_inst

2. Reboot the system from the root disk:



# init 6

If it does not boot from the root disk, use setenv boot-device at the OpenBoot PROM prompt, or the eeprom boot-device command from the running OS, to set the root disk as the boot device.
