Wednesday, August 11, 2010

Migrating from UFS Root File System to a ZFS Root File System (Without Zones)

Okay, say I have a system with 2 disks.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@1f,0/pci@1/scsi@8/sd@0,0
       1. c1t1d0
          /pci@1f,0/pci@1/scsi@8/sd@1,0


Disk 0 is formatted with UFS and is the boot disk. I want to migrate the UFS root file system to a ZFS one (the zpool will be on disk 1).

A ZFS root boot environment can only be created on a pool consisting of disk slices (not whole disks).

So I partition disk 1 as shown below. This also means that the disk label must be SMI, not EFI.
partition> p
Current partition table (original):
Total disk cylinders available: 9770 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 9229       16.00GB    (9229/0/0) 33556644
  1 unassigned    wu       0                0         (0/0/0)          0
  2     backup    wm       0 - 9769       16.94GB    (9770/0/0) 35523720
  3 unassigned    wm       0                0         (0/0/0)          0
  4 unassigned    wm       0                0         (0/0/0)          0
  5 unassigned    wm       0                0         (0/0/0)          0
  6 unassigned    wm       0                0         (0/0/0)          0
  7 unassigned    wm       0                0         (0/0/0)          0
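
If the disk happens to carry an EFI label (for example because it was previously used as a whole-disk zpool member), it has to be relabeled with SMI first. A rough sketch using format in expert mode (the session is interactive, so treat this as an outline rather than an exact transcript):

# format -e c1t1d0
format> label
... choose the SMI label type when prompted ...
format> quit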


Then I create a zpool named pool-0.
# zpool create pool-0 c1t1d0s0

# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
pool-0   106K  15.6G    18K  /pool-0

# zpool list
NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool-0  15.9G   111K  15.9G    0%  ONLINE  -
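
Before going any further, it doesn't hurt to confirm that the pool is healthy (it should report ONLINE with no known data errors):

# zpool status pool-0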


The current boot environment (BE) is named ufsBE (I named it that with -c) and the new one, zfsBE, will be created (with -n).
Obviously, the zpool has to be created beforehand (-p specifies the ZFS pool in which the new BE is created).

# lucreate -c ufsBE -n zfsBE -p pool-0

Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device .
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating file system for in zone on .
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point .
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system .
Creating compare database for file system .
Creating compare database for file system .
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-0fc.mnt
updating /.alt.tmp.b-0fc.mnt/platform/sun4u/boot_archive
15+0 records in
15+0 records out
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
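
Besides lustatus (below), the lufslist command can show the file system configuration of a particular BE, e.g.:

# lufslist zfsBE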


Now check the status of the BEs.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -


Check the new ZFS file systems that have been created.
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
pool-0             4.06G  11.6G  92.5K  /pool-0
pool-0/ROOT        1.56G  11.6G    18K  /pool-0/ROOT
pool-0/ROOT/zfsBE  1.56G  11.6G  1.56G  /
pool-0/dump         512M  12.1G    16K  -
pool-0/swap        2.00G  13.6G    16K  -
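
lucreate created the dump and swap ZFS volumes with default sizes. If those don't suit you, the volumes can be resized later via the volsize property; a sketch using the dataset names from the listing above (the sizes here are just examples, and swap should not be resized while it is in use):

# zfs set volsize=1G pool-0/dump
# zfs set volsize=4G pool-0/swap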


Let's now activate the newly created ZFS BE.
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

/usr/sbin/luactivate: /etc/lu/DelayUpdate/: cannot create


Okay, this is a known issue. The fix follows.

For the tcsh shell, set the BOOT_MENU_FILE environment variable:
# setenv BOOT_MENU_FILE menu.lst
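
For sh/bash, the equivalent would be:

# BOOT_MENU_FILE=menu.lst
# export BOOT_MENU_FILE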


Try again:
# luactivate zfsBE

**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:

setenv boot-device /pci@1f,0/pci@1/scsi@8/disk@0,0:a

3. Boot to the original boot environment by typing:
boot
**********************************************************************
Modifying boot archive service
Activation of boot environment successful.


Reboot (but read the previous message first to know which command to use).

# init 6

Watch the console output during boot ...
Sun Fire V120 (UltraSPARC-IIe 648MHz), No Keyboard
OpenBoot 4.0, 1024 MB memory installed, Serial #53828024.
Ethernet address 0:3:ba:35:59:b8, Host ID: 833559b8.
Executing last command: boot
Boot device: /pci@1f,0/pci@1/scsi@8/disk@1,0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: counterstrike2
SUNW,eri0 : 100 Mbps full duplex link up
Configuring devices.
/dev/rdsk/c1t0d0s4 is clean
/dev/rdsk/c1t0d0s5 is clean
Reading ZFS config: done.
Mounting ZFS filesystems: (3/3)
NOTICE: setting nrnode to max value of 57843


The new status of the BEs follows.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
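
As an extra check, the pool's bootfs property should now point at the root dataset of the new BE (pool-0/ROOT/zfsBE); luactivate sets it during activation:

# zpool get bootfs pool-0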


Both the UFS and ZFS file systems are visible. That's it!

# df -h -F zfs
Filesystem           size   used  avail capacity  Mounted on
pool-0/ROOT/zfsBE     16G   1.6G    12G    12%    /
pool-0                16G    97K    12G     1%    /pool-0
pool-0/ROOT           16G    18K    12G     1%    /pool-0/ROOT
pool-0/.0             16G    50M    12G     1%    /pool-0/.0
pool-0/backup         16G    18K    12G     1%    /pool-0/backup

# df -h -F ufs
Filesystem           size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s4    2.0G   130M   1.8G     7%    /.0
/dev/dsk/c1t0d0s5    4.6G   1.6G   3.0G    35%    /backup
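
Once the ZFS BE has proven itself and the UFS fallback is no longer needed, the old BE can eventually be removed (this deletes the fallback, so don't rush it):

# ludelete ufsBE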


Mounting DVD ISO image

The lofiadm command administers lofi, the loopback file driver.
lofi allows a file to be associated with a block device, so the file can then be accessed through that block device.
This is useful when the file contains a filesystem image (such as a DVD ISO), because the block device can then be used with the normal system utilities for mounting, checking, or repairing filesystems.


Example

Download sol-10-u6-ga1-sparc-dvd.iso to /tmp, then:
# lofiadm -a /tmp/sol-10-u6-ga1-sparc-dvd.iso /dev/lofi/1    (associate the device with the file)
# mount -F hsfs -o ro /dev/lofi/1 /mnt                       (mount the device on /mnt)
# cd /mnt/Solaris_10/Tools/                                  (go to the desired directory)
# ./setup_install_server /export/jumpstart5.10u6             (do what you need to do, e.g. set up a JumpStart install server)
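
When you are done with the image, unmount it and remove the lofi association:

# umount /mnt
# lofiadm -d /dev/lofi/1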