Monday, August 24, 2009

Solaris Patches on Veritas Encapsulated Root disk

How to apply Solaris patches on a VxVM-encapsulated root disk.
--------------------------------------------------------------------------------
Details:
Note: the following device names are used throughout this procedure.

a) Root disk - c1t0d0s2
b) Root mirror disk - c1t1d0s2



Procedure :

1) Make sure that you have a full system backup, either on tape or on another disk. VRTSexplorer can also be run before installing the patch.
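For illustration only (the tape device and the VRTSexplorer install path below are assumptions and may differ on your system), the pre-patch backup and data collection could look like this:

# vxprint -htg rootdg > /var/tmp/vxprint-rootdg.before   (save the VxVM configuration for reference)
# ufsdump 0uf /dev/rmt/0n /                               (repeat for /usr, /var, /opt, etc.)
# /opt/VRTSspt/VRTSexplorer/VRTSexplorer                  (collects system/VxVM data for Veritas support)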

2) If the root disk is mirrored, detach and remove the mirror plexes from the 'rootdg' disk group using the following steps. In the commands below, c1t1d0 is the mirror disk.



#vxplex -g rootdg dis var-02

#vxplex -g rootdg dis usr-02

#vxplex -g rootdg dis swapvol-02

#vxplex -g rootdg dis rootvol-02

#vxplex -g rootdg dis opt-02



Then remove the detached plexes:


#vxedit -g rootdg rm var-02

#vxedit -g rootdg rm usr-02

#vxedit -g rootdg rm swapvol-02

#vxedit -g rootdg rm rootvol-02

#vxedit -g rootdg rm opt-02


Verify with "vxprint -htg rootdg" that the root disk is no longer mirrored, and that the root-mirror disk's plexes and subdisks have been removed from the configuration.
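As a minimal sketch of that check (using the plex names from the commands above):

# vxprint -htg rootdg        (only the original rootdisk plexes should remain)
# vxprint -g rootdg -p       (list plexes; no *-02 plexes should be reported)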


3) Manually unencapsulate the root device and boot it on standard Solaris slices.


Boot the system from the Solaris CD-ROM into single-user mode (from the OBP prompt: ok boot cdrom -s)

a) mount /dev/dsk/c1t0d0s0 /a (rootdisk)
b) cp -p /a/etc/vfstab.prevm /a/etc/vfstab

Make a backup of /a/etc/system and then edit it:
c) cp -p /a/etc/system /a/etc/system.orig
d) vi /a/etc/system and remove the two VxVM lines "rootdev:/pseudo/vxio@0:0" & "set vxio:vol_rootdev_is_volume=1"


Note: You may also comment out (put a "*" in front of) the VxFS-related entries in the "/etc/system" file.
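For illustration, the VxVM-related lines in /a/etc/system end up removed or commented out roughly like this (the exact forceload entries vary from system to system, so treat this only as a sketch):

* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
* forceload: drv/vxdmp
* forceload: drv/vxio
* forceload: drv/vxspec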

e) Now make sure Volume Manager does not start on the next boot.

# touch /a/etc/vx/reconfig.d/state.d/install-db

f) # umount /a

g) Reboot. (Make sure the system boots without errors and that all OS partitions such as /opt, /var and /usr mount as listed in /etc/vfstab.)
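A quick post-reboot sanity check could be:

# df -k                        (all OS file systems mounted from plain /dev/dsk/c1t0d0sX slices)
# grep c1t0d0 /etc/vfstab      (entries reference /dev/dsk/... rather than /dev/vx/dsk/...)
# swap -l                      (swap is back on the physical slice)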




4) Install/upgrade the Solaris patches as per the procedure given by Sun.
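As a hedged sketch only (patch IDs, cluster name and spool path are placeholders; always follow the README shipped with the patch or cluster), a typical installation in single-user mode might look like:

# init S                              (or: ok boot -s)
# cd /var/spool/patch/<patch_cluster>
# ./install_cluster                   (for a Recommended patch cluster)
# patchadd <patch_id>                 (for an individual patch)
# init 6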



5) After successful installation, and once the system has rebooted into multi-user mode on the plain root partitions/slices, undo the changes made in step #3 so that Volume Manager can start again:



#rm /etc/vx/reconfig.d/state.d/install-db

# vxiod set 10

# vxconfigd -m disable

# vxdctl enable
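You can then confirm that Volume Manager is back, for example:

# vxdctl mode                 (should report: mode: enabled)
# vxdisk list                 (the disks should be visible to VxVM again)
# vxprint -htg rootdg         (rootdg volumes should be reported, if rootdg still exists)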


6) Note that you can now re-encapsulate the boot drive. The prtvtoc output for the root disk must be verified to make sure that the old VxVM public/private region slices (tags 14 and 15) no longer exist. You may also have to get rid of the old rootdg by using the following:


# vxdg destroy rootdg
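A quick way to confirm that no VxVM region slices remain on the root disk (the awk filter just looks for VTOC tags 14 and 15) might be:

# prtvtoc /dev/rdsk/c1t0d0s2
# prtvtoc /dev/rdsk/c1t0d0s2 | awk '$2 == 14 || $2 == 15'     (should print nothing)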



7) Encapsulate the root disk (c1t0d0) using vxdiskadm, option 2. The system will reboot twice during this process.

8) Once the system is up, check "vxprint -htg rootdg" (you should see the volumes for partitions such as rootvol, opt, var, etc.).

9) #/etc/vx/bin/vxdisksetup -i c1t1d0 format=sliced (initialize the disk that will be used as the root mirror)

10) Run vxdiskadm, option 6, to mirror the volumes on the disk.
(When prompted, enter the source disk (root disk) first and then the destination (root mirror).)




11) Once the mirror is complete, verify that the server can be rebooted from either device.
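One way to test this from the OBP is to define boot aliases for both disks; the device paths below are placeholders, so look up the real paths first (for example with ls -l /dev/dsk/c1t0d0s0 and /dev/dsk/c1t1d0s0). Note that VxVM usually creates vx-<diskname> boot aliases for you when the root mirror is set up.

ok nvalias rootdisk /pci@1f,4000/scsi@3/disk@0,0:a
ok nvalias rootmirror /pci@1f,4000/scsi@3/disk@1,0:a
ok boot rootdisk
ok boot rootmirror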

Reference: http://support.veritas.com/docs/317858

Friday, August 21, 2009

Amanda Backup - Restore / Recover

Some Useful Commands

amrecover - recover individual files and directories inside a disk set
amrestore - restore an entire disk set



Status of the tape library:
#/usr/local/sbin/mtx -f /dev/changer status

To list the backups recorded for the server bsma02 and a given file system (disk set):
# amadmin Full find bsma02 /opt/bmc/Datastore/oradata/.zfs/snapshot/today/

To check the status of the most recent backup run:
#/opt/sfw/sbin/amstatus Full

Move a tape from one slot [10] to another slot [65]:
#/usr/local/sbin/mtx -f /dev/changer transfer 10 65

To label and enable the tape [ILO612L1] in slot [10]:
# /opt/sfw/sbin/amlabel Full ILO612L1 slot 10
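Two more mtx operations that often come in handy (the drive is addressed as data transfer element 0 here; adjust for your library):

#/usr/local/sbin/mtx -f /dev/changer load 10 0       (load the tape from slot 10 into drive 0)
#/usr/local/sbin/mtx -f /dev/changer unload 10 0     (unload the tape from drive 0 back to slot 10)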

Recover using amrecover utility

amrecover is Amanda's interactive utility for recovering individual files and directories.

Valid amrecover utility sub commands are:

add path1 ... - add to extraction list (shell wildcards)
addx path1 ... - add to extraction list (regular expressions)
cd directory - change cwd on virtual file system (shell wildcards)
cdx directory - change cwd on virtual file system (regular expressions)
clear - clear extraction list
delete path1 ... - delete from extraction list (shell wildcards)
deletex path1 ... - delete from extraction list (regular expressions)
extract - extract selected files from tapes
exit
help
history - show dump history of disk
list [filename] - show extraction list, optionally writing to file
lcd directory - change cwd on local file system
ls - list directory on virtual file system
lpwd - show cwd on local file system
mode - show the method used to extract SMB shares
pwd - show cwd on virtual file system
quit
listdisk [diskdevice] - list disks
setdate {YYYY-MM-DD|--MM-DD|---DD} - set date of look
setdisk diskname [mountpoint] - select disk on dump host
sethost host - select dump host
settape [host:][device|default] - select tape server and/or device
setmode smb|tar - select the method used to extract SMB shares





Steps to be followed:

Note: Compare these steps with the example given below; it will make them easier to follow.

1. Run the amrecover command on the host where the restore has to be performed.
It is best to create a directory called 'restore' and recover into it, so that no existing files are overwritten.
2. You can set this restore directory with the "lcd directory" command at the amrecover prompt and verify it with the "lpwd" command.
3. Set the host whose data is to be recovered with the "sethost hostname" command.
4. List the disk sets configured for that host with the "listdisk" command.
5. Select the disk set with the "setdisk diskname" command.
6. Using the add and delete commands, build the list of files and directories that need to be recovered.
7. After adding the files and directories, use the "list" command to see which tape is required for the restore.
Note the tape needed to recover the data. If it is not in the tape library, bring it from offsite and load it.
8. Extract the data with the "extract" command and quit.

Example:

bash-2.05# /opt/sfw/sbin/amrecover Full
AMRECOVER Version 2.5.0p2. Contacting server on amanda ...
220 amanda AMANDA index server (2.5.0p2) ready.
200 Access OK
Setting restore date to today (2009-08-20)
200 Working date set to 2009-08-20.
Scanning /opt/amanda/Full...
Scanning /data0/amanda/Full...
Scanning /data1/amanda/Full...
Scanning /data2/amanda/Full...
200 Config set to Full.
200 Dump host set to amanda.
Trying disk / ...
$CWD '/' is on disk '/' mounted at '/'.
200 Disk set to /.
/
amrecover> sethost bsma02
200 Dump host set to bsma02.
amrecover> setdate ---05
200 Working date set to 2009-08-05.

amrecover> listdisk
200- List of disk for host bsma02
201- /
201- /usr
201- /var
201- /opt
201- /export/home
201- /opt/bmc/Datastore/oradata/.zfs/snapshot/today/
200 List of disk for host bsma02
amrecover> setdisk /opt/bmc/Datastore/oradata/.zfs/snapshot/today/
200 Disk set to /opt/bmc/Datastore/oradata/.zfs/snapshot/today/.
amrecover> ls
2009-08-04 rod/
2009-08-04 BMCPDS/
2009-08-04 .
amrecover> cd BMCPDS
/opt/bmc/Datastore/oradata/.zfs/snapshot/today//BMCPDS
amrecover> ls
2009-08-04 users01.dbf
2009-08-04 undotbs01.dbf
2009-08-04 tools01.dbf
2009-08-04 temp01.dbf
2009-08-04 system01.dbf
2009-08-04 sysaux01.dbf
2009-08-04 redo3b.log
2009-08-04 redo3a.log
2009-08-04 redo2b.log
2009-08-04 redo2a.log
2009-08-04 redo1b.log
2009-08-04 redo1a.log
2009-08-04 TESTSPACE01.dbf
2009-08-04 PV02_01.dbf
2009-08-04 PV01_01.dbf
2009-08-04 PSDP01_01.dbf
2009-08-04 PE01_02.dbf
2009-08-04 PE01_01.dbf
2009-08-04 NSDP01_01.dbf
2009-08-04 INDEX04_01.dbf
2009-08-04 INDEX03_01.dbf
2009-08-04 INDEX02_01.dbf
2009-08-04 INDEX01_01.dbf
2009-08-04 DATA04_01.dbf
2009-08-04 DATA03_01.dbf
2009-08-04 DATA02_01.dbf
2009-08-04 DATA01_01.dbf
2009-08-04 CONTROL2.CTL
2009-08-04 CONTROL1.CTL
2009-08-04 ARTMPF.dbf
2009-08-04 ARSYS.dbf
2009-08-04 .
amrecover> add users01.dbf
Added /BMCPDS/users01.dbf
amrecover> list
TAPE ILO617L1:65 LEVEL 0 DATE 2009-08-04
/BMCPDS/users01.dbf
amrecover> extract

Extracting files using tape drive chg-zd-mtx on host amanda.
The following tapes are needed: ILO617L1

Restoring files into directory /bsma02/opt/bmc/Datastore/oradata/restore
Continue [?/Y/n]? y

Extracting files using tape drive chg-zd-mtx on host amanda.
Load tape ILO617L1 now
Continue [?/Y/n/s/t]? y
. users01.dbf
amrecover>quit
200 Good bye.
bash-2.05#



Restore using amrestore command

You cannot recover a whole disk set using the amrecover utility; it only recovers directories and files inside a disk set.

To restore an entire disk set, use amrestore.

Use the amadmin command (run by the Amanda user) to find out the tape, backup level and file information to restore, as below.

$ /opt/sfw/sbin/amadmin Full find bsma02 /var
Warning: no log files found for tape ILO626L1 written 2009-08-21
Scanning /opt/amanda/Full...
20090821000900: found Amanda directory.
Scanning /data0/amanda/Full...
20090821000900: found Amanda directory.
Scanning /data1/amanda/Full...
20090821000900: found Amanda directory.
Scanning /data2/amanda/Full...
20090821000900: found Amanda directory.
Warning: no log files found for tape ILO626L1 written 2009-08-21
Scanning /opt/amanda/Full...
20090821000900: found Amanda directory.
Scanning /data0/amanda/Full...
20090821000900: found Amanda directory.
Scanning /data1/amanda/Full...
20090821000900: found Amanda directory.
Scanning /data2/amanda/Full...
20090821000900: found Amanda directory.

date host disk lv tape or file file part status
2009-01-02 bsma02 /var 0 ILO624L1 16 -- OK
2009-02-04 bsma02 /var 1 ILO606L1 18 -- OK
2009-07-25 bsma02 /var 1 ILO600L1 7 -- OK
2009-07-26 bsma02 /var 1 ILO600L1 31 -- OK
2009-07-27 bsma02 /var 0 ILO600L1 118 -- OK
2009-07-29 bsma02 /var 1 ILO622L1 2 -- OK
2009-07-31 bsma02 /var 1 ILO602L1 10 -- OK
2009-08-01 bsma02 /var 1 ILO602L1 123 -- OK
2009-08-02 bsma02 /var 0 ILO607L1 11 -- OK
2009-08-03 bsma02 /var 1 ILO603L1 57 -- OK
2009-08-04 bsma02 /var 1 ILO617L1 10 -- OK
2009-08-05 bsma02 /var 1 ILO615L1 4 -- OK
2009-08-07 bsma02 /var 1 ILO625L1 40 -- OK
2009-08-08 bsma02 /var 0 ILO620L1 16 -- OK
2009-08-09 bsma02 /var 1 ILO613L1 11 -- OK
2009-08-10 bsma02 /var 1 ILO612L1 11 -- OK
2009-08-11 bsma02 /var 1 ILO611L1 4 -- OK
2009-08-12 bsma02 /var 1 ILO616L1 4 -- OK
2009-08-13 bsma02 /var 1 ILO601L1 2 -- OK
2009-08-14 bsma02 /var 1 ILO619L1 59 -- OK
2009-08-15 bsma02 /var 0 ILO623L1 42 -- OK
2009-08-16 bsma02 /var 1 ILO621L1 10 -- OK
2009-08-17 bsma02 /var 1 ILO605L1 5 -- OK
2009-08-18 bsma02 /var 1 ILO618L1 5 -- OK
2009-08-19 bsma02 /var 1 ILO609L1 13 -- OK
2009-08-20 bsma02 /var 1 ILO608L1 4 -- OK
$

Restore the disk set using the amrestore command.
If the data belongs to a remote host, you can mount the target directory from that host over NFS and restore into it.

# /opt/sfw/sbin/amrestore -f 42 ILO623L1 bsma02 /var

The above command would create the file bsma02._var.20090815.0 in the current directory.

Using the tar command, extract the data into the destination directory:

# tar xvBf - < bsma02._var.20090815.0
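If you prefer to skip the intermediate dump file, amrestore can also pipe the image straight into tar with its -p option. This is only a sketch: it assumes the dump was made with GNU tar, that the required tape is already loaded, and it reuses the file number and label from the example above.

# cd /restore
# /opt/sfw/sbin/amrestore -p -f 42 ILO623L1 bsma02 /var | tar xvBf -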

Your data is restored now!!

uname & showrev shows different patch levels - Why?


The uname(1) command queries the running kernel to find the patch revision. The showrev(1M) command reads through the patch logs. What happens is that the patch logs are updated with the new patch, but the in-memory image of the kernel is not updated.


When the system is booted, the revision string that uname uses is loaded with the unix module, which can be found in the various platform directories depending on the architecture.

For Example:

32-bit SPARC - /platform/sun4u/kernel/unix
64-bit SPARC - /platform/sun4u/kernel/sparcv9/unix
Intel - /platform/i86pc/kernel/unix

The following identifies the architecture for your machine:

$ uname -m

VERIFY THE FOLLOWING

1. Was the system rebooted after installing the patch, or patches?

When you reboot the machine after applying the patch, or patches, be sure to run the init 6 command. Do not simply exit single-user mode and proceed up to multi-user mode. When the run level is changed from single-user to multi-user in that manner, the kernel module is not reloaded; only the run state is changed. The init 6 command performs a complete reboot and reloads the kernel.

2. Does this machine use Volume Manager or Solstice Disksuite[TM]?

If so, your issue could be caused by a failing or out-of-sync (stale) mirror.

Physically disconnect one side of the mirror. The volume management software will handle the failure, and this stops reads from alternating onto a potentially bad mirror.

3. Review what the output from the following command produces:

$ strings /platform/`uname -m`/kernel/sparcv9/unix | grep Generic

The command should show a result that is similar to the following:

Generic_117350-05

4. Check what the kernel binary shows:

$ /usr/ccs/bin/what /platform/`uname -m`/kernel/sparcv9/unix

The kernel binary should show a result similar to the following:

SunOS 5.8 Generic 117350-05 Jun 2004
5. Review what the output from the following command produces:

Note: For the Solaris[TM] 8 Operating System (Solaris 8 OS) and above, use mdb. For earlier OS versions, use adb. Be sure to run this command as root:

# echo "$ utsname:
utsname: sys SunOS
utsname+0x101: node maniac
utsname+0x202: release 5.8
utsname+0x303: version Generic_117350-05
utsname+0x404: machine sun4u
6. Was the patch or patch cluster installed in single user mode?

If not, then the system could be corrupted and it might have to be restored from backup or reinstalled.

As a last resort, before having to re-install the OS, try backing out the patch and re-applying it while in single user mode.

If for some reason the patch does not back out, try re-adding the patch with the -u option:

# patchadd -u <patch_id>

Re-adding the patch with the -u option forces it to be re-applied, overwriting the copy already on the system even though the system believes the patch has already been installed, and this might also solve the problem.
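For reference, the back-out-and-reapply path mentioned above could look roughly like this (the patch ID and spool directory are placeholders):

# init S                                 (or boot -s from the OBP)
# patchrm <patch_id>                     (back out the suspect kernel patch)
# patchadd /var/spool/patch/<patch_id>   (re-apply it while still in single-user mode)
# init 6                                 (full reboot so the patched kernel is loaded)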

This problem has also been seen with Solaris[TM] 2.6 patched with KJP 105181-38 running on an E10000 domain. The E10000 kernel binary in the patch contains the wrong version string:
# strings ./SUNWcar.u1/reloc/platform/SUNW,Ultra-Enterprise-10000/kernel/unix | grep 105181
Generic_105181-37

Friday, August 7, 2009

Saving and Restoring ZFS Data

Saving and Restoring ZFS Data
The zfs send command creates a stream representation of a snapshot that is written to standard output. By default, a full stream is generated. You can redirect the output to a file or to a different system. The zfs receive command creates a snapshot whose contents are specified in the stream that is provided on standard input. If a full stream is received, a new file system is created as well. You can save ZFS snapshot data and restore ZFS snapshot data and file systems with these commands. See the examples in the next section.

The following solutions for saving ZFS data are provided:

Saving ZFS snapshots and rolling back snapshots, if necessary.

Saving full and incremental copies of ZFS snapshots and restoring the snapshots and file systems, if necessary.

Remotely replicating ZFS file systems by saving and restoring ZFS snapshots and file systems.

Saving ZFS data with archive utilities such as tar and cpio or third-party backup products.

Consider the following when choosing a solution for saving ZFS data:

File system snapshots and rolling back snapshots – Use the zfs snapshot and zfs rollback commands if you want to easily create a copy of a file system and revert back to a previous file system version, if necessary. For example, if you want to restore a file or files from a previous version of a file system, you could use this solution.

For more information about creating and rolling back to a snapshot, see ZFS Snapshots.

Saving snapshots – Use the zfs send and zfs receive commands to save and restore a ZFS snapshot. You can save incremental changes between snapshots, but you cannot restore files individually. You must restore the entire file system snapshot.

Remote replication – Use the zfs send and zfs receive commands when you want to copy a file system from one system to another. This process is different from a traditional volume management product that might mirror devices across a WAN. No special configuration or hardware is required. The advantage of replicating a ZFS file system is that you can re-create a file system on a storage pool on another system, and specify different levels of configuration for the newly created pool, such as RAID-Z, but with identical file system data.

Saving ZFS Data With Other Backup Products
In addition to the zfs send and zfs receive commands, you can also use archive utilities, such as the tar and cpio commands, to save ZFS files. All of these utilities save and restore ZFS file attributes and ACLs. Check the appropriate options for both the tar and cpio commands.

For up-to-date information about issues with ZFS and third-party backup products, please see the Solaris Express release notes.

Saving a ZFS Snapshot
The simplest form of the zfs send command is to save a copy of a snapshot. For example:

# zfs send tank/dana@040706 > /dev/rmt/0

You can save incremental data by using the zfs send -i option. For example:

# zfs send -i tank/dana@040706 tank/dana@040806 > /dev/rmt/0

Note that the first argument is the earlier snapshot and the second argument is the later snapshot.

If you need to store many copies, you might consider compressing a ZFS snapshot stream representation with the gzip command. For example:

# zfs send pool/fs@snap | gzip > backupfile.gz
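To restore from such a compressed stream later, you would feed the decompressed stream back into zfs receive. A minimal sketch, assuming the target dataset pool/fs-restored does not yet exist:

# gzcat backupfile.gz | zfs receive pool/fs-restored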

Restoring a ZFS Snapshot
When you restore a file system snapshot, the file system is restored as well. The file system is unmounted and is inaccessible while it is being restored. In addition, the original file system to be restored must not exist while it is being restored. If a conflicting file system name exists, zfs rename can be used to rename the file system. For example:

# zfs send tank/gozer@040706 > /dev/rmt/0

.
.
.
# zfs receive tank/gozer2@today < /dev/rmt/0
# zfs rename tank/gozer tank/gozer.old
# zfs rename tank/gozer2 tank/gozer

You can use zfs recv as an alias for the zfs receive command.

When you restore an incremental file system snapshot, the most recent snapshot must first be rolled back. In addition, the destination file system must exist. In the following example, the previous incremental saved copy of tank/dana is restored.

# zfs rollback tank/dana@040706
cannot rollback to 'tank/dana@040706': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
tank/dana@now
# zfs rollback -r tank/dana@040706
# zfs recv tank/dana < /dev/rmt/0

During the incremental restore process, the file system is unmounted and cannot be accessed.

Remote Replication of ZFS Data
You can use the zfs send and zfs recv commands to remotely copy a snapshot stream representation from one system to another system. For example:

# zfs send tank/cindy@today | ssh newsys zfs recv sandbox/restfs@today

This command saves the tank/cindy@today snapshot data and restores it into the sandbox/restfs file system and also creates a restfs@today snapshot on the newsys system. In this example, the user has been configured to use ssh on the remote system.
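As a follow-on sketch (the snapshot name @tomorrow is an assumption), later incremental updates of the same remote copy can be sent with -i:

# zfs snapshot tank/cindy@tomorrow
# zfs send -i tank/cindy@today tank/cindy@tomorrow | ssh newsys zfs recv sandbox/restfs

Only the blocks that changed between the two snapshots travel over the wire.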


Courtesy: http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch06s03.html#gbimy

Backing up ZFS file systems

Original post.

This is one of the good things ZFS has brought us. Backing up a file system is a ubiquitous problem, even on your home PC, if you're wise and care about your data. As with many things in ZFS, due to the telescoping nature of this file system (in the words of ZFS's father, Jeff Bonwick), backing up is tightly connected to other ZFS concepts: in this case, snapshots and clones.

Snapshotting
ZFS lets the administrator take inexpensive snapshots of a mounted file system. Snapshots are just what their name implies: a photo of a ZFS file system at a given point in time. From that moment on, the file system from which the snapshot was generated and the snapshot itself begin to diverge, and the space required by the snapshot will roughly be the space occupied by the differences between the two. If you delete a 1 GB file from a snapshotted file system, for example, the space accounted for that file is charged to the snapshot, which obviously must keep track of it because the file existed when the snapshot was created. So far, so good (and easy). Creating a snapshot is also incredibly easy: provided that you have a role with the required privileges, you just issue the following command:


$ pfexec zfs snapshot zpool-name/filesystem-name@snapshot-name


Now you have a photo of the zpool-name/filesystem-name ZFS file system at a given point in time. You can check that it exists by issuing:


$ zfs list -t snapshot


which, at the moment, on my machine gives me:


$ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home/enrico@20081231 71.3M - 14.9G -
[...]


This means that the ZFS file system which hosts my home directory has been snapshotted and there's a snapshot named 20081231.

Cloning
Cloning is pretty much like snapshotting, with the difference that the result of the operation is another ZFS file system, mounted at another mount point, which can be used like any other file system. Like a snapshot, the clone and the originating file system will begin to diverge, and the differences will begin to occupy space in the clone. The official ZFS administration documentation has detailed and complete information about this topic.
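A minimal sketch, reusing the home directory file system and snapshot from the listing above (the clone name is just an example):

$ pfexec zfs clone rpool/export/home/enrico@20081231 rpool/export/home/enrico-clone
$ zfs list -r rpool/export/home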

Backing up
This isn't really what the documentation calls it: it just refers to ZFS send and receive operations. As we've seen, we have a means to snapshot a file system: there's no need to unmount the file system or to run the risk of getting an inconsistent set of data because a modification occurred during the operation. This alone is worth switching to ZFS for, in my opinion. Now there's more: a snapshot can be dumped (serialized) to a file with a simple command:


$ pfexec zfs send zpool-name/filesystem-name@snapshot-name > dump-file-name


This file contains the entire ZFS file system: the files and all the rest of the metadata. Everything. The good thing is that you can receive a ZFS file system just by doing:


$ pfexec zfs receive another-zpool-name/another-filesystem-name < dump-file-name


This operation creates another-filesystem-name on the pool another-zpool-name (it can even be the same zpool you generated the dump from), and a snapshot called snapshot-name will also be created. In the case of full dumps, the destination file system must not exist and will be created for you. Easy. A full backup with just two lines, a bit of patience and sufficient disk space.

There are the usual variations on the theme. You don't really need to store the dump in a file; you can just pipe send into receive and do it in one line, with no need for extra storage for the dump file:


# zfs send zpool-name/filesystem-name@snapshot-name | zfs receive another-zpool-name/another-filesystem-name


And if you want to send it to another machine, no problems at all:


# zfs send zpool-name/filesystem-name@snapshot-name | ssh anothermachine zfs receive another-zpool-name/another-filesystem-name


Incredibly simple. ZFS is really revolutionary.

Incremental backups
ZFS, obviously, lets you do incremental send and receive with the -i option, which lets you send only the differences between one snapshot and another. These differences are loaded and applied on the receiving side; in this case, obviously, the base snapshot must already exist there. You start with a full send and then go on with increments, as sketched below. It's the way I'm backing up our machines, and it's fast, economical and reliable. A great reason to switch to ZFS, let alone Solaris.
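A hedged sketch of that workflow, using the same placeholder names as the rest of this post (the backup host and backup pool are assumptions):

$ pfexec zfs snapshot zpool-name/filesystem-name@monday
$ pfexec zfs send zpool-name/filesystem-name@monday | ssh backuphost pfexec zfs receive backup-pool/filesystem-name
$ pfexec zfs snapshot zpool-name/filesystem-name@tuesday
$ pfexec zfs send -i zpool-name/filesystem-name@monday zpool-name/filesystem-name@tuesday | ssh backuphost pfexec zfs receive backup-pool/filesystem-name

The first two lines seed the remote copy with a full stream; each later pair transfers only what changed since the previous snapshot.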

Courtesy: http://thegreyblog.blogspot.com/2009/01/backing-up-zfs-file-systems.html