Saturday, June 1, 2013
Unix well known port numbers
Port Number | Description |
---|---|
1 | TCP Port Service Multiplexer (TCPMUX) |
5 | Remote Job Entry (RJE) |
7 | ECHO |
18 | Message Send Protocol (MSP) |
20 | FTP -- Data |
21 | FTP -- Control |
22 | SSH Remote Login Protocol |
23 | Telnet |
25 | Simple Mail Transfer Protocol (SMTP) |
29 | MSG ICP |
37 | Time |
42 | Host Name Server (Nameserv) |
43 | WhoIs |
49 | Login Host Protocol (Login) |
53 | Domain Name System (DNS) |
69 | Trivial File Transfer Protocol (TFTP) |
70 | Gopher Services |
79 | Finger |
80 | HTTP |
103 | X.400 Standard |
108 | SNA Gateway Access Server |
109 | POP2 |
110 | POP3 |
115 | Simple File Transfer Protocol (SFTP) |
118 | SQL Services |
119 | Network News Transfer Protocol (NNTP) |
137 | NetBIOS Name Service |
138 | NetBIOS Datagram Service |
139 | NetBIOS Session Service |
143 | Internet Message Access Protocol (IMAP) |
150 | SQL-NET |
156 | SQL Server |
161 | SNMP |
179 | Border Gateway Protocol (BGP) |
190 | Gateway Access Control Protocol (GACP) |
194 | Internet Relay Chat (IRC) |
197 | Directory Location Service (DLS) |
389 | Lightweight Directory Access Protocol (LDAP) |
396 | Novell Netware over IP |
443 | HTTPS |
444 | Simple Network Paging Protocol (SNPP) |
445 | Microsoft-DS |
458 | Apple QuickTime |
546 | DHCPv6 Client |
547 | DHCPv6 Server |
563 | SNEWS |
569 | MSN |
1080 | Socks |
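These well-known assignments also live in /etc/services on a Unix system; a quick way to look one up (a sketch):
# grep -w '22/tcp' /etc/services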
Friday, November 4, 2011
Solaris 10 Zones and Networking -- Common Considerations
By stw on Feb 24, 2010
As often happens, a customer question resulted in this write-up. The customer had to quickly decide how to deploy a large number of zones on an M8000. They would be configuring up to twelve separate links for the different networks, and double that for IPMP. I wrote up the following. Thanks to Penny Cotten, Jim Eggers, Gordon Lythgoe, Peter Memishian, and Erik Nordmark for the feedback as I was preparing this. Also, you may see some of this in future documentation.
Definitions
Datalink: An interface at Layer 2 of the OSI protocol stack, which is represented in a system as a STREAMS DLPI (v2) interface. Such an interface can be plumbed under protocol stacks such as TCP/IP. In the context of Solaris 10 Zones, datalinks are physical interfaces (e.g. e1000g0, bge1), aggregations (aggr3), or VLAN-tagged interfaces (e1000g111000 (VLAN tag 111 on e1000g0), bge111001, aggr111003). A datalink may also be referred to as a physical interface, such as when referring to a Network Interface Card (NIC). The datalink is the 'physical' property configured with the zone configuration tool zonecfg(1M).
Non-global Zone: A non-global zone is any zone, whether native or branded, that is configured, installed, and managed using the zonecfg(1M) and zoneadm(1M) commands in Solaris 10. A branded zone may be either Solaris 8 or Solaris 9.
Zone network configuration: shared versus exclusive IP Instances
Since Solaris 10 8/07, zone configurations can be either in the default shared IP Instance or exclusive IP Instance configuration.
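As a quick sketch of where this choice is made (the zone name and NIC here are made up for illustration), the IP type and datalink are set with zonecfg(1M):
# zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=e1000g1
zonecfg:myzone:net> end
zonecfg:myzone> commit
For the default shared configuration, omit the ip-type setting and give the net resource an address as well (e.g. set address=192.0.2.10/24).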
When configured as shared, zone networking includes the following characteristics.
All datalink and IP, TCP, UDP, SCTP, IPsec, etc. configuration is done in the global zone.
All zones share the network configuration settings, including datalink, IP, TCP, UDP, etc. This includes ndd(1M) settings.
All IP addresses, netmasks, and routes are set by the global zone and can not be altered in a non-global zone.
Non-global zones can not use DHCP (as either client or server). There is a work-around that may allow a zone to be a DHCP server.
By default a privileged user in a non-global zone can not put a datalink into promiscuous mode, and thus can not run things like snoop(1M). Changing this requires adding the priv_net_raw privilege to the zone from the global zone, and also requires identifying which interface(s) to allow promiscuous mode on via the 'match' zonecfg parameter. Warning: This allows the non-global zone to send arbitrary packets on those interfaces.
IPMP configuration is managed in the global zone and applies to all zones using the datalinks in the IPMP group. A non-global zone configured with one datalink in an IPMP group will use all the datalinks in that group for fail-over. A non-global zone can use multiple IPMP groups, but must be configured with only one datalink from each group.
Only default routes apply to the non-global zones, as determined by the IP address(es) assigned to the zone. Non-default static routes are not supported to direct traffic leaving a non-global zone.
Multiple zones can share a datalink.
When configured as exclusive, zone networking includes the following characteristics.
All network configuration can be done within the non-global zone (and can also be done indirectly from the global zone, via zlogin(1) or by editing the files in the non-global zone's root file system).
IP and above configurations can not be seen directly within the global zone (e.g. running ifconfig(1M) in the global zone will not show the details of a non-global zone).
The non-global zone's interface(s) can be configured via DHCP, and the zone can be a DHCP server.
A privileged user in the non-global zone can fully manipulate IP address, netmask, routes, ndd variables, logical interfaces, ARP cache, IPsec policy and keys, IP Filter, etc.
A privileged user in the non-global zone can put the assigned interface(s) into promiscuous mode (e.g. can run snoop).
The non-global zone can have unique IPsec properties.
IPMP must be managed within the non-global zone.
A datalink can only be used by a single running zone at any one time.
Commands such as snoop(1M) and dladm(1M) can be used on datalinks in use by running zones.
It is possible to mix shared and exclusive IP zones on a system. All shared zones will be sharing the configuration and run time data (routes, ARP, IPsec) of the global zone. Each exclusive zone will have its own configuration and run time data, which can not be shared with the global zone or any other exclusive zones.
IP Multipathing (IPMP)
By default, all IPMP configurations are managed in the global zone and affect all non-global zones whose network configuration includes even one datalink (the net->physical property in zonecfg(1M)) in the IPMP group. A zone configured with datalinks that are part of an IPMP group must configure each IP address on only one of the datalinks in that group. It is not necessary to configure an IP address on each datalink in the group. The global zone's IPMP infrastructure will manage the fail-over and fail-back of datalinks on behalf of all the shared IP non-global zones.
For exclusive IP zones, the IPMP configuration for a zone must be managed from within the non-global zone, either via the configuration files or zlogin(1).
The choice between probe-based and link-based failure detection can be made on a per-IPMP-group basis, and does not affect whether the zone can be configured as a shared or exclusive IP Instance. Care must be taken when selecting test IP addresses, since they will be configured in the global zone and thus may affect routing for either the global or the non-global zones.
Routing and Zones
The normal case for shared-IP zones is that they use the same datalinks and the same IP subnet prefixes as the global zone. In that case the routing in the shared-IP zones is the same as in the global zone. The global zone can use static or dynamic routing to populate its routing table, which will then be used by all the shared-IP zones.
In some cases different zones need different IP routing. The best approach to accomplish this is to make those zones be exclusive-IP zones. If this is not possible, then one can use some limited support for routing differentiation across shared-IP zones. This limited support only handles static default routes, and only works reliably when the shared-IP zones use disjoint IP subnets.
All routing is managed by the zone that owns the IP Instance. The global zone owns the 'default' IP Instance that all shared IP zones use. Any exclusive IP zone manages the routes for just that zone. Different routing policies, routing daemons, and configurations can be used in each IP Instance.
For shared IP zones, only default static routes are supported. If multiple default routes apply to a non-global zone, care must be taken that all the default routes are able to reach all the destinations that the zone needs to reach. A round-robin policy is used when multiple default routes are available and a new route needs to be determined.
The zonecfg(1M) 'defrouter' property can be used to define a default router for a specific shared IP zone. When a zone is started and the parameter is set, a default route on the interface configured for that zone will be created if it does not already exist. As of Solaris 10 10/09, when a zone stops, the default route is not deleted.
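A sketch of where this property is set (zone name, NIC, and address are hypothetical), inside the zone's net resource in zonecfg(1M):
# zonecfg -z myzone
zonecfg:myzone> select net physical=e1000g0
zonecfg:myzone:net> set defrouter=192.0.2.1
zonecfg:myzone:net> end
zonecfg:myzone> commit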
Default routes on the same datalink and IP subnet are shared across non-global zones. If a non-global zone is on the same datalink and subnet as the global zone, default route(s) configured for one zone will apply for all other zones on that datalink and IP subnet.
Inter-zone network traffic isolation
There are several ways to restrict network traffic between non-global shared IP zones.
The /dev/ip ndd(1M) parameter 'ip_restrict_interzone_loopback', managed from the global zone, will force traffic out of the system on a datalink if the source and destination zones do not share a datalink. The default configuration allows inter-zone networking using internal loopback of IP datagrams, with the value of this parameter set to '0'. When the value is set to '1', traffic to an IP address in another zone in the shared IP Instance that is not on the same datalink will be put onto the external network. Whether the destination is reached will depend on the full network configuration of the system and the external network. This applies whether the source and destination IP addresses are on the same or different IP subnets. This parameter applies to all IP Instances active on the system, including exclusive IP Instance zones. In the case of exclusive IP zones, it applies only if the zone has more than one datalink configured with IP addresses. For two zones on the same system to communicate with 'ip_restrict_interzone_loopback' set to '1', the following conditions must be met.
There is a network path to the destination. If on the same subnet, the switch(es) must allow the connection. If on different subnets, routes must be in place for packets to pass reliably between the two zones.
The destination address is not on the same datalink (as this would break the datalink rules).
The destination is not on a datalink in an IPMP group that the sending datalink is also in.
The 'ip_restrict_interzone_loopback' parameter is available in Solaris 10 8/07 and later.
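To toggle the parameter, a minimal sketch from the global zone (the value is set and then read back):
# ndd -set /dev/ip ip_restrict_interzone_loopback 1
# ndd -get /dev/ip ip_restrict_interzone_loopback
1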
A route(1M) action to prevent traffic between two IP addresses is available. Using the '-reject' flag will generate an ICMP unreachable when this route is attempted. The '-blackhole' flag will silently discard datagrams.
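A sketch with made-up addresses, pointing the rejected destinations at the loopback as the gateway:
# route add 192.0.2.10 127.0.0.1 -reject -> ICMP unreachable returned
# route add 192.0.2.20 127.0.0.1 -blackhole -> datagrams silently discarded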
The IP Filter option 'intercept_loopback' will filter traffic between sockets on a system, including traffic between zones and loopback traffic within a zone. Using this option allows blocking of traffic between shared IP zones; it does not force traffic out of the system using a datalink. More information is in the ipf.conf(4) or ipf(4) manual page.
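A sketch of what this looks like in /etc/ipf/ipf.conf (the addresses are hypothetical):
set intercept_loopback true;
block in quick from 192.0.2.10/32 to 192.0.2.20/32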
Aggregations
Solaris 10 1/06 and later support IEEE 802.3ad link aggregations using the dladm(1M) datalink administration command. Combining two or more datalinks into an aggregation effectively reduces the number of datalinks available. Thus it is important to consider the trade-offs between aggregations and IPMP when requiring either network availability or increased network bandwidth. Full traffic patterns must be understood as part of the decision making process.
For the 'ce' NIC, Sun Trunking 1.3.1 is available for Solaris 10.
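As a sketch (device names assumed), an aggregation with key 1 is created and verified with dladm(1M):
# dladm create-aggr -d e1000g0 -d e1000g1 1
# dladm show-aggr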
Some considerations when making a decision between link aggregation and IPMP are the following.
Link aggregation requires support and configuration of aggregations on both ends of the link, i.e. both the system and the switch.
Most switches only support link aggregation within a switch, not spanning two or more switches.
Traffic between a single pair of IP addresses will typically only utilize one link in either an aggregation or IPMP group.
Link aggregation only provides availability between the switch ports and the system. IPMP using probe-based failure detection can redirect traffic around internal switch problems or network issues behind the switches.
Multiple hashing policies are available, and they can be set differently for inbound and outbound traffic.
IPMP probe-based failure detection requires test addresses for each datalink in the IPMP group, which are in addition to the application or data address(es).
IPMP link-based failure detection will cause a fail-over or fail-back based on link state only. Solaris 10 supports IPMP configured in link-based-only mode. If IPMP is configured with probe-based failure detection, a link failure will also cause a fail-over, and a link restore will cause a fail-back.
A physical interface can be in only one aggregation. VLANs can be configured over an aggregation.
A datalink can be in only one IPMP group.
An IPMP group can use aggregations as the underlying datalinks.
Note, this is for Solaris 10. OpenSolaris has differences. Maybe something for another day.
Courtesy: http://blogs.oracle.com/stw/entry/solaris_zones_and_networking_common
Friday, October 28, 2011
du & bdf or df output differs
Obviously there is a difference in how du and bdf behave.
This typically occurs if open files are removed while a process still has them open.
"du" shows output in a positive view: it shows the number of currently allocated blocks and counts the blocks you've just deleted as free.
"bdf" has a more negative perspective: it shows the free disk space available.
The difference is here: if a still-active process has allocated blocks (such as for a logfile that you've just deleted), "bdf" counts these as still occupied. This won't change until the process closes the file ("deallocates the blocks"), as usually happens when the process terminates.
If you still want to know which process holds the space, the lsof tool can be helpful. Note that this is an open-source tool and is not supported by Hewlett-Packard.
Lsof Examples
Below you will find a set of examples using the lsof tool.
To list all open files, use:
# lsof
To list all open Internet, x.25 (HP-UX), and UNIX domain files, use:
# lsof -i -U
To list all open IPv4 network files in use by the process whose PID is 1234, use:
# lsof -i 4 -a -p 1234
Presuming the UNIX dialect supports IPv6, to list only open IPv6 network files, use:
# lsof -i 6
To list all processes using a particular network port, use:
# lsof -i :<port>
In our case, the following will be the best option:
When you need to unmount file systems on an HP-UX based server, you frequently find users 'on' a particular disk or logical volume, resulting in 'device busy' errors. You can identify which processes have open files on a given device (instead of relying on intuition and frantic phone calls!) by using the fuser(1M) (10.20, 11.x) command.
fuser will list the process ids and usernames of processes that have a given file open and can even be used to automatically kill the identified processes. For example,
# fuser -u /mydir           # All processes with /mydir open
# fuser -ku /dev/dsk/c0t6d0 # Kill all processes with files open on a certain disk
Please see the man pages for additional options.
There is also a public domain tool called lsof that can be pulled from the internet and built on HP-UX. It shows all the files open by all the processes on the system, so use it in conjunction with grep if you are looking for a particular directory on a particular disk. For example,
# lsof | grep /mydisk
will show all processes with open files on the /mydisk file system.
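For the du versus bdf case specifically, a handy variant is lsof's +L1 option, which lists open files with a link count of less than one, i.e. files that have been deleted but are still held open by a process:
# lsof +L1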
To get lsof proceed as follows:
Anon ftp to vic.cc.purdue.edu
cd pub/tools/unix/lsof
Get lsof.tar.Z
uncompress lsof.tar.Z
tar -xvf lsof.tar
Read README.FIRST for instructions on how to build lsof.
If you cannot access the site to get lsof, it is attached anyway.
The bottom line is this: it is a known and expected behavior that bdf and du will display different information.
Tuesday, June 28, 2011
xscf commands
When you connect to the XSCF console, you will be prompted for a login ID. The default ID is "default" and it has no password. With this ID you will need to create a new administrative ID. You also need to be standing close to the server for this process, as you will be prompted to change the panel mode switch. Until you create a new login ID, you will be prompted to change the panel mode switch every time you connect to the console or the console session times out.
login: default
Change the panel mode switch to Locked and press return…
Leave it in that position for at least 5 seconds. Change the panel mode switch
to Service, and press return…
Check the version of XSCF.
XSCF> version -c xcp
XSCF#0 (Active )
XCP0 (Current): 1090
XCP1 (Reserve): 1090
Create a user andrew
XSCF> adduser andrew
XSCF> password
password: Permission denied
Change the password for andrew
XSCF> password andrew
New XSCF password:
Retype new XSCF password:
Grant andrew the following privileges: useradm, platadm, platop.
XSCF> setprivileges andrew useradm platadm platop
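To double-check the result (assuming your firmware provides the showuser command with the -p privileges option):
XSCF> showuser -p andrew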
Here is a list of all available privileges.
domainop@n
• Can refer to the status of any hardware mounted in a domain_n.
• Can refer to the status of any part of a domain_n.
• Can refer to the information of all system boards mounted.
domainmgr@n
• Can power on, power off, and reboot a domain_n.
• Can refer to the status of any hardware mounted in a domain_n.
• Can refer to the status of any part of a domain_n.
• Can refer to the information of all system boards mounted.
platop
• Can refer to the status of any part of the entire server but cannot change it.
platadm
• Control of the entire system
• Can operate all hardware in the system.
• Can configure all XSCF settings except the useradm and auditadm privilege settings.
• Can add and delete hardware in a domain.
• Can do the power operation of a domain.
• Can refer to the status of any part of the entire server.
useradm
• Can create, delete, invalidate, and validate user accounts.
• Can change user passwords and password profiles.
• Can change user privileges.
auditop
• Can refer to the XSCF access monitoring status and monitoring methods.
auditadm
• Can monitor and control XSCF access.
• Can delete an XSCF access monitoring method.
fieldeng
• Allows field engineers to perform the maintenance tasks or change the server configuration.
None
• When the local privilege for a user is set to none, that user has no privileges, even if the privileges
for that user are defined in LDAP.
• Setting a user’s privilege to none prevents the user’s privileges from being looked up in LDAP.
XSCF firmware has two networks for internal communication. The Domain to Service Processor Communications Protocol (DSCP) network provides an internal communication link between the Service Processor and the Solaris domains. The Inter-SCF Network (ISN) provides an internal communication link between the two Service Processors in a high-end server.
Configure DSCP with an IP address using the setdscp command.
XSCF> setdscp
DSCP network [0.0.0.0 ] > 10.1.1.0
DSCP netmask [255.0.0.0 ] > 255.255.255.0
XSCF address [10.1.1.1 ] >
Domain #00 address [10.1.1.2 ] >
Commit these changes to the database? [y|n] : y
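You can verify the DSCP settings afterwards with showdscp:
XSCF> showdscp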
Configure the XSCF interface with an IP address; this will be the address you connect to via telnet to manage the console.
XSCF> setnetwork xscf#0-lan#0 -m 255.255.0.0 162.10.10.11
Enable the XSCF interface you just configured with an IP address of 162.10.10.11
XSCF> setnetwork -c up lan#0
Configure the default route.
XSCF> setroute -c add -n 0.0.0.0 -g 162.10.10.1 xscf#0-lan#0
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
162.10.0.0 * 255.255.0.0 U xscf#0-lan#0
Configure the hostname.
XSCF> sethostname xscf#0 paris
Configure the domain name.
XSCF> sethostname -d parishilton.com
You must apply the network configurations with the applynetwork command.
XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :paris
DNS domain name :parishilton.com
interface : xscf#0-lan#0
status :up
IP address :162.10.10.11
netmask :255.255.0.0
route :
interface : xscf#0-lan#1
status :down
IP address :
netmask :
route :
Continue? [y|n] :yes
Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.
Now reboot XSCF for the configuration to take effect.
XSCF> rebootxscf
After the reboot check the network settings.
XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:39:B4
inet addr:162.10.10.11 Bcast:162.10.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13160 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1943545 (1.8 MiB) TX bytes:210 (210.0 B)
Base address:0xe000
xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:39:B5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000
Enable ssh; it will require a reboot.
XSCF> setssh -c enable
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the ssh settings.
Enable telnet. You probably do not need telnet if ssh is enabled.
XSCF> settelnet -c enable
XSCF> showtelnet
Telnet status: enabled
It is much easier to configure and manage the XSCF via https, as you do not have to remember all the commands. I will show you how to enable https by creating a web server certificate, constructing a self-signed CA.
First generate the web server private key. Remember the passphrase; you will need it in the next step.
XSCF> sethttps -c genserverkey
Enter passphrase:
Verifying – Enter passphrase:
Create the self-signed web server certificate by specifying the DN.
XSCF> sethttps -c selfsign CA Ontario Toronto CupidPost Technology Center andrew_lin@email.com
CA key and CA cert already exist. Do you still wish to update? [y|n] :y
Enter passphrase:
Verifying – Enter passphrase:
Now enable https.
XSCF> sethttps -c enable
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the https settings.
Reboot with the rebootxscf command.
XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y
After the reboot you can connect to the XSCF console by telnet, ssh or https.
Courtesy: http://www.gamescheat.ca/2009/12/step-by-step-configuration-of-the-xscf-console-for-the-sun-sparc-m3000-server/
Wednesday, March 16, 2011
List Sub directories in tree format
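A common one-liner for this (a sketch using find and sed to draw the tree; not necessarily the only way):
# find . -type d -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'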
Tuesday, June 30, 2009
SVM Mirror
There are PROs to mirroring ...
1. Faster reads (Data gets pulled off either drive to fulfill a READ request)
2. Redundancy, (If one drive fails, you don't lose data; make sure you replace it quick though, right?)
There are CONS too ...
1. Slower WRITE transactions
2. Two drives yields one drive of capacity
... but like the old saying goes, "Fast, reliable, inexpensive. Pick any two!"
When setting up SVM you need to do a number of things:
1. Setup the partitions on each drive (with the "# format" command)
2. Setup the State Database Replicas (SDR)
3. Create the Mirrors and Submirrors
4. Link them
5. Sync them
6. Test
Our example here will assume this ...
- DRIVE 0 = c1d0 = 2 SDR's
- DRIVE 1 = c2d0 = 2 SDR's
Now there are two issues with this SDR setup ...
CASE 1. During operation, what happens when a drive fails?
CASE 2. During a reboot, what happens with one failed drive?
For CASE 1 ...
During operation, if either drive fails, the system will auto fail-over to the operating drive and continue normal operation. A message will likely be posted in syslog, and "SysAdmin intervention" will be required to fix the problem. Fixing can happen at your leisure, but during this time there is "NO REDUNDANCY". Search this document for "SysAdmin intervention" and you can find out what you need to do.
For CASE 2 ...
During a reboot, with either drive in a "Failed" status, "Sysadmin intervention" is required to fix things.
NOTE: SysAdmin intervention required means to delete references to SDR's on the bad disk to make the system bootable.
(This is the best we can do with a 2-drive setup. To understand why, read "Understanding the Majority Consensus Algorithm" and "Administering State Database Replicas" in the SVM manual.) The best setup is an "ODD drive SVM array" (example: 3 drives, 3 SDR's with one per drive, or 6 SDR's with 2 per disk for further redundancy).
OVERVIEW
STEP #1 -> Repartition Drives to accommodate State Database Replica partitions
For a 320GB drive each cylinder is about 8MB. I chose 2 cylinders for each SDR. (Slices 5 and 6; size = 2 cyl = 16MB each; UFS requires a minimum of 10MB per partition)
Here is the partition table of DRIVE 0 (320GB) ...
Volume: DRIVE0 Current partition table (original): Total disk cylinders available: 38910 + 2 (reserved cylinders)
Partition Tag Flag Cylinders Size Blocks
0 root wm 526 - 5750 40.03GB (5225/0/0) 83939625 -> / (ROOT)
1 swap wu 3 - 525 4.01GB (523/0/0) 8401995 -> /swap
2 backup wm 0 - 38909 298.07GB (38910/0/0) 625089150 -> ENTIRE DRIVE (Leave this alone)
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 38906 - 38907 15.69MB (2/0/0) 32130 -> SDR
6 unassigned wm 38908 - 38909 15.69MB (2/0/0) 32130 -> SDR
7 home wm 5751 - 38905 253.98GB (33155/0/0) 532635075 -> /export/home
8 boot wu 0 - 0 7.84MB (1/0/0) 16065 -> GRUB Stage 1?
9 alternates wu 1 - 2 15.69MB (2/0/0) 32130 -> GRUB Stage 2?
Here is the partition table of DRIVE 1 (320GB) ...
Volume: DRIVE1 Current partition table (original): Total disk cylinders available: 38910 + 2 (reserved cylinders)
Partition Tag Flag Cylinders Size Blocks
0 root wm 526 - 5750 40.03GB (5225/0/0) 83939625 -> / (ROOT)
1 swap wu 3 - 525 4.01GB (523/0/0) 8401995 -> /swap
2 backup wm 0 - 38909 298.07GB (38910/0/0) 625089150 -> ENTIRE DRIVE (Leave this alone)
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 38906 - 38907 15.69MB (2/0/0) 32130 -> SDR
6 unassigned wm 38908 - 38909 15.69MB (2/0/0) 32130 -> SDR
7 home wm 5751 - 38905 253.98GB (33155/0/0) 532635075 -> /export/home
8 boot wu 0 - 0 7.84MB (1/0/0) 16065 -> GRUB Stage 1?
9 alternates wu 1 - 2 15.69MB (2/0/0) 32130 -> GRUB Stage 2?
NOTE: Both drives are set up identically
STEP #2 -> Confirm Boot Order in BIOS = Boot from Disk 0 (On fail, Boot from Disk 1)
(NOTE: Don't worry if your BIOS doesn't support skipping a FAILED drive and auto-booting the next drive in ORDER. Solaris may do this automatically for you. It does for me. Remove one of the drives and boot to test it out on your system.)
STEP #3 -> Specify the master boot program for DRIVE 1
# fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c2d0p0 -> This means make sure the drive is "Active"
STEP #4 -> Make the Secondary Disk Bootable!
# /sbin/installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0
STEP #5 -> Create the SDR's
# metadb -a -f c1d0s5 -> Create State Database Replica (slice 5 must be unmounted File system)
# metadb -a -f c1d0s6 -> Create State Database Replica (slice 6 must be unmounted File system)
# metadb -a -f c2d0s5 -> Create State Database Replica (slice 5 must be unmounted File system)
# metadb -a -f c2d0s6 -> Create State Database Replica (slice 6 must be unmounted File system)
# metadb -i -> Check your handy work
NOTE: Yes, I know, you can create SDR's as part of an existing partition, because the first portion of the partition can be reserved for SDR's BEFORE you actually create the file system and put data on it. If you know how to do this, go for it. In my opinion separate partitions have their own benefits, and I personally recommend them.
STEP #6 -> MIRROR BOOT(/) Partition
At this point we need to understand the nomenclature we are about to use.
d10 = this is the name we are arbitrarily assigning to DRIVE 0 / slice 0 (the root slice or partition)
d20 = this is the name we are arbitrarily assigning to DRIVE 1 / slice 0 (the root slice or partition)
NOTE: You can use your own numbering scheme. I set up one that makes sense to me, and allows me to track "what's, what"! NOTE: Nevada supports "friendly names for metadevices". (Ex. Instead of "d0" you can use "Drive-1" or "whatever".)
d0 = this represents the NEW virtual drive we are creating that represents the SVM array for the "ROOT" partition.
Graphically this is what we are creating ...
D0
/ \
D10 D20
Before SVM, you would access partitions on the D10 (or D20) drive directly like this ... /dev/dsk/c1d0s0
After SVM, you will have a 'virtual' drive in place that you will access instead. (Meaning, you won't access the drives directly anymore. Get it?)
NOTE: All references in (/etc/vfstab) will be updated to point to this new drive (d0). When SVM is active, we don't want to communicate with "/dev/dsk/c1d0s0" anymore. We want to communicate with the new VIRTUAL drive "/dev/md/dsk/d0". The metaroot command updates (/etc/vfstab) for us automatically for the "ROOT" partition. For the other partitions we need to edit (/etc/vfstab) manually.
Let's get started ...
# metainit -f d10 1 1 c1d0s0 -> d10: Concat/Stripe is setup (Note: -f= force, 1 = one stripe, 1 = one slice)
# metainit d20 1 1 c2d0s0 -> d20: Concat/Stripe is setup
# metainit d0 -m d10 -> d0: Mirror is setup
# metaroot d0 -> DO THIS ONLY for "root" partition
# metastat d0 -> View current status (View your handy work!)
# reboot -> Need to reboot to effect changes
# metattach d0 d20 -> d0: submirror d20 is attached (and Sync'ing begins magically!)
# metastat d0 -> Check the Sync'ing (See?)
NOTE: Wait for Sync'ing to finish before rebooting, otherwise I think it restarts. You can test it and tell me!
STEP #7 -> MIRROR (/SWAP) Partition
# metainit -f d11 1 1 c1d0s1 -> d11: Concat/Stripe is setup
# metainit d21 1 1 c2d0s1 -> d21: Concat/Stripe is setup
# metainit d1 -m d11 -> d1: Mirror is setup
# vi /etc/vfstab -> (Edit the /etc/vfstab file so that /swap references the mirror)
"/dev/md/dsk/d1 - - swap - no -" -> Add this line to /etc/vfstab and comment out the old line. Remember, no quotes, right?
# reboot
# metattach d1 d21 -> d1: submirror d21 is attached (and Sync'ing begins magically!)
STEP #8 -> MIRROR (/export/home) partition
# umount /dev/dsk/c1d0s7 -> First umount the partition you want to mirror (-f to force)
# metainit d17 1 1 c1d0s7 -> d17: Concat/Stripe is setup
# metainit d27 1 1 c2d0s7 -> d27: Concat/Stripe is setup
# metainit d7 -m d17 -> d7: Mirror is setup
# vi /etc/vfstab -> (Edit the /etc/vfstab file so that /export/home references the mirror)
"/dev/md/dsk/d7 /dev/md/rdsk/d7 /export/home ufs 2 yes -" -> Add this line to /etc/vfstab and comment out the old line. Again, no quotes.
# mount /dev/md/dsk/d7 /export/home -> Remount this partition
# metattach d7 d27 -> d7: submirror d27 is attached (and Sync'ing begins magically!)
STEP #9 -> TIPS
# metastat d0 -> Check Status of "d0" Mirror
# metadb -d -f c1d0s6 -> If there is trouble, you can delete an SDR
EXAMPLE: Failed DRIVE 1 and "Sysadmin intervention" required ...
To Fix the problem temporarily ...
1. Power down
2. Remove Bad Drive 1
3. Boot into single user mode
4. Remove the "bad" SDR's on the failed drive, Drive 1
5. Reboot (And the system should run fine, just a little slower)
When you get a replacement drive ...
1. Power down
2. Insert the replacement drive (Same size, or bigger, right?)
3. Boot into multi-user mode
4. Repartition "NEW DRIVE 1" as per specs above
5. Make sure you create the SDR's as well
6. Build and link Mirrors together as per docs above
7. Resync drives as per these 3 commands ...
# metareplace -e d0 c2d0s0 -> d0: device c2d0s0 is enabled (SYNC ONE AT A TIME!)
# metareplace -e d1 c2d0s1 -> d1: device c2d0s1 is enabled (SYNC ONE AT A TIME!)
# metareplace -e d7 c2d0s7 -> d7: device c2d0s7 is enabled (SYNC ONE AT A TIME!)
NOTE: Additional commands that are handy!
# metadetach mirror submirror -> Detach a Mirror
# metaoffline mirror submirror -> Puts Submirror "OFFLINE"
# metaonline mirror submirror -> Puts Submirror "ONLINE"; Resync'ing begins immediately
# newfs /dev/rdsk/c1d0s1 -> newfs a Filesystem
NOTE SPECIAL FILES:
# pico /etc/lvm/mddb.cf -> (DO NOT EDIT) records the locations of state database replicas
# pico /etc/lvm/md.cf -> (DO NOT EDIT) contains auto-generated config info for the default (unspecified or local) disk set
# /kernel/drv/md.conf -> (DO NOT EDIT) contains the state database replica config info and is read by SVM at startup
# /etc/lvm/md.tab -> contains SVM config info that can be used to reconstruct your SVM config (Manually)
# metastat -p > /etc/lvm/md.tab -> Create this file manually (just a dump of the current config; save it!)
# metainit -> This command can use the md.tab file as input to do its thing!! Like, RECOVER DATA!
# metadb -> This command can use the md.tab file as input to do its thing!! Like, RECOVER DATA!
# metahs -> This command can use the md.tab file as input to do its thing!! Like, RECOVER DATA!
Sunday, June 28, 2009
Error : "Error in Semaphore Operation - No space left on device"
"Error in Semaphore Operation - No space left on device"
Solution: Refer to the Symantec TechNote at http://support.veritas.com/docs/238063 (requires reboot).
Minimum system requirements for the Solaris kernel when used with VERITAS NetBackup (tm), defined in /etc/system
Details:
This document states the minimum system requirements for tuning Solaris kernel parameters to work with NetBackup. VERITAS only provides this information as a basic starting point, as this is actually a Solaris OS configuration issue that is also impacted by other applications running on the system.
If /etc/system already contains one of these entries, use whichever value is larger for that setting. Before modifying /etc/system, use the sysdef command to view the current kernel parameters. Add the following lines to /etc/system and reboot for the settings to take effect. After rebooting the sysdef command should display the new settings:
---------------------------------------
* Message queues
set msgsys:msginfo_msgmap=500
set msgsys:msginfo_msgmax=8192
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgmni=256
set msgsys:msginfo_msgssz=32
set msgsys:msginfo_msgtql=500
set msgsys:msginfo_msgseg=8192
* Semaphores
set semsys:seminfo_semmap=64
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmnu=1024
set semsys:seminfo_semmsl=300
set semsys:seminfo_semopm=32
set semsys:seminfo_semume=64
* Shared memory
set shmsys:shminfo_shmmax=16777216
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=230
set shmsys:shminfo_shmseg=100
---------------------------------------
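For example, to spot-check the semaphore values after the reboot (a quick sketch; sysdef prints the semaphore identifiers and limits):
# sysdef | grep -i sem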
The parameters listed below are obsolete in Solaris 8, but are still valid in Solaris 2.6 and 2.7:
set msgsys:msginfo_msgssz = the size of a message segment in bytes (a multiple of the word size).
set msgsys:msginfo_msgmap = the number of entries in the message map used to allocate message segments.
set msgsys:msginfo_msgseg = the maximum number of message segments. The kernel reserves a total of msgssz * msgseg bytes for message segments, which must be less than 128 kilobytes. Together, msgssz and msgseg limit the amount of text for all outstanding messages.
set semsys:seminfo_semmap = the number of entries in the semaphore map.
set shmsys:shminfo_shmmin = the minimum shared memory segment size.
set shmsys:shminfo_shmseg = the maximum number of shared memory segments that can be attached to a given process at one time.
If the values are present in the /etc/system file on a Solaris 8 or 9 system, the operating system will ignore them.
Tuesday, June 23, 2009
Key combinations in Bash
Key or key combination | Function |
---|---|
Ctrl+A | Move cursor to the beginning of the command line. |
Ctrl+C | End a running program and return to the prompt. |
Ctrl+D | Log out of the current shell session, equal to typing exit or logout. |
Ctrl+E | Move cursor to the end of the command line. |
Ctrl+H | Generate backspace character. |
Ctrl+L | Clear this terminal. |
Ctrl+R | Search the command history. |
Ctrl+Z | Suspend a program. |
Sunday, May 31, 2009
Solaris 10 : /sbin/bootadm update-archive : After Patch Update & before Reboot
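In other words, after applying patches and before rebooting, refresh the boot archive:
# /sbin/bootadm update-archive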
Sunday, May 3, 2009
ZFS Features
1. Supports storage space of up to 256 quadrillion zettabytes (terabytes, petabytes, exabytes, zettabytes; 1 zettabyte = 1024 exabytes)
2. Supports RAID-0/1 and RAID-Z (which is essentially RAID-5 with enhancements; the best part is you don't need 3 disks to achieve RAID-5 on ZFS: even with 2 virtual devices, ZFS provides a good amount of redundancy.)
3. Supports file system snapshots (read-only copies of file systems or volumes.)
4. Supports creation of volumes (which can contain disks, partitions, files)
5. Uses storage pools to manage storage; a pool aggregates virtual devices
6. A ZFS file system attached to a storage pool can grow dynamically as storage is added. No need to reformat or back up your data before you add any extra storage.
7. File systems may span multiple physical disks without any extra software or even effort.
8. ZFS is transactional, so it's all or nothing. If a write/read operation fails for some reason, the entire transaction is rolled back.
9. Pools and file systems are auto-mounted. No need to maintain /etc/vfstab (mounts are handled automatically through XML files.)
10. Supports file system hierarchies: /storage1/{home(50GB),var(100GB),etc.}
11. Supports reservation of storage: /storage1/{home(50GB),var}
12. Solaris 10, provides a secure web-based ZFS management tool @ https://localhost:6789/zfs
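A minimal sketch of several of these features together (the device names and sizes are made up):
# zpool create storage1 mirror c1t0d0 c1t1d0 -> mirrored pool
# zfs create storage1/home -> file system in the hierarchy
# zfs set reservation=50G storage1/home -> storage reservation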
Determine the Speed & Duplex for each live NIC on the Solaris system
- Find the type of interface:
# netstat -i
- Follow the steps below, depending on the NIC type:
S.No | NIC Type | NIC Speed | NIC Duplex |
---|---|---|---|
1 | ce | #kstat ce: output value 10 -> 10 Mbit/s, 100 -> 100 Mbit/s, 1000 -> 1 Gbit/s | #kstat ce: output value 1 -> Half, 2 -> Full |
2 | bge | #kstat bge: output value 10 -> 10 Mbit/s, 100 -> 100 Mbit/s, 1000 -> 1 Gbit/s | #kstat bge: output value 1 -> Half, 2 -> Full |
3 | iprb | #kstat iprb: output value 10000000 -> 10 Mbit/s, 100000000 -> 100 Mbit/s, 1000000000 -> 1 Gbit/s | #kstat iprb: output shows half/full directly |
4 | le | SPEED="10 Mbit/s" (fixed) | DUPLEX="half" (fixed) |
5 | Others (hme, ge, eri, qfe) | #ndd -get /dev/<nic>: output value 0 -> 10 Mbit/s, 1 -> 100 Mbit/s, 1000 -> 1 Gbit/s | #ndd -get /dev/<nic>: output value 1 -> Half, 2 -> Full |
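As a sketch, assuming a ce interface at instance 0 (the instance number is made up, and statistic names vary by driver), the kstat output can be filtered like this:
# kstat -p ce:0 | grep link_speed
# kstat -p ce:0 | grep link_duplex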