Wednesday, December 14, 2011

HP-UX : How to create an EFI Partition on a Disk

NOTE: Note the mapping between the hardware path and the device special file (DSF) path. For example, HW path 0/0/2/0.0x0.0x0 maps to /dev/rdisk/disk1.
Create the system, operating system, and service partitions.

# cat > /tmp/partitionfile << EOF
> 3
> EFI 500MB
> HPUX 100%
> HPSP 400MB
> EOF

# idisk -wf /tmp/partitionfile /dev/rdisk/disk1
idisk version: 1.31
********************** WARNING ***********************
If you continue you may destroy all data on this disk.
Do you wish to continue(yes/no)? yes

Create the new partition device files:

# insf -eC disk


Verify that the device files were created:

# ioscan -m lun


Initialize the EFI partition:

# efi_fsinit -d /dev/rdisk/disk1_p1


Populate the /EFI/HPUX/ directory on the new disk and verify the boot files:

# mkboot -e -l /dev/rdisk/disk1
# efi_cp -d /dev/rdisk/disk1_p1 /usr/newconfig/sbin/crashdump.efi /EFI/HPUX
# efi_cp -d /dev/rdisk/disk1_p1 /usr/newconfig/sbin/vparconfig.efi /EFI/HPUX
# efi_ls -d /dev/rdisk/disk1_p1 /EFI/HPUX
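As an optional sanity check (a sketch; output varies by system), the partition table can be read back with idisk and the EFI file system listed:

```shell
# Read back the partition table written above and confirm the three
# partitions (EFI, HPUX, HPSP) are present.
idisk /dev/rdisk/disk1

# List the root of the EFI file system on the first partition.
efi_ls -d /dev/rdisk/disk1_p1 /
```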

Friday, December 9, 2011

Configure netdump: RHEL 4.8 client on a RHEL 5.5 server

Log in to the server monroe with your individual login ID, then switch to the root account.

Netdump server:
[root@monroe ~]# uname -a
Linux monroe.imation.com 2.6.18-194.26.1.el5 #1 SMP Fri Oct 29 14:21:16 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@monroe ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)

Netdump client:
grant:/root>cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 8)
grant:/root>uname -a
Linux grant 2.6.9-78.ELsmp #1 SMP Wed Jul 9 15:46:26 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

Pre Tasks:

# uname -a
# df -k
# vgdisplay -v vg00
# lvs vg00
# cp -p /etc/fstab /etc/fstab.9DEC2011


Current information:

[root@monroe ~]# vgs vg00
VG #PV #LV #SN Attr VSize VFree
vg00 1 8 0 wz--n- 119.59G 68.59G
[root@monroe ~]#
[root@monroe ~]# lvs vg00
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lvol0 vg00 -wi-ao 5.00G
lvol1 vg00 -wi-ao 16.00G
lvol2 vg00 -wi-ao 6.00G
lvol3 vg00 -wi-ao 5.00G
lvol4 vg00 -wi-ao 5.00G
lvol5 vg00 -wi-ao 5.00G
lvol6 vg00 -wi-ao 5.00G
lvol7 vg00 -wi-ao 4.00G
[root@monroe ~]#

Activity: Create the /var/crash file system with a size of 40 GB


-> Create a 40 GB logical volume lvol8 in volume group vg00:
monroe# lvcreate -L 40G -n lvol8 vg00

-> Create an ext3 file system on the newly created logical volume:
monroe# mkfs -t ext3 /dev/vg00/lvol8

-> Mount /dev/vg00/lvol8 on the mount point /var/crash:
monroe# mount /dev/vg00/lvol8 /var/crash

Take a backup of /etc/fstab, then update it with the following entry so the mount persists across reboots:

/dev/vg00/lvol8 /var/crash ext3 defaults 1 2

Verify that the file system /var/crash has been created with 40 GB size and is accessible:

monroe# df -h /var/crash
monroe# lvs vg00


Netdump configuration:

On Server - Monroe:

Install the EPEL version of netdump-server on RHEL 5.5. Download the package from the link below:
http://download.fedora.redhat.com/pub/epel/5/x86_64/netdump-server-0.7.16-23.el5.x86_64.rpm
Downloaded already to the server.

monroe# rpm -ivh netdump-server-0.7.16-23.el5.x86_64.rpm

On the netdump server, as root, type:
monroe# passwd netdump
and supply a password for netdump, just as you would for an ordinary user. Then do the following:
monroe# chkconfig netdump-server on
monroe# service netdump-server start
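A quick verification sketch (port 6666/udp is the netdump default; adjust if your configuration differs):

```shell
# Confirm the netdump server is running and listening for client dumps.
service netdump-server status
netstat -ulnp | grep 6666

# Completed dumps from clients land under /var/crash on the server.
```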



Client - Grant:

 Package is already present.

grant:/root>rpm -q netdump
netdump-0.7.16-14
grant:/root>

Edit /etc/sysconfig/netdump then uncomment and set the NETDUMPADDR variable to the IP address of the netdump server. For example:
NETDUMPADDR=172.16.109.178

Then execute:
grant# service netdump propagate
and supply the netdump password that was configured on the netdump server. Finally, execute:
grant# chkconfig netdump on
grant# service netdump start
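To confirm the client is armed end to end, a crash can be forced via SysRq. This is only a sketch and must be run on a test system, since the second echo panics the machine immediately:

```shell
# Verify the netdump client service is configured and running.
service netdump status

# DANGER: the following crashes the box; the server should then receive a vmcore.
echo 1 > /proc/sys/kernel/sysrq   # enable SysRq if not already enabled
echo c > /proc/sysrq-trigger      # trigger an immediate kernel crash
```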

Useful Docs:
https://access.redhat.com/kb/docs/DOC-6855
https://access.redhat.com/kb/docs/DOC-4104

Monday, November 7, 2011

vsftpd How To

http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch15_:_Linux_FTP_Server_Setup

Friday, November 4, 2011

Solaris 10 Zones and Networking -- Common Considerations

By stw on Feb 24, 2010

As often happens, a customer question resulted in this write-up. The customer had to quickly consider how they deploy a large number of zones on an M8000. They would be configuring up to twelve separate links for the different networks, and double that for IPMP. I wrote up the following. Thanks to Penny Cotten, Jim Eggers, Gordon Lythgoe, Peter Memishian, and Erik Nordmark for the feedback as I was preparing this. Also, you may see some of this in future documentation.
Definitions

Datalink: An interface at Layer 2 of the OSI protocol stack, which is represented in a system as a STREAMS DLPI (v2) interface. Such an interface can be plumbed under protocol stacks such as TCP/IP. In the context of Solaris 10 Zones, datalinks are physical interfaces (e.g. e1000g0, bge1), aggregations (aggr3), or VLAN-tagged interfaces (e1000g111000 (VLAN tag 111 on e1000g0), bge111001, aggr111003). A datalink may also be referred to as a physical interface, such as when referring to a Network Interface Card (NIC). The datalink is the 'physical' property configured with the zone configuration tool zonecfg(1M).
Non-global Zone: A non-global zone is any zone, whether native or branded, that is configured, installed, and managed using the zonecfg(1M) and zoneadm(1M) commands in Solaris 10. A branded zone may be either Solaris 8 or Solaris 9.
Zone network configuration: shared versus exclusive IP Instances

Since Solaris 10 8/07, zone configurations can be either in the default shared IP Instance or exclusive IP Instance configuration.

When configured as shared, zone networking includes the following characteristics.

All datalink and IP, TCP, UDP, SCTP, IPsec, etc. configuration is done in the global zone.
All zones share the network configuration settings, including datalink, IP, TCP, UDP, etc. This includes ndd(1M) settings.
All IP addresses, netmasks, and routes are set by the global zone and can not be altered in a non-global zone.
Non-global zones can not utilize DHCP (neither client nor server). There is a work-around that may allow a zone to be a DHCP server.
By default a privileged user in a non-global zone can not put a datalink into promiscuous mode, and thus can not run things like snoop(1M). Changing this requires adding the priv_net_raw privilege to the zone from the global zone, and also requires identifying which interface(s) to allow promiscuous mode on via the 'match' zonecfg parameter. Warning: This allows the non-global zone to send arbitrary packets on those interfaces.
IPMP configuration is managed in the global zone and applies to all zones using the datalinks in the IPMP group. A non-global zone configured with a datalink from an IPMP group uses all the datalinks in that group for failover. Non-global zones can use multiple IPMP groups, but a zone must be configured with only one datalink from each IPMP group.
Only default routes apply to the non-global zones, as determined by the IP address(es) assigned to the zone. Non-default static routes are not supported to direct traffic leaving a non-global zone.
Multiple zones can share a datalink.
When configured as exclusive, zone networking includes the following characteristics.
All network configuration can be done within the non-global zone (and can also be done indirectly from the global zone, via zlogin(1) or by editing the files in the non-global zone's root file system).
IP and above configurations can not be seen directly within the global zone (e.g. running ifconfig(1M) in the global zone will not show the details of a non-global zone).
The non-global zone's interface(s) can be configured via DHCP, and the zone can be a DHCP server.
A privileged user in the non-global zone can fully manipulate IP address, netmask, routes, ndd variables, logical interfaces, ARP cache, IPsec policy and keys, IP Filter, etc.
A privileged user in the non-global zone can put the assigned interface(s) into promiscuous mode (e.g. can run snoop).
The non-global zone can have unique IPsec properties.
IPMP must be managed within the non-global zone.
A datalink can only be used by a single running zone at any one time.
Commands such as snoop(1M) and dladm(1M) can be used on datalinks in use by running zones.
It is possible to mix shared and exclusive IP zones on a system. All shared zones will be sharing the configuration and run time data (routes, ARP, IPsec) of the global zone. Each exclusive zone will have its own configuration and run time data, which can not be shared with the global zone or any other exclusive zones.
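The shared/exclusive choice is made per zone with the zonecfg ip-type property. A minimal sketch of an exclusive IP zone (the zone name, zonepath, and interface are hypothetical):

```shell
# Configure an exclusive IP zone; the zone manages its own IP stack.
zonecfg -z webzone <<'EOF'
create
set zonepath=/zones/webzone
set ip-type=exclusive
add net
set physical=e1000g1
end
verify
commit
EOF
```

With ip-type=exclusive, no address is set in zonecfg; the zone configures its own addresses, routes, and IPMP after boot.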
IP Multipathing (IPMP)

By default, all IPMP configurations are managed in the global zone and affect all non-global zones whose network configuration includes even one datalink (the net->physical property in zonecfg(1M)) in the IPMP group. A zone configured with datalinks that are part of IPMP groups must configure each IP address on only one of the datalinks in the IPMP group. It is not necessary to configure an IP address on each datalink in the group. The global zone's IPMP infrastructure will manage the fail-over and fail-back of datalinks on behalf of all the shared IP non-global zones.
For exclusive IP zones, the IPMP configuration for a zone must be managed from within the non-global zone, either via the configuration files or zlogin(1).

The choice to use probe-based failure detection or link-based failure detection can be done on a per-IPMP group basis, and does not affect whether the zone can be configured as shared or exclusive IP Instance. Care must be taken when selecting test IP addresses, since they will be configured in the global zone and thus may affect routing for either the global or for the non-global zones.
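For shared IP zones, a probe-based IPMP pair in the global zone might look like the following sketch (interface names and addresses are hypothetical): a data address in group ipmp0, plus a non-failover test address on each link:

```shell
# /etc/hostname.e1000g0 -- data address plus a test address for probing
cat > /etc/hostname.e1000g0 <<'EOF'
192.168.10.10 netmask + broadcast + group ipmp0 up \
addif 192.168.10.11 deprecated -failover netmask + broadcast + up
EOF

# /etc/hostname.e1000g1 -- standby link with its own test address
cat > /etc/hostname.e1000g1 <<'EOF'
192.168.10.12 deprecated -failover netmask + broadcast + group ipmp0 up
EOF
```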

Routing and Zones

The normal case for shared-IP zones is that they use the same datalinks and the same IP subnet prefixes as the global zone. In that case the routing in the shared-IP zones is the same as in the global zone. The global zone can use static or dynamic routing to populate its routing table, which will be used by all the shared-IP zones.
In some cases different zones need different IP routing. The best approach to accomplish this is to make those zones be exclusive-IP zones. If this is not possible, then one can use some limited support for routing differentiation across shared-IP zones. This limited support only handles static default routes, and only works reliably when the shared-IP zones use disjoint IP subnets.

All routing is managed by the zone that owns the IP Instance. The global zone owns the 'default' IP Instance that all shared IP zones use. Any exclusive IP zone manages the routes for just that zone. Different routing policies, routing daemons, and configurations can be used in each IP Instance.

For shared IP zones, only default static routes are supported. If multiple default routes apply to a non-global zone, care must be taken that all the default routes are able to reach all the destinations that the zone needs to reach. A round robin policy is used when multiple default routes are available and a new route needs to be determined.

The zonecfg(1M) 'defrouter' property can be used to define a default router for a specific shared IP zone. When a zone is started and the parameter is set, a default route on the interface configured for that zone will be created if it does not already exist. As of Solaris 10 10/09, when a zone stops, the default route is not deleted.

Default routes on the same datalink and IP subnet are shared across non-global zones. If a non-global zone is on the same datalink and subnet as the global zone, default route(s) configured for one zone will apply for all other zones on that datalink and IP subnet.
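Setting the 'defrouter' property for a shared IP zone can be sketched as follows (the zone name and addresses are hypothetical):

```shell
# Interactive zonecfg session setting a per-zone default router.
zonecfg -z webzone
zonecfg:webzone> select net address=192.168.10.20
zonecfg:webzone:net> set defrouter=192.168.10.1
zonecfg:webzone:net> end
zonecfg:webzone> commit
zonecfg:webzone> exit
```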

Inter-zone network traffic isolation

There are several ways to restrict network traffic between non-global shared IP zones.
The /dev/ip ndd(1M) parameter 'ip_restrict_interzone_loopback', managed from the global zone, forces traffic out of the system on a datalink if the source and destination zones do not share a datalink. The default configuration allows inter-zone networking using internal loopback of IP datagrams, with the value of this parameter set to '0'. When the value is set to '1', traffic to an IP address in another zone in the shared IP Instance that is not on the same datalink will be put onto the external network. Whether the destination is reached will depend on the full network configuration of the system and the external network. This applies whether the source and destination IP addresses are on the same or different IP subnets. The parameter applies to all IP Instances active on the system, including exclusive IP Instance zones; in the case of exclusive IP zones, it applies only if the zone has more than one datalink configured with IP addresses. For two zones on the same system to communicate with 'ip_restrict_interzone_loopback' set to '1', the following conditions are required.
There is a network path to the destination. If on the same subnet, the switch(es) must allow the connection. If on different subnets, routes must be in place for packets to pass reliably between the two zones.
The destination address is not on the same datalink (as this would break the datalink rules).
The destination is not on datalink in an IPMP group that the sending datalink is also in.
The 'ip_restrict_interzone_loopback' parameter is available in Solaris 10 8/07 and later.
A route(1M) action to prevent traffic between two IP addresses is available. Using the '-reject' flag will generate an ICMP unreachable when this route is attempted. The '-blackhole' flag will silently discard datagrams.
The IP Filter action 'intercept_loopback' will filter traffic between sockets on a system, including traffic between zones and loopback traffic within a zone. Using this action prevents traffic between shared IP zones. It does not force traffic out of the system using a datalink. More information is in the ipf.conf(4) or ipf(4) manual page.
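The first two mechanisms above can be sketched from the global zone as follows (the zone IP addresses are hypothetical):

```shell
# 1. Force inter-zone traffic out onto a datalink instead of internal loopback.
ndd -set /dev/ip ip_restrict_interzone_loopback 1
ndd -get /dev/ip ip_restrict_interzone_loopback   # confirm the new value

# 2. Reject traffic toward one shared-IP zone address with a host route.
route add -host 192.168.10.21 192.168.10.1 -reject
```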
Aggregations

Solaris 10 1/06 and later support IEEE 802.3ad link aggregations using the dladm(1M) datalink administration command. Combining two or more datalinks into an aggregation effectively reduces the number of datalinks available. Thus it is important to consider the trade-offs between aggregations and IPMP when requiring either network availability or increased network bandwidth. Full traffic patterns must be understood as part of the decision making process.
For the 'ce' NIC, Sun Trunking 1.3.1 is available for Solaris 10.

Some considerations when making a decision between link aggregation and IPMP are the following.

Link aggregation requires support and configuration of aggregations on both ends of the link, i.e. both the system and the switch.
Most switches only support link aggregation within a switch, not spanning two or more switches.
Traffic between a single pair of IP addresses will typically only utilize one link in either an aggregation or IPMP group.
Link aggregation only provides availability between the switch ports and the system. IPMP using probe-based failure detection can redirect traffic around internal switch problems or network issues behind the switches.
Multiple hashing policies are available, and they can be set differently for inbound and outbound traffic.
IPMP probe-based failure detection requires test addresses for each datalink in the IPMP group, in addition to the application or data address(es).
IPMP link-based failure detection will cause a fail-over or fail-back based on link state only. Solaris 10 supports IPMP configured in only link-based mode. If IPMP is configured in probe-based failure detection, link failure will also cause fail-over, and a link restore will cause a fail-back.
A physical interface can be in only one aggregation. VLANs can be configured over an aggregation.
A datalink can be in only one IPMP group.
An IPMP group can use aggregations as the underlying datalinks.
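A minimal dladm sketch tying these pieces together (hypothetical interface names; aggregation key 1):

```shell
# Combine two physical datalinks into aggregation key 1.
dladm create-aggr -d e1000g0 -d e1000g1 1

# Inspect the aggregation, then set an L3/L4 outbound hashing policy.
dladm show-aggr 1
dladm modify-aggr -P L3,L4 1
```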
Note, this is for Solaris 10. OpenSolaris has differences. Maybe something for another day.
Courtesy: http://blogs.oracle.com/stw/entry/solaris_zones_and_networking_common

HowTo: Configure Linux To Track and Log Failed Login Attempt Records

PAM Settings

I found that under RHEL / CentOS Linux 5.x you need to modify the /etc/pam.d/system-auth file and configure the pam_tally.so PAM module. Otherwise the faillog command will never display failed login attempts.

PAM Configuration To Record Failed Login Attempts

The pam_tally.so module maintains a count of attempted accesses, can reset the count on success, and can deny access if too many attempts fail. Edit the /etc/pam.d/system-auth file, enter:
# vi /etc/pam.d/system-auth

Modify as follows:
auth required pam_tally.so no_magic_root
account required pam_tally.so deny=3 no_magic_root lock_time=180

Where,

deny=3 : Deny access if tally for this user exceeds 3 times.
lock_time=180 : Always deny for 180 seconds after a failed attempt. There is also an unlock_time=n option, which allows access n seconds after a failed attempt. If this option is used, the user will be locked out for the specified amount of time after exceeding the maximum allowed attempts; otherwise the account is locked until the lock is removed by manual intervention of the system administrator.
magic_root : If the module is invoked by a user with uid=0 the counter is not incremented. The sys-admin should use this for user launched services, like su, otherwise this argument should be omitted.
no_magic_root : Avoid root account locking, if the module is invoked by a user with uid=0
Save and close the file.

How Do I Display All Failed Login Attempts For a User Called vivek?

Type the command as follows:
# faillog -u vivek

Login Failures Maximum Latest On
vivek 3 0 12/19/07 14:12:53 -0600 64.11.xx.yy
Task: Show Faillog Records For All Users

Type the following command with the -a option:
# faillog -a

Task: Lock Account

To lock a user account for 180 seconds after a failed login, enter:
# faillog -l 180 -u vivek
# faillog -l 180

Task: Set Maximum Number of Login Failures

The -M option allows you to set the maximum number of login failures after which the account is disabled to a specific number called MAX. Selecting a MAX value of 0 has the effect of not placing a limit on the number of failed logins. The maximum failure count should always be 0 for root to prevent a denial of service attack against the system:
# faillog -M MAX -u username
# faillog -M 10 -u vivek

How do I Reset The Counters Of Login Failures?

The -r option can reset the counters of login failures or one record if used with the -u USERNAME option:
# faillog -r

To reset counter for user vivek, enter:
# faillog -r -u vivek

On a large Linux login server, such as at a university or a government research facility, one might find it useful to clear all counts every midnight or every week from a cron job.
# crontab -e

Reset failed login records every week:
@weekly /usr/bin/faillog -r

Save and close the file.

Recommended readings:

=> Read the pam_tally, faillog and pam man pages:
$ man pam_tally
$ man faillog
$ man pam


courtesy: http://www.cyberciti.biz/tips/rhel-centos-fedora-linux-log-failed-login.html

Friday, October 28, 2011

du & bdf or df output differs

It is important to explain that the results of bdf and du -sk are going to be different. We cannot expect them to match.

Obviously there is a difference in how du and bdf behave.
The discrepancy typically appears when files are removed while a process still has them open.


"du" shows output in a positive view: it shows the number of currently allocated blocks and counts the blocks you've just deleted as free.
"bdf" has a more negative perspective: it shows the free disk space available.


The difference is here: if a still-active process has allocated blocks (such as
for a logfile that you've just deleted), "bdf" counts these as still occupied.
This won't change until the process closes the file ("deallocates the blocks")
as it usually happens when the process terminates.
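The effect can be demonstrated with a quick sketch (hypothetical file names; df is used here in place of the HP-UX bdf command):

```shell
# A process keeps a deleted file open, so df/bdf still counts the blocks
# while du does not.
dd if=/dev/zero of=/tmp/bigfile bs=1024k count=100 2>/dev/null
tail -f /tmp/bigfile >/dev/null 2>&1 &
rm /tmp/bigfile          # the file is gone from the namespace...
du -sk /tmp              # ...so du no longer counts its blocks
df -k /tmp               # ...but df still shows the space as allocated
kill $!                  # once the holder exits, df catches up
```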

If you still want to know which process holds the space, the lsof tool can be helpful. Important: this is an open source tool and it is not supported by Hewlett-Packard.

Lsof Examples

Below you will find a set of examples using the lsof tool.

Examples
To list all open files, use:

# lsof
To list all open Internet, x.25 (HP-UX), and UNIX domain files, use:

# lsof -i -U
To list all open IPv4 network files in use by the process whose PID is 1234, use:

# lsof -i 4 -a -p 1234
Presuming the UNIX dialect supports IPv6, to list only open IPv6 network files, use:

# lsof -i 6
To list all processes using a particular network port, use:

# lsof -i :<port>


In our case, this will be the best options:

When you need to dismount file systems on an HP-UX based server, you frequently find users 'on' a particular disk or logical volume resulting in 'device busy' errors. You can identify which processes have open files on a given device (instead of using intuition and frantic 'phone calls!) by using the fuser(1M) (10.20, 11.x) command.

fuser will list the process ids and usernames of processes that have a given file open and can even be used to automatically kill the identified processes. For example,

# fuser -u /mydir             # all processes with /mydir open

# fuser -ku /dev/dsk/c0t6d0   # kill all processes with files open on a certain disk

Please see the man pages for additional options.

There is also a public domain tool called lsof that can be pulled from the internet and built on HP-UX. It shows all the files open by all the processes on the system, so use it in conjunction with grep if you are looking for a particular directory on a particular disk. For example,

# lsof | grep /mydisk

will show all processes with open files on the /mydisk file system.

To get lsof proceed as follows:

Anon ftp to vic.cc.purdue.edu
cd pub/tools/unix/lsof
Get lsof.tar.Z
uncompress lsof.tar.Z
tar -xvf lsof.tar
Read README.FIRST for instructions on how to build lsof.


If you cannot access the site to get the lsof tool, it is attached anyway.

The bottom line: it is a known and expected situation that bdf and du will display different information.

Wednesday, October 5, 2011

HP UNIX AUTH_MAXTRIES

$ su - username
Password:
Access is denied by the AUTH_MAXTRIES option in security(4).
su: Sorry
$

#userdbset -d -u username auth_failures
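As a sketch (substitute the real user name), the current failure count can be inspected before it is cleared:

```shell
# Show the per-user failure count recorded in the user database.
userdbget -u username auth_failures

# Delete the attribute so the account is no longer locked out.
userdbset -d -u username auth_failures
```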

Wednesday, August 10, 2011

/dev/shm on Linux

/dev/shm is a shared memory space (tmpfs) where data is passed between programs, e.g. between process threads. It appears as a mounted file system. By default its size is half of the available RAM. To increase this file system:

#mount -o remount,size=20g /dev/shm

Add an entry in /etc/fstab to make the change permanent across reboots.
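A sketch of the matching /etc/fstab line (size value as used above):

```shell
# /etc/fstab entry making the 20 GB /dev/shm size persistent across reboots
tmpfs   /dev/shm   tmpfs   defaults,size=20g   0 0
```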

Tuesday, June 28, 2011

xscf commands

XSCF (eXtended System Control Facility) is used to control, monitor, operate, and service SPARC Enterprise series servers and domains. You can power the server (domain) on and off via the XSCF interface. As long as the server is plugged into a power source, the XSCF console will always be online even when the domain (server) is off. For those who are familiar with Windows servers, the XSCF is similar to the DRAC interface for Dell servers or HP Insight Manager.

When you are connected to the XSCF console, you will be prompted for a login ID. The default ID is “default” and there is no password. With this ID you will need to create a new administrative ID. You also need to be standing close to the server for this process, as you will be prompted to change the panel mode switch. If you do not create a new login ID, you will be prompted to change the panel mode switch every time you connect to the console or the console session times out.

login: default
Change the panel mode switch to Locked and press return…
Leave it in that position for at least 5 seconds. Change the panel mode switch
to Service, and press return…

Check the version of XSCF.

XSCF> version -c xcp
XSCF#0 (Active )
XCP0 (Current): 1090
XCP1 (Reserve): 1090

Create a user andrew

XSCF> adduser andrew
XSCF> password
password: Permission denied

Change the password for andrew

XSCF> password andrew
New XSCF password:
Retype new XSCF password:

Grant andrew the following privileges: useradm, platadm, platop.

XSCF> setprivileges andrew useradm platadm platop

Here is a list of all available privileges.

domainop@n
• Can refer to the status of any hardware mounted in a domain_n.
• Can refer to the status of any part of a domain_n.
• Can refer to the information of all system boards mounted.

domainmgr@n
• Can power on, power off, and reboot a domain_n.
• Can refer to the status of any hardware mounted in a domain_n.
• Can refer to the status of any part of a domain_n.
• Can refer to the information of all system boards mounted.

platop
• Can refer to the status of any part of the entire server but cannot change it.

platadm
• Control of the entire system
• Can operate all hardware in the system.
• Can configure all XSCF settings except the useradm and auditadm privilege settings.
• Can add and delete hardware in a domain.
• Can do the power operation of a domain.
• Can refer to the status of any part of the entire server.

useradm
• Can create, delete, invalidate, and validate user accounts.
• Can change user passwords and password profiles.
• Can change user privileges.

auditop
• Can refer to the XSCF access monitoring status and monitoring methods.

auditadm
• Can monitor and control XSCF access.
• Can delete an XSCF access monitoring method.

fieldeng
• Allows field engineers to perform the maintenance tasks or change the server configuration.

None
• When the local privilege for a user is set to none, that user has no privileges, even if the privileges
for that user are defined in LDAP.
• Setting a user’s privilege to none prevents the user’s privileges from being looked up in LDAP.

XSCF firmware has two networks for internal communication. The Domain to Service Processor Communications Protocol (DSCP) network provides an internal communication link between the Service Processor and the Solaris domains. The Inter-SCF Network (ISN) provides an internal communication link between the two Service Processors in a high-end server.

Configure DSCP with an IP address using the setdscp command.

XSCF> setdscp
DSCP network [0.0.0.0 ] > 10.1.1.0

DSCP netmask [255.0.0.0 ] > 255.255.255.0

XSCF address [10.1.1.1 ] >
Domain #00 address [10.1.1.2 ] >
Commit these changes to the database? [y|n] : y

Configure the XSCF interface with an IP address; this will be the address you connect to via telnet to manage the console.

XSCF> setnetwork xscf#0-lan#0 -m 255.255.0.0 162.10.10.11

Enable the XSCF interface you just configured with an IP address of 162.10.10.11

XSCF> setnetwork -c up lan#0

Configure the default route

XSCF> setroute -c add -n 0.0.0.0 -g 162.10.10.1 xscf#0-lan#1
XSCF> showroute -a
Destination Gateway Netmask Flags Interface
162.10.0.0 * 255.255.0.0 U xscf#0-lan#0

Configure the hostname.

XSCF> sethostname xscf#0 paris

Configure the domain name.

XSCF> sethostname -d parishilton.com

You must apply the network configurations with the applynetwork command.

XSCF> applynetwork
The following network settings will be applied:
xscf#0 hostname :paris
DNS domain name :parishilton.com

interface : xscf#0-lan#0
status :up
IP address :162.10.10.11
netmask :255.255.0.0
route :

interface : xscf#0-lan#1
status :down
IP address :
netmask :
route :

Continue? [y|n] :yes

Please reset the XSCF by rebootxscf to apply the network settings.
Please confirm that the settings have been applied by executing
showhostname, shownetwork, showroute and shownameserver after rebooting
the XSCF.

Now reboot XSCF for the configuration to take effect.

XSCF> rebootxscf

After the reboot check the network settings.

XSCF> shownetwork -a
xscf#0-lan#0
Link encap:Ethernet HWaddr 00:0B:5D:E3:39:B4
inet addr:162.10.10.11 Bcast:162.10.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13160 errors:0 dropped:0 overruns:0 frame:0

TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1943545 (1.8 MiB) TX bytes:210 (210.0 B)
Base address:0xe000

xscf#0-lan#1
Link encap:Ethernet HWaddr 00:0B:5D:E3:39:B5
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Base address:0xc000

Enable ssh, it will require a reboot.

XSCF> setssh -c enable
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the ssh settings.

Enable telnet. You probably do not need telnet if ssh is enabled.

XSCF> settelnet -c enable
XSCF> showtelnet
Telnet status: enabled

It is much easier to configure and manage XSCF via https as you do not have to remember all the commands. I will show you how to enable https by creating a web server certificate, constructing a self-signed CA.

First generate the web server private key. Remember the passphrase you will need it in the next step.

XSCF> sethttps -c genserverkey
Enter passphrase:
Verifying – Enter passphrase:

Create the self-signed web server certificate by specifying the DN.

XSCF> sethttps -c selfsign CA Ontario Toronto CupidPost Technology Center andrew_lin@email.com
CA key and CA cert already exist. Do you still wish to update? [y|n] :y
Enter passphrase:
Verifying – Enter passphrase:

Now enable https.

XSCF> sethttps -c enable
Continue? [y|n] :y
Please reset the XSCF by rebootxscf to apply the https settings.

Reboot with the rebootxscf command,

XSCF> rebootxscf
The XSCF will be reset. Continue? [y|n] :y

After the reboot you can connect to the XSCF console by telnet, ssh or https.

Courtesy: http://www.gamescheat.ca/2009/12/step-by-step-configuration-of-the-xscf-console-for-the-sun-sparc-m3000-server/

Friday, May 20, 2011

RHEL : sysreport/sosreport from the rescue environment

Please follow the below kbase article on how to create sosreport from rescue mode

How do I generate a sysreport/sosreport from the rescue environment?
https://access.redhat.com/kb/docs/DOC-2873

LSOF for HP-UX

LSOF for HP-UX can be downloaded from -

http://hpux.connect.org.uk/hppd/hpux/Sysadmin/lsof-4.84/

Wednesday, May 4, 2011

nmiwatchdog, kdump

Configure: 1. NMI watchdog
https://access.redhat.com/kb/docs/DOC-4128

2. How do I configure kexec/kdump on Red Hat Enterprise Linux?
https://access.redhat.com/kb/docs/DOC-6039

3. How can I collect system information provided to Red Hat Support for analysis when system hangs?
https://access.redhat.com/kb/docs/DOC-35282

Friday, April 8, 2011

Linux : FTP Home directory not found

Set Boolean Policy

# getsebool -a | grep ftp (show ftp policy)

# setsebool -P allow_ftpd_full_access=on
# setsebool -P ftp_home_dir=on

Wednesday, March 16, 2011

List Sub directories in tree format

ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'

Tuesday, February 22, 2011

Powerpath issue on HP Unix

Recent Issue:

The server was ignited earlier using the OS image of a similar hardware model.
We had some performance issues and traced them to a PowerPath problem. We removed PowerPath and reinstalled it; this required a kernel rebuild and a system reboot, but the system was then unable to boot. Below is the error:


ERROR: phy_unit_init: Could not open file: /tmp/ign_configure/hw.info: No
such file or directory (errno = 2)
ERROR: read_hw_info failed

The configuration process has incurred an error, would you like
to push a shell for debugging purposes? (y/[n]):

Solution:
When the PowerPath software was uninstalled, it restored the pre-ignite image script to the startup files. This caused the server boot to fail while checking for Ignite configuration files in the /tmp directory.

The backup files are saved during post-installation when the server is ignited. In general these files should not be altered, but PowerPath package removal seems to behave differently.
The start scripts that were sourcing the Ignite files were removed, the hardware was rescanned, and the server was fully tested without PowerPath 5.1 installed.

After validation, we installed PowerPath 5.1:

learn:/sbin/rc2.d>swlist | grep -i emc
EMCpower HP.5.1.SP2_b113 PowerPath

After installation, PowerPath did not work as expected and behaved the same way as the earlier version:

learn:/sbin/rc2.d>sh S999emcp start
Unexpected error occured.
emcpmgr: internal library error (0xebad002)
Error: unable to update device configuration file(s)
No migrations found.

learn:/sbin/rc2.d>powermt display dev=all
Initialization error.

After a number of attempts at removing and reinstalling PowerPath, we finally found the main problem. It is actually a PowerPath bug that has existed since versions as old as 4.x. The /etc/inittab entry for PowerPath initialization, which runs BEFORE the /sbin/init.d/emcp script and loads the kernel modules, was missing on the host. /etc/inittab should have been updated during installation, but because of that bug it was not, and the PowerPath kernel modules could not load at boot.

The entry below was added to the /etc/inittab file, and after a reboot PowerPath worked fine.
pwr::sysinit:/sbin/emcpstartup /dev/console 2>&1 # PowerPath

We should have noticed that earlier, but found it at the end.

You can find the solution in Powerlink article emc93018.