
Monday, May 8, 2017

How to find the console IP address in HP-UX Servers

In this post, I will show you how to find the console IP address of an HP-UX system if it has been lost or forgotten. Here are the simple steps to get back the console IP address of an HP-UX system.

For Itanium systems running 11.31 only, we can use smh or the cprop command. Both methods work to recover the forgotten console IP address.

cprop is what the SMH web page uses to retrieve the IP address. Open the SMH web page for the host (hpx in this example) and navigate to:

Home -> System -> Management Processor

It will show you the IP address of the console port. You can also run the "cprop" command directly, as shown below.


hpx:/>/opt/propplus/bin/cprop -summary -c "Management Processor"

[Component]: Management Processor
[Table]: Management Processor
----------------------------------------------------
****************************************************
                   [Hash ID]: Management Processor:11a64d1ax41eed6da
                    [Status]: OK
                 [IPAddress]: 192.168.1.23
                       [URL]: https://192.168.1.23
                     [State]: Enabled
****************************************************

For PARISC boxes and Itanium models running 11.23 or earlier, you must use the serial port on the back of the box. The commands will be:

ctrl-a (to get the attention of the console interface)
<login> typically Admin and Admin
CM (to get to the command menu)
LC
MA (to exit the CM menu)
X  (to exit the GSP or MP menu)

Connect a serial cable from your laptop to the serial port of the HP-UX server. Once you get a console session over the serial connection (for example with a terminal emulator), run the commands above to get the IP address of the HP-UX console.

If the serial console port is not working, log in through the server's management (MP/GSP) port instead and recover the IP address from there.

Please leave a comment if you encounter any issue related to this post.

How to find the boot disk from HP-UX operating system

In this article, I will explain how to find which disk was used to boot the running HP-UX operating system. This is a bit tricky because it depends on the HP-UX version and on whether the system uses LVM or the less common VxVM layout.

For LVM disk layouts:

For 11.11 and earlier, use the below command to check which disk is in use.

# echo "boot_string/S" | adb -k /stand/vmunix /dev/kmem
    boot_string:
    boot_string:    disk(0/0/2/0.6.0.0.0.0.0;0)/stand/vmunix

For 11.23, there are different ways for PARISC versus IA64:

PARISC:

# echo "boot_string/S" | adb -o /stand/vmunix /dev/kmem
    boot_string:
    boot_string:    disk(1/0/0/3/0.6.0.0.0.0.0;0)/stand/vmunix

IA64 (Itanium/Integrity):

# echo "bootdev/x" | adb -n /stand/vmunix /dev/kmem
    bootdev:
        0x100001c

Now to find the actual path, you’ll have to match the 0x100001c value to a minor number in the /dev/disk directory. Compare only the last 6 digits of the number (00001c) to find the device file. Then by using lssf, you can decode the hardware path:

    # DSK=$(ll /dev/disk | awk '/00001c/{print $NF}')
    # echo $DSK
    disk11_p2

    # HWPATH=$(lssf /dev/disk/$DSK | awk '{print $(NF-1)}')
    # echo "$DSK path = $HWPATH"
    disk11_p2 path = 64000/0xfa00/0xa

You can also use ioscan -m dsf to map agile device file names to legacy (CTD) style.
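For example, a quick sketch of mapping a persistent DSF to its legacy device file with ioscan (the disk name and the output shown here are illustrative):

# ioscan -m dsf /dev/disk/disk11
Persistent DSF           Legacy DSF(s)
========================================
/dev/disk/disk11         /dev/dsk/c2t1d0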

For VxVM disk layouts:

# echo "raw_root/X" | adb -o /stand/vmunix /dev/kmem
     raw_root:
     raw_root:       0x3000002

This value is the minor number for the disk that was used to boot the current system. The minor number is found in the /dev/vx/dmp directory.

     # DSK=$(ll /dev/vx/dmp | awk '/000002/{print $NF}')
     # echo $DSK
    c2t1d0s2

     # HWPATH=$(lssf /dev/dsk/$DSK | awk '{print $(NF-1)}')
     # echo "$DSK path = $HWPATH"
     c2t1d0s2 path = 0/1/1/0.1.0

For completeness, I should mention that 11.31 will report the boot disk path in syslog.log (LVM or VxVM) like this:

vmunix: Boot device's HP-UX HW path is: 0/1/1/0.0x1.0x0

However, syslog.log is a catch-all for a lot of items and often needs to be truncated when it grows too large. As a result, it can’t be relied on to always contain the current boot disk.
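If the message is still present, a quick way to check is to grep for it; a minimal sketch using the default HP-UX syslog path:

# grep "Boot device" /var/adm/syslog/syslog.log
vmunix: Boot device's HP-UX HW path is: 0/1/1/0.0x1.0x0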

Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

Hello Friends,

In this post, I will explain how to resolve one of the most interesting issues on Linux servers: a kernel panic. This is the kind of issue Linux enthusiasts often run into while working on Unix troubleshooting.

As everyone knows, the kernel is the most important part of any operating system, and if it crashes, your system crashes.

Kernel panics often occur after you upgrade the server or upgrade packages on it. Below is one example of such a kernel panic.

Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Pid: 1, comm: swapper Not tainted 2.6.32-502.el6.x86_64 #1
Call Trace:
 [<ffffffff815292bc>] ? panic+0xa7/0x16f
 [<ffffffff81c2b432>] ? mount_block_root+0x216/0x2cb
 [<ffffffff81002930>] ? bstat+0x2b0/0x980
 [<ffffffff81c2b53d>] ? mount_root+0x56/0x5a
 [<ffffffff81c2b6b1>] ? prepare_namespace+0x170/0x1a9
 [<ffffffff81c2a92a>] ? kernel_init+0x2e1/0x2f7
 [<ffffffff8100c20a>] ? child_rip+0xa/0x20
 [<ffffffff81c2a649>] ? kernel_init+0x0/0x2f7
 [<ffffffff8100c200>] ? child_rip+0x0/0x20


Solution: This issue typically occurs after you upgrade your server using yum and the server then fails to boot with a kernel panic. It happens when the kernel version referenced by the bootloader was not updated properly.

Boot your Linux server into the GRUB menu and check the kernel and initrd parameters. Verify that both entries refer to the same kernel version. During an upgrade the initrd entry sometimes does not get updated, so you can manually correct the kernel version in that entry.

Once you have fixed the entry from the GRUB menu, boot the server and make the same change in the grub.conf file as well; otherwise the same kernel panic will occur on the next reboot or shutdown.

[root@localhost]# vi /etc/grub.conf

#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS 6 (2.6.32-504.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/mapper/vstorage-root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vstorage/root  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-504.el6.x86_64.img

In the above example, the default value 0 identifies the first title entry as the default, the kernel version in that title line is 2.6.32-504.el6.x86_64, and the initramfs image file is initramfs-2.6.32-504.el6.x86_64.img.

Before changing anything, take a backup of the grub.conf file. Then correct the kernel version in the kernel and initrd lines so that both reference the same version, and reboot the server. After the reboot your Linux machine should boot normally.
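To confirm which kernel and initramfs versions are actually installed before editing, you can list the files under /boot; a minimal check (file names are illustrative and the output is trimmed):

[root@localhost]# ls /boot | grep 2.6.32-504
initramfs-2.6.32-504.el6.x86_64.img
vmlinuz-2.6.32-504.el6.x86_64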

Hope your kernel panic issue is resolved after reading this post. Please let me know if you are still facing this issue and I will try to help you.

Friday, March 24, 2017

HP-UX Logical Volume Manager (LVM) Commands with an Example

In this post, you can get an idea of the HP-UX Logical Volume Manager commands with examples. As you know, LVM is used for disk management in the operating system; it allows you to manage physical disks and logical volumes.

Please find the below HP-UX LVM commands with an example.

1. Create a new volume group, logical volume and file system:

You can use the below commands on the HP-UX operating system to create a new volume group, logical volume and file system.

hpx:/>pvcreate /dev/rdsk/c2t1d0

To create a new volume group, we first need to create a physical volume, as shown in the command above.

hpx:/>mkdir /dev/vg01
hpx:/>mknod /dev/vg01/group c 64 0x010000

In the step above we create the volume group directory and its group device file.

hpx:/>vgcreate /dev/vg01 /dev/dsk/c2t1d0

After the volume group has been created successfully, we create a new logical volume as shown in the command below.

hpx:/>lvcreate -L 2048 -n vgvol1 /dev/vg01

hpx:/>newfs -F vxfs -o largefiles /dev/vg01/vgvol1

Using the above command we create a new file system. In the next step we create a directory and mount the newly created file system on it.

hpx:/>mkdir /backup
hpx:/>mount /dev/vg01/vgvol1 /backup

Once you have mounted the logical volume, you can verify the mount with the bdf or mount command.
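To make the mount persistent across reboots, you can also add an entry to /etc/fstab and verify with bdf; a minimal sketch (the mount options shown are common defaults, adjust them to your standards):

hpx:/>vi /etc/fstab
/dev/vg01/vgvol1 /backup vxfs delaylog 0 2

hpx:/>bdf /backup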

2. Create a striped file system:

In this example, we create a striped logical volume in the volume group using the command below.

hpx:/>lvcreate -i 2 -I 32 -L 48 -n vgvol1 /dev/vg01

-i  number of stripes
-I  stripe size in KB (32 KB in this example)
-L  size of the volume in MB
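You can then verify the stripe layout of the new logical volume with lvdisplay before creating a file system on it (output not shown here):

hpx:/>lvdisplay -v /dev/vg01/vgvol1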

3. HP-UX display boot information:

You can use the below command to display boot information.

hpx:/>lvlnboot -v /dev/vg00

Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/dsk/c2t0d0 (0/1/1/0.0.0) -- Boot Disk
        /dev/dsk/c2t1d0 (0/1/1/0.1.0) -- Boot Disk
Boot: lvol1     on:     /dev/dsk/c2t0d0
                        /dev/dsk/c2t1d0
Root: lvol3     on:     /dev/dsk/c2t0d0
                        /dev/dsk/c2t1d0
Swap: lvol2     on:     /dev/dsk/c2t0d0
                        /dev/dsk/c2t1d0
Dump: lvol2     on:     /dev/dsk/c2t0d0, 0

When you run the above command, you will see output like the above. From the boot information you can see that two disks are available for boot.

4. HP-UX display information for all disks in the system:

hpx:/> ioscan -funC disk
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
==============================================================
disk      0  0/0/2/0.0.0.0   sdisk    CLAIMED     DEVICE       TEAC    DV-28E-N
                            /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0
disk      1  0/1/1/0.0.0     sdisk    CLAIMED     DEVICE       HP 146 GMAX3147NC
                            /dev/dsk/c2t0d0   /dev/rdsk/c2t0d0
disk      2  0/1/1/0.1.0     sdisk    CLAIMED     DEVICE       HP 146 GMAX3147NC
                            /dev/dsk/c2t1d0   /dev/rdsk/c2t1d0

In the above output you can see all the disks available in the system.

5. HP-UX display dump devices:

hpx:/> lvlnboot -v

This command shows the boot information, in which you can also check the dump device names.

Wednesday, March 22, 2017

Solaris Processes Monitoring - prstat

In this post, I will explain which command is used to display Solaris zone process information.

Using the "prstat" command we can display Solaris zone process information. This command generates reports about processes and zones.

The prstat utility shows a summary of the processes that are currently using system resources. By default it samples every 5 seconds and reports the statistics for that period.

Display the zone process information:

[sun]# prstat -Z

 PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP
 18638 20159    2118M 1502M cpu43    10    0   0:00:44 2.4% oracle/7
 20927 24076    8784K 8136K cpu127    0    2   0:00:17 1.8% prstat/1
   897 24865     916M  512M sleep    59    0  27:52:32 0.7% java/95
 17511 26055     599M  285M sleep    59    0  35:08:33 0.6% java/115
 12540 26055     951M  341M sleep    59    0  31:00:01 0.5% java/101

 ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
     8     3427   73G   27G    21%  44:01:48 2.8% sunz01
     0      100  465M  132M   0.1% 507:26:46 2.6% global
     6     4056   86G   32G    25%  35:23:30 2.3% sunz02    
Total: 13382 processes, 36594 lwps, load averages: 9.11, 9.27, 9.47

Using above command "prstat -Z" we can monitor the server process utilization in every 5 second. This command is show you the global zone & their local zone process utilization only.

If you want the process utilization of a specific zone only, use the below command.

[sun]# prstat -z sun01

This output shows the process utilization of the specified zone only.

Note:

-Z Reports information about processes and zones.
-z Reports information about a particular zone.

You can use the above options to monitor the global zone as well as a specific zone; you can also pass a sampling interval and report count, as shown in the sketch below.
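For example, a minimal sketch that prints three zone-level reports at 5-second intervals and then exits:

[sun]# prstat -Z 5 3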

Sun Solaris zonecfg commands

In the last post, I explained the usage of the zoneadm command on Solaris servers. In this post we will look at the "zonecfg" command.

This is a very useful command, mostly used while configuring a new or installed zone as well as for removing a zone's configuration files.

1. Command for creating a Solaris zone:

Please find below the command to create a zone on a Sun Solaris server. The command must be run in the global zone as the root user.

[sun]#zonecfg -z <zone>

Example: [sun]#zonecfg -z sunz01

Once you run the above command, you enter the zonecfg interactive session, where you can add or edit the configuration for the zone.
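A minimal interactive session might look like the sketch below; the zonepath and autoboot values are assumptions for illustration only:

[sun]#zonecfg -z sunz01
zonecfg:sunz01> create
zonecfg:sunz01> set zonepath=/zones/sunz01
zonecfg:sunz01> set autoboot=true
zonecfg:sunz01> verify
zonecfg:sunz01> commit
zonecfg:sunz01> exit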

2. Command for deleting a Solaris zone:

Please find below the command to delete a Solaris zone from the global zone.

[sun]#zonecfg -z <zone> delete -F

Example: [sun]#zonecfg -z sunz01 delete -F

Using the above command you can delete the configuration file of the Solaris local zone.

Note: You need to shut down and uninstall the local zone before deleting its configuration files.

3. Command for display zone current configuration:

Please find below the command to display the current configuration of a local Solaris zone.

[sun]#zonecfg -z <zone> info

Example: [sun]#zonecfg -z sunz01 info

This output shows the zonename, zonepath, autoboot and other attributes of the Solaris zone.

4. Command for exporting a zone configuration file:

Please find below the command to export a Solaris zone configuration (creation) file.

[sun]#zonecfg -z <zone> export

Example: [sun]#zonecfg -z sunz01 export

Once the configuration has been exported, you can use this file to create another zone, as well as to restore this local zone if any problem occurs on it.
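For example, a hedged sketch that saves the configuration to a file and reuses it to create a new zone (the file path and the new zone name sunz02 are assumptions):

[sun]#zonecfg -z sunz01 export -f /var/tmp/sunz01.cfg
[sun]#zonecfg -z sunz02 -f /var/tmp/sunz01.cfg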

Thursday, March 16, 2017

Step by Step Configuration of NTP Server on HP-UX Server

In this post, I would like to explain how to configure NTP (Network Time Protocol) on an HP-UX server. In my recent posts you can find the NTP configuration for the Solaris and AIX platforms.

As you know, NTP (Network Time Protocol) is one of the oldest internet protocols still in use; it synchronizes computer clocks by distributing UTC (Coordinated Universal Time) over the network. It is basically used for time synchronization on Unix servers.

Step by Step Configuration of NTP Server on HP-UX:

1. In the first step we check the configuration file for the "xntpd" daemon. By default this is "/etc/rc.config.d/netdaemons".

# vi /etc/rc.config.d/netdaemons

######################################
# xntp configuration.  See xntpd(1m) #
######################################
#
#  Time synchronization daemon
#
# NTPDATE_SERVER: name of trusted timeserver to synchronize with at boot
# (default is rootserver for diskless clients)
# XNTPD:        Set to 1 to start xntpd (0 to not run xntpd)
# XNTPD_ARGS:  command line arguments for xntpd
#
# Also, see the /etc/ntp.conf and /etc/ntp.keys file for additional
# configuration.
#
export NTPDATE_SERVER=
export XNTPD=0
export XNTPD_ARGS=

These are the default entries in this file, so for the xntpd daemon we need to change the variables as shown below.

export NTPDATE_SERVER='ntp.in.pool.org'
export XNTPD=1
export XNTPD_ARGS=

Note: Replace the NTPDATE server name with your own NTP server.

2. For the NTP configuration, make sure the correct timezone is set in the /etc/TIMEZONE file.

hpx:/> cat /etc/TIMEZONE
TZ=IST-5:30
export TZ

You can edit the file in vi editor and change the time zone as per your location.

3. Now, we need to make some changes in NTP configuration files. 

hpx29:/> cat /etc/ntp.conf
#Configuration NTP des serveurs
server ntp.in.org.com
server ntpin.in.org.com

Replace the NTP server names with your own. In this post I use dummy server names.

4. After setting the NTP server names, restart the NTP service on the HP-UX operating system and verify the NTP configuration.

hpx:/> /sbin/init.d/xntpd restart

hpx:/> ntpq -p

It should now show the correct NTP server information, which you can match against the NTP server names configured in the step above.

Tuesday, March 14, 2017

How to configure NTP Server on AIX Operating system

In the last post, I explained NTP (Network Time Protocol) configuration for Solaris 10 & 11 servers. In this post, I will explain the same NTP server and client configuration for the AIX operating system.

As you know NTP ( Network time Protocol) is one of the oldest internet protocol still in use and it allows the synchronization of computer clocks distributing UTC (Coordinated Universal Time) over the network.

Step by Step Configuration of NTP:

1. In the initial step we must check the current NTP status on the AIX server. To do this, run the below command.

AIX:/>lssrc -ls xntpd
-----------------------------------------------
 Program name:    /usr/sbin/xntpd
 Version:         3
 Leap indicator:  00 (No leap second today.)
 Sys peer:        ntp.aix.in.com
 Sys stratum:     4
 Sys precision:   -18
 Debug/Tracing:   DISABLED
 Root distance:   0.014709
 Root dispersion: 0.066422
 Reference ID:    192.168.1.22
 Reference time:  dc721077.d3a8e000  Tue, Mar 14 2017  7:47:19.826
 Broadcast delay: 0.003906 (sec)
 Auth delay:      0.000122 (sec)
 System flags:    pll monitor filegen
 System uptime:   19248381 (sec)
 Clock stability: 0.000107 (sec)
 Clock frequency: 0.000000 (sec)
 Peer: ntp.aix.in.com
      flags: (configured)(sys peer)
      stratum:  3, version: 3
      our mode: client, his mode: server
 Peer: ntpuk.aix.in.com
      flags: (configured)(sys peer)
      stratum:  3, version: 3
      our mode: client, his mode: server
Subsystem         Group            PID          Status
xntpd            tcpip            8520514      active
------------------------------------------------------

You will see output like the above when you run this command to check the NTP status. On my AIX machine the sys peer shows a valid server (ntp.aix.in.com). If no valid NTP server is shown, correct it by adding a server line to /etc/ntp.conf and restarting the "xntpd" service.

Note: In this post I use dummy NTP server names instead of real ones for security reasons.

2. If your NTP server is not configured and the sys peer shows "insane", you need to add the server entries manually to the NTP configuration file.

AIX:/>vi /etc/ntp.conf

server ntp.aix.in.com
server ntpuk.aix.in.com

Once you have added these NTP server entries to the configuration file, restart the NTP service.

AIX:/>stopsrc -s xntpd
AIX:/>startsrc -s xntpd

Using the above commands we stop and start the "xntpd" service on the AIX operating system.

3. In this step, verify the status of the newly added NTP server again.

AIX:/>lssrc -ls xntpd

It can take some time because the synchronization process is running. Once synchronization is complete and you run the above command, you will see the NTP server entries as described in Step 1.

Step by Step configuration of NTP Client:

1. On the client machine, first verify whether you have a server suitable for synchronization. To do this, run the below command.

AIX:/>ntpdate -d ntp.aix.in.com
-----------------------------------------------------------
14 Mar 08:16:21 ntpdate[64356890]: 3.4y
transmit(192.168.1.22)
receive(192.168.1.22)
transmit(192.168.1.22)
receive(192.168.1.22)
transmit(192.168.1.22)
transmit(192.168.1.22)
transmit(192.168.1.22)
server 192.168.1.22, port 123
stratum 16, precision -6, leap 11, trust 000
refid [63.15.23.11], delay 0.03688, dispersion 24.00334
transmitted 4, in filter 4
reference time:      00000000.00000000  Thu, Feb  7 2036  7:28:16.000
originate timestamp: dc721745.3ff1b000  Tue, Mar 14 2017  8:16:21.249
transmit timestamp:  dc721746.3d08a000  Tue, Mar 14 2017  8:16:22.238
filter delay:  0.03688  0.05624  0.00000  0.00000
               0.00000  0.00000  0.00000  0.00000
filter offset: -0.00081 -0.00750 0.000000 0.000000
               0.000000 0.000000 0.000000 0.000000
delay 0.03688, dispersion 24.00334
offset -0.000812

14 Mar 08:16:23 ntpdate[64356890]: no server suitable for synchronization found
--------------------------------------------------------------------------

If you get the message "no server suitable for synchronization found", verify that xntpd is running on the server and that no firewalls are blocking port 123.

2. If no server is suitable for synchronization, specify the xntpd server in /etc/ntp.conf.

AIX:/>vi /etc/ntp.conf

server ntp.aix.in.com

Once you have added the NTP server entry to the client configuration file, restart the "xntpd" service again.

AIX:/>startsrc -s xntpd

3. If you want the xntpd service to start at boot time, uncomment the line below in the configuration file.

AIX:/>vi /etc/rc.tcpip

Uncomment the following line:

start /usr/sbin/xntpd "$src-running"

4. Now verify whether the NTP client machine has synchronized. Use the same command we used earlier to check the status.

AIX:/>lssrc -ls xntpd

This time, the sys peer on the NTP client machine should display the IP address or name of your "xntpd" server. Synchronization takes some time, so you may have to wait.

Friday, March 10, 2017

How to configure NTP server and client on Solaris 10 and Solaris 11

In this post, I will describe step by step how to configure an NTP server and client on Solaris 10 and Solaris 11. Network Time Protocol is an important part of any UNIX operating system. We will set up the NTP daemon on both versions, but before moving to the main point we will first understand the NTP mechanism.

As you know NTP ( Network time Protocol) is one of the oldest internet protocol still in use and it allows the synchronization of computer clocks distributing UTC (Coordinated Universal Time) over the network.

NTP Service on Solaris 10 and Solaris 11:

On Solaris 10 and Solaris 11 the NTP service is managed by SMF (Service Management Facility). The NTP daemon is configured through the SMF service svc:/network/ntp:default, and a bunch of sample ntp.conf files are shipped so you can quickly configure a machine as a client or as a server. Solaris 11 ships only with NTP v4; where NTP v4 is delivered on Solaris 10, its service is identified by the name ntp4. You can check the NTP status using the command below.

sun# svcs ntp

STATE  STIME   FMRI
online 10:14:23 svc:/network/ntp:default

From the above output you can see that the Network Time Protocol service is enabled and online on the server.

Steps for Configuring a NTP client:

If your machine is just a client, you can simply copy the /etc/inet/ntp.client file to /etc/inet/ntp.conf. It contains:

multicastclient 224.0.1.1

This is a passive configuration in which the host listens for NTP servers sending packets on the NTP multicast group 224.0.1.1. If your machine is on a LAN without an NTP server, no packets will be received, and in that case we need to point the host at public NTP servers.

In my case, I'm using the Indian pool in.pool.ntp.org and my configuration file contains:

server 2.in.pool.ntp.org
server 1.asia.pool.ntp.org
server 3.asia.pool.ntp.org

Normally NTP requires a poll period to elapse before starting synchronizing your clock. If you want NTP to start immediately, which you most probably will if you're configuring a desktop environment, you can take advantage of iburst keyword, introduced in NTP v. 4: it instructs NTP to start the synchronization almost right away.

server 2.in.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst

You must make sure the NTP implementation you are configuring supports the syntax you are using (iburst requires NTP v4).

Setting up the drift file:

The last thing remaining for the NTP setup on the client machine is the drift file location. On my machine it is:

driftfile /var/ntp/ntp.drift

After setting up the drift file configuration we restart the NTP service:

sun# svcadm restart svc:/network/ntp:default
sun# svcs svc:/network/ntp:default
STATE  STIME   FMRI
online 12:20:12 svc:/network/ntp:default 

Once the service is running, you can check which servers you're using with ntpq. Run the command below:

sun# ntpq -p

Setting up an NTP server:

Now that you have seen the NTP service start successfully, you'll probably want to set up all of your machines.

If you're in a LAN, you can setup an internal NTP server which will provide data to other clients on your LAN. As before, you can take inspiration from the server configuration file shipped with Solaris 10 or Solaris 11, /etc/inet/ntp.server.

After setting up the drift file and the clients you're going to serve, you can examine the other options and fine-tune them to your taste. Let's take a quick look at one of them:

server 127.127.XType.0
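Here XType selects a reference clock driver. As a minimal sketch, the local undisciplined clock (XType 1) can be configured as a low-priority fallback, fudged to a high stratum so that real network servers are always preferred:

server 127.127.1.0
fudge 127.127.1.0 stratum 10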

Now you have configured the NTP server properly. Please comment on the post if you have any suggestion.

Thursday, March 9, 2017

Solaris Package administration in Solaris 10

In this post, we will get some knowledge about package administration on the Solaris operating system. As you are aware, package administration on a Solaris server is quite different from other Unix operating systems; Solaris has its own architecture for it.

So, we will first discuss the commands generally used on a Solaris 10 server, which are listed below.

pkginfo- It displays all software package information.
pkgadd- It installs all software packages to the system.
pkgrm- It removes a package from the system.
pkgchk- It checks package installation state.
pkgtrans- It translates a package from one format to another (file system format or data stream).

Commands & Syntax for checking a package information:

To check package information on the Solaris 10 operating system, we use the "pkginfo" command. All the commands and syntax related to "pkginfo" are listed below.

  • Please use the below command to display information about installed software packages.

          sun# pkginfo | more

  • Please use the below command to view additional information.

          sun# pkginfo -l | more

  • Please use the below command to view information of a specific package.

          sun# pkginfo -l SUNWman

  • Please use the below command to find how many packages are currently installed.

          sun# pkginfo | wc -l

  • To list all installed software packages, please use the below command.

          sun# more /var/sadm/install/contents

The above commands and their syntax are used daily on the Solaris 10 platform.

Commands & Syntax for checking a package installation:

To check whether packages are installed correctly on the server, we use the "pkgchk" command. Please find examples below.

  • Please use the below command to check the contents & attributes of a currently installed package.

          sun# pkgchk SUNWpkgs

  • Please use the below command to list the all files contained in a software package.

          sun# pkgchk -v SUNWpkgs

  • Please use the below commands to find if the contents & attributes of a file have changed since it was installed with its software package.

          sun# pkgchk -p /etc/shadow  

  • Please use the below commands to list information about selected files that make up a package.

          sun# pkgchk -l -p /usr/bin/showrev

If "pkgchk" shows no output for a package, it means the package is installed and its contents and attributes check out on the server.

Commands & Syntax for adding a package software:

To add packages to the server, we use the "pkgadd" command. Please find below the commands and their syntax commonly used on the Solaris operating system.

  • Please use the below command to add a software package from DVD. First change to the DVD directory where the packages are located.
          sun# pkgadd -d . SUNWpkgs

Using the above command you can add packages from the DVD.
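You can also install from a data stream package file (such as one produced by pkgtrans, covered later in this post); a brief sketch, with the file path taken from the pkgtrans example below:

          sun# pkgadd -d /tmp/SUNWpkgs.pkg SUNWpkgs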

Commands & Syntax for removing a package software:

To remove packages from the server, we use the "pkgrm" command. Please find below the commands and their syntax.

  • Please use the below command to remove the software packages.

          sun# pkgrm SUNWpkgs

  • Please use the below command to remove a package from the spool directory.

          sun# pkgrm -s /export/pkg SUNWldam

Commands & Syntax for translating a packages format:

To translate a package from one format to another, we use the "pkgtrans" command.

  • Please use the below command to translate a package from file system format in /var/tmp to data stream format.

          sun# pkgtrans /var/tmp /tmp/SUNWpkgs.pkg SUNWpkgs

  • Please use the below command to create a data streamed package.

          sun# pkgtrans -s Product /var/tmp/stream.pkg SUNWpkgg SUNWpkgs

Using the above commands and syntax we can easily translate packages between file system format and data stream format. If you have any doubt regarding this post, please comment on it.

How to enable SAR (System Activity Reporter) on Solaris Server

In this post, you will find information about one of the most important monitoring tools on the Solaris operating system. SAR (System Activity Reporter) is used to troubleshoot performance issues on Sun Solaris servers.

Using SAR (System Activity Reporter) we can troubleshoot and monitor disk, memory or CPU performance issues on Solaris servers.

It is a widely used performance monitoring tool, but it also has some disadvantages: the SAR utility consumes a lot of disk space when generating reports, and /var file system usage can grow rapidly.

Below is the step by step method to enable SAR on the Solaris operating system.

Step by step procedure to enable SAR (System Activity Reporter):

1. In the first step we check the current status of the SAR service. To check this, use either of the commands below.

sun#svcs sar
disabled        Mar_9  svc:/system/sar:default
or
sun#svcs -a | grep -i sar
disabled        Mar_9  svc:/system/sar:default

As you can see, the current status of the SAR service is disabled. You can use either of the above commands to find the current service status.

2. As seen in the step above, the SAR service is disabled on the Sun Solaris system, so in this step we enable it.

sun#svcadm enable svc:/system/sar:default

Check the status of the service again with the command below.

sun# svcs svc:/system/sar:default
online        Mar_9  svc:/system/sar:default

3. Now in this step, we set up automatic data collection. Once the SAR service is enabled, the default scripts for the SAR utility are located as described below.

/usr/lib/sa/sa1: a shell script that collects and stores data in the binary file /var/adm/sa/sadd, where dd is the current day.

/usr/lib/sa/sa2: a shell script that generates a daily report in the file /var/adm/sa/sardd, where dd is the current day.

These scripts are normally used to collect data automatically from the Solaris server. If you need daily or weekly reports, add both scripts to the crontab file as described in the next step.

4. If you need SAR reports regularly, you need to add entries for the above scripts to the crontab file.

#crontab -e

With this command you can edit the existing crontab and add entries for the above scripts according to when you want the data collected and the reports generated.
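As a hedged sketch, the entries usually added to the sys user's crontab look roughly like this (hourly collection, more frequent samples during business hours, and a daily report at 18:05; adjust the schedule to your needs):

0 * * * 0-6 /usr/lib/sa/sa1
20,40 8-17 * * 1-5 /usr/lib/sa/sa1
5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A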

Please comment on the post, if you have any issue related to this SAR post.

Friday, March 3, 2017

NFS mount on Solaris 11 Non-Global zones

In this post, we will learn how to mount a folder from one non-global zone into another zone on the Solaris 11 operating system.

As you know, on a Linux server this is less difficult than on a Solaris server. Here I will take two local zones, "sun01" and "sun02". As an example, we will mount the folder "/export/backup" from the "sun01" local zone into the "sun02" zone at the "/project/export/data" location.

Step by Step method of NFS mount on Solaris 11:

1. In the first step we create the directory in the "sun02" zone where we want to mount the folder.

sun02#mkdir /project/export/data

2. In the second step, we configure the share. For this, log in to the global zone with root access and add an entry to the dfstab configuration file.

sun#vi /etc/dfs/dfstab

share -F nfs -o rw=sun02 /zones/sun01/root/export

With the above entry, we grant sun02 read/write access to the directory that will be mounted from the sun01 local zone.

3. In the next step, log in to the sun02 zone and mount the shared folder using the command below.

sun02#mount sun:/zones/sun01/root/export/backup /project/export/data

4. Once you run the above command, the folder is mounted from one local zone into the other temporarily. You can go to the directory and verify that the data listed under /export/backup now shows up in the sun02 directory.

5. In the last step, restart the NFS service (or re-share) on the global zone so the configuration changes take effect. Keep in mind that a mount made this way only lasts until the zone is rebooted; a sketch of applying the share and making the mount persistent follows.
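A hedged sketch of applying the share on the global zone without a reboot and making the mount persistent in sun02's /etc/vfstab (the vfstab field values are illustrative):

sun#shareall
sun#dfshares

sun02#vi /etc/vfstab
sun:/zones/sun01/root/export/backup  -  /project/export/data  nfs  -  yes  rw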

Please comment on the post if you have any issue with the NFS mount sharing process. I will try to resolve such issues as soon as possible.

Sunday, February 26, 2017

zone: error: net0: failed to create VNIC: operation not supported

In this post, I will discuss one of the most interesting errors I faced while booting a local zone on Solaris 11.3. The description of this issue is below.

Description of error:

sun# zoneadm -z sun01 boot

zone 'sun01': error: net0: failed to create VNIC: operation not supported

zoneadm: zone sun01: call to zoneadmd(1M) failed: zoneadmd(1M

I also tried to create and configure the VNIC directly on the Solaris 11.3 server, but it failed with the same error.

sun#dladm create-vnic -l net0 vnic01

dladm: vnic creation failed: operation not supported

If you are also facing such an error while booting a local zone on a Solaris 11 server, please use the solution below to resolve it.

Solution of error:

1. This error "failed to create VNIC: operation not supported" would normally come when there are not enough mac addresses to assign to the zone. So now we need to add alternate mac addresses to the network interface.So before adding the new mac address we will stop LDM.

sun#ldm list-domain

NAME            STATE     FLAGS  CONS   VCPU MEMORY  UTIL NORM UPTIME

primary         active    -n-cv- UART   8    8G      2.0% 2.0% 41d 20h 14m

0004fb0000060000ff1d3d8336112f6f active    -n---- 5001   50   64G     0.1% 0.1% 18h 23m

2. Now log in to the Solaris global zone and check whether net0 has additional MAC addresses. Use the command below to check the status.

sun# dladm show-phys -m

LINK               SLOT    ADDRESS           INUSE CLIENT

net0               primary 0:21:f6:d6:d3:e5  yes  net0

                   1       0:14:4f:f9:6d:8d  no   --

                   2       0:14:4f:fb:10:2b  no   --

                   3       0:14:4f:f9:41:d6  no   --

                   4       0:14:4f:f8:dd:c8  no   --

net1               primary 0:21:f6:51:be:4d  yes  net1
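One way to add alternate MAC addresses is with the ldm command on the control domain. This is only a rough sketch: the guest domain name (ldom01) and vnet name (vnet0) are assumptions, the domain usually has to be stopped first, and you should check the exact alt-mac-addrs syntax against your Oracle VM Server for SPARC version:

primary# ldm stop ldom01
primary# ldm set-vnet alt-mac-addrs=auto,auto,auto vnet0 ldom01
primary# ldm start ldom01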

3. After the alternate MAC addresses have been assigned, the zone will boot without any issue.

sun# zoneadm -z sun01 boot

I hope your issue has been resolved after reading my post. Please let me know if you are still facing any issue regarding this error.

How to Create VNIC and Assign a IP Address on Solaris 11

Hello Friends,

In my old post, I described how to create a new local Solaris zone in the global zone. As you know, every zone has its own networking, but we need to understand how it works and how it is configured. So in this post, I will explain how to create a virtual network on Solaris 11 before zone creation.

To create a virtual NIC and assign a fixed static IP address in Solaris 11, we first need to understand the basic difference between older versions of Solaris and Solaris 11.

In Solaris 10, physical network interfaces are named according to the NIC driver (e.g. bge, e1000g, nxge). From Solaris 11 onwards, these names are hidden from view and all interfaces are named net0, net1 ... netN.

Before moving on, we need to know which command shows how these generic names map to the physical interfaces. Using the command below you can check all the network interface details.

sun01# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      bge001
net1              Ethernet             up         1000   full      bge002

If you want to show all dladm-level devices, including VNICs and aggregation links, use the command below:

sun01# dladm show-link
LINK                CLASS     MTU    STATE    OVER
net0                phys      1500   up       --
net1                phys      1500   up       --

In Solaris 11, unlike Solaris 10 (e.g. e1000g1:2), you can give a meaningful description (such as net1/oracle_VIP) to every IP address on the system.

How to assign a new IP address to a NIC:-

1. Let us see how to assign an IP address to a physical interface. In the first step we check all the physical interfaces using the command below.

sun01# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      bge001
net1              Ethernet             up         1000   full      bge002

2. In this step we create a new IP interface on net1; you can use the command below for this.

sun01# ipadm create-ip net1

3. This is the main step: assign a static IP address to the newly created interface net1. Use the command below to assign the fixed IP.

sun01#ipadm create-addr -T static -a local=10.135.0.2/24 net1

You can change your IP address accordingly. 

4. Now we verify whether the IP address we assigned in the step above is configured.

sun01# ifconfig  net1
net1: flags=1000843 mtu 1500 index 7
        inet 10.135.0.2 netmask ffffff00 broadcast 10.135.0.255
        ether 0a:cB:12:8e:15:e2

As you can see in the output above, the new IP address is shown on the net1 interface successfully. Using the 4 steps above, you now know how to assign a static IP address on the Sun Solaris 11 operating system.

Now we move on to the Virtual Network Interface creation steps. You can create any number of VNICs on a single physical interface. These VNICs are treated like real physical interfaces and can be assigned to local zones with full access.

How to create a new VNIC using interface net2:-

1. In the initial step I run the same command to list all the physical interfaces available on the Solaris 11 server.

sun01# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      bge001
net1              Ethernet             up         1000   full      bge002
net2              Ethernet             unknown    1000   full      bge003

In the command output above, network interface "net2" is in an unknown state, so in the next step I will create a new VNIC over net2.

2. In this step we create a new VNIC over net2. In my case the VNIC name will be "vnic01"; to create it we run the command below.

sun01#dladm create-vnic  -l net2 vnic01

3. Now, we plumb the virtual interface and check its initial (unconfigured) state.

sun01# ipadm create-ip vnic01
sun01# ifconfig vnic01
vnic01: flags=1000842 mtu 1500 index 8
        inet 0.0.0.0 netmask 0
        ether 2c:18:10:Ce:1a:12

4. Now we assign the new IP address to the VNIC and verify whether the new IP for vnic01 is configured.

sun01# ipadm create-addr -T static -a local=10.135.0.3/24 vnic01
sun01# ifconfig vnic01
vnic01: flags=1000843 mtu 1500 index 8
        inet 10.135.0.3 netmask ffffff00 broadcast 10.135.0.255
        ether 2c:18:10:Ce:1a:12

5. In the final step we run snoop on the VNIC we have just created. Through snooping you can verify whether vnic01 is working fine.

sun01# snoop -d vnic01

In the same way you can create multiple VNICs and assign new IP addresses to them. I hope you got some idea about the Solaris 11 networking part after reading this post. Please let me know if you have any doubt regarding it.

Thursday, February 23, 2017

Sun Solaris File System Management

If you are working on the Sun Solaris operating system and want to grow further on it, you will definitely like this post, because in it I explain one of the most important topics: file system management.

I will describe the file system commands generally used on all Solaris versions. Using these commands you will become more familiar with file system management on Solaris servers.

How to create a new file system on Solaris Server:-

We use two types of file systems (UFS & ZFS) on the Solaris operating system. In this example we take the UFS file system; information on ZFS file system creation can be found in my older post.

  • Create a new file system-
          sun01#newfs /dev/rdsk/c0d0s1
  • View minfree value-
          sun01#fstyp -v /dev/rdsk/c0d0s1 | head
  • Set minfree value for a new file system-
          sun01#newfs -m 2 /dev/dsk/c0d0s1
  • Change minfree value on an existing file system-
          sun01#tunefs -m 1 /dev/rdsk/c0d0s1

In the example above, the disk slice used is c0d0s1.
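To use the new file system, mount it and optionally add it to /etc/vfstab so it mounts at boot; a minimal sketch (the /data mount point is an assumption):

sun01#mkdir /data
sun01#mount /dev/dsk/c0d0s1 /data

/etc/vfstab entry:
/dev/dsk/c0d0s1  /dev/rdsk/c0d0s1  /data  ufs  2  yes  -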

How to Monitor File System Usage on a Solaris Server:-

On a Sun Solaris server, you must be aware of file system capacity. You can check file system usage using several commands.
  • Display the capacity of file systems in readable format-
          sun01#df -h
  • Display the disk allocation size in Kbytes-
         sun01#df -k
  • Display the available space on a device or disk-
         sun01#df -k /dev/dsk/c0d0s1
  • Display the disk usage in readable format-
          sun01#du -h /home
  • Display the disk usage including files-
          sun01#du -ak /home
  • Display the disk usage in summary format-
          sun01#du -sk /home

How to Check or Repair a File System on a Solaris Server:-

We can check and repair file systems on a Sun Solaris server using the "fsck" command, but make sure you never run fsck on a mounted file system. Unmount the file system first, then run the check or repair.

  • Check or repair an unmounted filesystem-
           sun01#fsck /dev/rdsk/c0d0s1
  • Check or repair using the mount point directory-
          sun01#fsck /export/home 
  • To use a backup superblock number on Solaris server-
          sun01#fsck -o b=32 /dev/rdsk/c0d0s1 
  • To use an alternative superblock number-
          sun01#fsck -o b=518432 /dev/rdsk/c0d0s1

Hope you like this post. Please comment on the post if you are facing any issue; I will try to resolve it as soon as possible.

How to change a disk in SVM Solaris volume manager

Hope you are doing well. This post explains how to change a disk in SVM (Solaris Volume Manager), assuming the disks are mirrored (RAID 1) using SVM. Solaris Volume Manager is basically used for creating, modifying and partitioning different RAID configurations.

In this post I will use my SPARC server. The Sun SPARC server has 2 hard disks: c0t0d0 and c0t1d0. We will assume c0t0d0 has failed and needs to be replaced.

Step by Step Method as described below:

1. In the initial step, we find which hard disk is down or faulty. To check this we use the "format" command.

sun01# format
       0. c0t0d0 <__drive type unknown__>
          /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
       1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0

If you see the "format" command output, we have two disk "c0t0d0,c0t1d0". Disk "c0t0d0" is in faulty state and it's down, You can see the drive type is in unknown state.

2. Running the "metastat" command will show the submirrors on the failed disk with a status of "Needs maintenance".

sun01# metastat    # submirrors on c0t0d0 will show "Needs maintenance"

3. Next, check the state database replicas with "metadb", delete the replicas on the failing disk, and confirm that replicas remain on the other disk.

sun01# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0t0d0s7
     a        u         8208            8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c0t1d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t1d0s7

sun01# metadb -d c0t0d0s7
sun01# metadb

The "metadb -d" command deletes the state database replicas on the failing disk, and the final "metadb" confirms that replicas still exist on the surviving disk.

4. Now we unconfigure the faulty disk. This is the main step of this post, so be careful while running the commands below.

sun01# cfgadm -al
sun01# cfgadm -f -c unconfigure c0::dsk/c0t0d0
sun01# cfgadm -al

Using the cfgadm command we unconfigure the faulty c0t0d0 hard disk so that we can replace it in the next step.

5. Now you can physically replace the faulty disk with a new one and configure it again under the same name.

sun01# cfgadm -c configure c0::dsk/c0t0d0
sun01# cfgadm -al

6. In this step we duplicate the partitioning scheme of the surviving disk onto the new disk and recreate the state database replicas on it.

sun01# prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
sun01# metadb -a -f -c2 /dev/dsk/c0t0d0s7

7. In the second to last step, run the commands below to replace the failing SVM submirrors and resynchronize them.

sun01# metastat
sun01# metareplace -ef d4 c0t0d0s4
sun01# metareplace -ef d3 c0t0d0s3
sun01# metareplace -ef d1 c0t0d0s1
sun01# metareplace -ef d0 c0t0d0s0
sun01# metareplace -ef d5 c0t0d0s5
sun01# metareplace -ef d6 c0t0d0s6
sun01# metasync d0
sun01# metasync d1
sun01# metasync d3
sun01# metasync d4
sun01# metasync d5
sun01# metasync d6
sun01# metasync d7

8. In the final step, make the new disk bootable so that the operating system can boot from the mirror disk.

sun01# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

This step makes the disk bootable. I hope this post is useful for you. Please comment on the post if you have any issue and I will try to get back to you with an answer.

How to Remove a Non-Global Zone from Solaris Server

In this post, I will explain to you how to remove a Non-Global Zone from Solaris Server.

As you are aware, non-global zones are hosted on the global zone in the Solaris operating system. You can check the non-global zone list using the "zoneadm" command; it shows the running and installed zones in the global zone.

Step by Step Method of removal a Non-Global Zone:

First of all, check the non-global zone list to see which zones are running on the server.

global# zoneadm list -iv

You will see a display that is similar to the following:

  ID  NAME      STATUS    PATH            BRAND     IP
   0  global    running   /               solaris   shared
   1  sun01     running   /zones/sun01    solaris   shared

In the above command output you can see the non-global zone "sun01" is running; this is the zone we need to remove from the Solaris server.

Now, we need to shut down the zone we want to delete. We can shut down the non-global zone using one of the commands below.
--------------------------------------------
global#zoneadm -z sun01 halt
or
global#zoneadm -z sun01 shutdown
or
global#zlogin sun01 shutdown
-------------------------------------------
In the next step, once your non-global zone is shut down, you need to uninstall it. You can use the command below to uninstall the non-global zone.

global#zoneadm -z sun01 uninstall

With the above command, the non-global zone "sun01" is uninstalled successfully.

In the last step, delete the dataset and configuration files of the non-global zone "sun01" from the global zone.

global#zonecfg -z sun01 delete

With the above command, all configuration files related to this non-global zone are deleted successfully. Now you can remove the folder (zonepath) related to this zone.
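For example, a hedged cleanup sketch using the zonepath shown earlier in this post (double-check the path before deleting anything):

global#rm -rf /zones/sun01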

So, using the above method we can remove a non-global zone from the global zone on the Solaris operating system. Please let me know if you face any issue while using this process.

Monday, February 20, 2017

How to Increase or extend the size of a Linux LVM by adding a new hard disk

Hello Friends,

This post covers how to increase the disk space of a VMware virtual machine running Linux that uses the Logical Volume Manager (LVM). First we add a new disk to the virtual machine and then extend the original LVM over this additional space.

There are a number of different ways to increase disk space, but here I describe a simple, step by step method. Using this method I am sure you will be able to increase the space easily.

1. First of all, before adding any hard disk, run the command below to check how much space is currently left in the volume group.

#vgdisplay 

The vgdisplay output (not shown here) lists the volume group details; in my case the volume group name is "rootvg".

2. Now run the command below to see which disk is currently used by the existing LVM.

#fdisk -l

The fdisk -l output (not shown here) shows that /dev/sda2 is the partition currently in use by the existing LVM. Now you can add the new hard disk to the server.

3. Once you have added the new hard disk to the server to increase the size of the Linux LVM, let's assume the new disk is labelled /dev/sdb. In this step we need to partition the new hard disk so we can use it.

#fdisk /dev/sdb

It will prompt us for the next action:

root@localhost:~# fdisk /dev/sdb
Command (m for help): n

Select "n" to add a new partition. Once we select "n" for a new partition, the screen below is shown.

Command action
   e   extended
   p   primary partition (1-4)p

We will select the "p" for primary partition so we will add new had disk /dev/sdb as a primary partition.
----------------------------------------------------------------------------------------
Partition number (1-4): 1

First cylinder (1-2610, default 1): "enter"
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): "enter"
Using default value 2610
----------------------------------------------------------------------------------------
In the screen above, we select 1 as the primary partition number and accept the default cylinder values.

't' is selected to change the partition's system ID; in this case partition 1 is selected automatically as it is currently our only partition.
----------------------------------------------------------------------------------------
Command (m for help): t
Selected partition 1
----------------------------------------------------------------------------------------
The hex code '8e' is entered as this is the code for a Linux LVM, which is what we want this partition to be, as we will be joining it with the original Linux LVM which currently uses /dev/sda2.
----------------------------------------------------------------------------------------
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
----------------------------------------------------------------------------------------
'w' is used to write the table to disk and exit; all changes made will be saved and fdisk will exit.
----------------------------------------------------------------------------------------
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
----------------------------------------------------------------------------------------

By running "fdisk -l" you will now see /dev/sdb1 listed; this is the new partition created on our newly added /dev/sdb disk.

4. Now we create a physical volume on the newly added partition /dev/sdb1. For physical volume creation we use the "pvcreate" command.

#pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.

In the above command output, you can see the /dev/sdb1 physical volume has been created.

5. Now, the most important step of this post: using the physical volume you can extend an existing volume group or create a new one.

If you want to extend an existing volume group with the new physical volume, use the command and method below.

#vgextend test /dev/sdb1

In my case the VG name is "test", so I extend the existing VG "test".

If you want to create a new volume group and add the physical volume to it, use the command and method below.

#vgcreate rootvg /dev/sdb1

In this case my new VG name is "rootvg".

So, as per this post, you can add a new hard disk to an existing volume group or to a newly created volume group. Using the added space in the volume group you can then create new logical volumes or extend existing ones, as sketched below.
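As a hedged sketch of the final resize step, assuming the logical volume is /dev/test/lv_data, it holds an ext4 file system, and you want to grow it by 10 GB (adapt the names and sizes to your setup):

#lvextend -L +10G /dev/test/lv_data
#resize2fs /dev/test/lv_data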