Siddharth Rana, https://www.linuxtechi.com (feed last built Sat, 18 Jan 2020 14:47:22 +0000)

Xen hypervisor: Assign Virtual Disk to a Virtual Machine
https://www.linuxtechi.com/multipath-virtual-disks-on-xen-hypervisors/ (Sun, 20 Jul 2014 06:54:39 +0000)

In our daily admin tasks we often need to assign virtual disks, or additional disks, to virtual machines running on the Xen hypervisor.

The Xen hypervisor provides a set of "xm" commands that can be used to manage Xen VMs.

As we know, the xm block-attach command can attach a new virtual disk to a Xen VM. We have a similar situation here: we need to present a new disk from SAN storage to a VM, either as a standalone disk or as extra space appended to an existing filesystem (via pvcreate, vgextend, lvextend and resize2fs).

Prerequisites: If you want to add disks from SAN storage LUNs, make sure the Xen hypervisor has SAN switch connections and HBAs attached. You can check the HBAs with the command below.

# lspci | grep -i HBA  

The above command lists the model and type of each HBA on the system. The multipath package should also be installed (it provides the configuration file multipath.conf), and the multipathd service should be enabled and running:

# chkconfig multipathd on

# /etc/init.d/multipathd start

Method:

The steps below show how to scan for and list the LUNs on the Xen hypervisor.

Step 1: To request SAN LUNs you have to provide the WWPNs of the HBAs attached to the system; the storage team uses these WWPNs to zone the storage to the host.

To get the WWPNs:

# cat /sys/class/fc_host/host?/port_name    (where "?" is the HBA host number)
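The single-character wildcard above only matches hosts 0-9; a small loop (a sketch, assuming the standard sysfs layout) prints the WWPN of every FC host, however many there are:

```shell
#!/bin/sh
# Print the WWPN of every Fibre Channel HBA port found in sysfs.
# On a machine with no FC HBAs the loop simply prints nothing.
for host in /sys/class/fc_host/host*; do
    [ -e "$host/port_name" ] || continue   # glob did not match anything
    printf '%s: %s\n' "${host##*/}" "$(cat "$host/port_name")"
done
```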

Step 2: Once the SAN LUNs are provisioned, use the commands below to scan for them on the Xen hypervisor:

# echo "- - -" > /sys/class/scsi_host/host0/scan

# echo "- - -" > /sys/class/scsi_host/host1/scan

# echo "- - -" > /sys/class/scsi_host/host2/scan
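The three echo commands above can be generalized to rescan every SCSI host present (a sketch; writing to the scan files requires root):

```shell
#!/bin/sh
# Send a "rescan everything" (channel, target, LUN wildcards) to each
# SCSI host. Non-writable scan files (i.e. not running as root) are skipped.
for scan in /sys/class/scsi_host/host*/scan; do
    [ -w "$scan" ] || continue
    echo "- - -" > "$scan"
done
```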

Step 3: Once the scan completes (it takes only a few seconds), list the LUNs with the command below:

# multipath -ll 

This lists the newly assigned LUNs as well as any existing ones.

Note: To name the LUNs according to our requirements, we can edit the multipath.conf file.

Then reload the multipath service: "/etc/init.d/multipathd reload". The LUNs will now be listed under their new names and will also appear under /dev/mapper with those names.
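For example, a multipaths section like the one below assigns a friendly name (the WWID shown is hypothetical; take yours from the output of multipath -ll); after a reload the device appears as /dev/mapper/oradata01:

```
# /etc/multipath.conf
multipaths {
    multipath {
        wwid  360060e80166c1d0000016c1d00001234   # hypothetical WWID
        alias oradata01
    }
}
```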

Step 4: Now attach the LUN to the VM:

Find the VM name and domain id with:

# xm list

Choose the domain name or id and substitute it for domain-name below:

# xm block-attach domain-name/Virtualmachinename   phy:/dev/mapper/lun_name  xvde w

where xvd<?> is the next available device letter on the VM (xvde here).

Step 5: Once the above command has run, append an entry to the VM's configuration file so that the disk persists across VM reboots:

`phy:/dev/mapper/lun_name,w`
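In practice this means extending the disk list in the VM's Xen configuration file (a sketch; the file path and the existing root-disk entry shown are assumptions, keep whatever your config already has):

```
# /etc/xen/<vm-name>.cfg
disk = [ 'phy:/dev/mapper/rootlun,xvda,w',
         'phy:/dev/mapper/lun_name,xvde,w' ]
```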

So far we have worked on the Xen hypervisor; now log in to the virtual machine and check whether dmesg and /var/log/messages show the new disk.

Check the new disk with fdisk -l, then follow the steps below:

1) Create a partition on the newly attached virtual disk and format it with ext3 or ext4 as required.

# fdisk /dev/xvde      ( make partition 1, i.e. /dev/xvde1 )

Now format the partition:  # mkfs.ext3 /dev/xvde1

2) Once formatting completes, add an entry to /etc/fstab to mount the new drive, or make the disk part of an existing LVM volume: run pvcreate, then vgextend, then lvextend followed by resize2fs.
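The LVM route can be sketched as a small helper (a sketch, not a drop-in script: the example names vg_data and lv_data are hypothetical, and the commands must run as root inside the VM):

```shell
#!/bin/sh
# Grow an existing ext3/ext4 filesystem on LVM using the new disk.
grow_fs() {
    disk=$1   # e.g. /dev/xvde1
    vg=$2     # e.g. vg_data (hypothetical name)
    lv=$3     # e.g. lv_data (hypothetical name)
    pvcreate "$disk"                      # initialize the partition for LVM
    vgextend "$vg" "$disk"                # add it to the volume group
    lvextend -l +100%FREE "/dev/$vg/$lv"  # grow the LV into the new space
    resize2fs "/dev/$vg/$lv"              # grow the filesystem to match
}
# Usage (as root): grow_fs /dev/xvde1 vg_data lv_data
```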

Please feel free to leave a comment if you have any queries or suggestions.

How to release space utilized by .nfs files under NFS
https://www.linuxtechi.com/space-utilization-by-nfs-entries-under-nfs/ (Wed, 23 Apr 2014 02:09:23 +0000)

On Linux/UNIX systems, if you delete a file that a currently running process still has open, the file isn't really removed. Once the process closes the file, the OS removes the file handle and frees the disk blocks. This works slightly differently when the open-but-removed file lives on an NFS-mounted filesystem. Since the process that has the file open runs on one machine (such as a workstation in location A) while the file lives on the file server, the two machines need some way to exchange information about the file. NFS does this with the .nfsNNNNNN files. If you try to remove one of these files while the underlying file is still open, it simply reappears under a different number. So, to remove the file completely, we must kill the process that is currently using it.

Note: These .nfsNNNN files are created by the NFS client for its own bookkeeping. The client uses them to keep track of files that should be deleted once the process holding them open closes them. The .nfsNNNN mechanism goes away in NFSv4.

You can list these files by running 'ls -lah' in the directory where the NFS share is mounted; to release the space, the process that has them open must be killed.

To find out which process has a given file open, use the lsof command:

$ lsof | grep -i .nfs1234

Example :

 $ echo testfile > foo
 $ tail -f foo
 testfile
 ^Z
 Suspended
 $ rm foo
 $ ls -A
 .nfsC13B
 $ rm .nfsC13B
 $ ls -A
 .nfsC13B

$ lsof .nfsC13B
 COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
 tail    1182 jack    0r  VREG  186,6        5 1200718 .nfsC13B

In the example above, we create a file named "foo" in an NFS-mounted directory, keep it open with tail -f (suspended with Ctrl-Z), and then delete it with rm. Running "ls -A" shows the resulting .nfsNNNN file, and when we try to delete that .nfsNNNN file directly, the NFS client simply recreates it.

So once you find a .nfsNNNN file and want it gone permanently, kill the process that has the underlying file open; the .nfs file then disappears on its own. In the example above, killing the tail process (PID 1182) makes .nfsC13B disappear.

To kill a process on Linux, use the kill command; to kill PID 1182, execute:

$ sudo kill -9 1182
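The find-the-holder step above can be run over every stale file at once with a short loop (a sketch; it assumes lsof is installed and that it is run from the NFS-mounted directory). Review the output before killing anything:

```shell
#!/bin/sh
# List the PIDs still holding each stale .nfs* file in the current directory.
for f in .nfs*; do
    [ -e "$f" ] || continue                 # no .nfs files present
    pids=$(lsof -t -- "$f" 2>/dev/null)     # -t prints bare PIDs
    [ -n "$pids" ] && echo "$f held open by PID(s): $pids"
done
```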
LVM: A good way to utilize disk space
https://www.linuxtechi.com/lvm-good-way-to-utilize-disks-space/ (Sun, 06 Apr 2014 04:45:31 +0000)

Scenario:

Let's say you have 5 disks of 20 GB each, and you need to create filesystems of 25 GB and 35 GB, leaving the rest of the space unassigned for future needs.

This is not directly possible, since each disk is only 20 GB and none of them alone can hold a 25 GB or 35 GB partition. So we will use LVM (Logical Volume Manager): merge all the disks into one large pool (a volume group) and then allocate space from that pool.

This is exactly what LVM is for: a number of small disks are put into a pool that can then be carved up as needed. Another advantage is that if the pool fills up, we can add more disks later (pvcreate, then vgextend, then lvextend) to grow the volume group and the logical volumes.

Merging the five disks gives a pool of about 100 GB, out of which we can allocate the 60 GB (25 GB + 35 GB) we need.

Logical steps to create LVM partitions:

Step 1: Create physical volumes

This initializes the disks that will later be merged into one big pool.

# pvcreate <disk-name> <disk-name>

Step 2: Create a volume group

This is the pool that holds the combined disk space, which is later assigned to logical volumes to create partitions of varying sizes.

# vgcreate <volume-group-name> <disks to include in the volume group>

Step 3: Create a logical volume

# lvcreate -L <size> -n <lv-name> <volume-group-name>

A device of the requested size is created at:

/dev/vg_name/lv_name

You can now format it with an ext3/ext4 filesystem to make it writable and mountable.
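Applied to the opening scenario (five 20 GB disks, 25 GB and 35 GB filesystems), the three steps look like this; the disk names /dev/sdb through /dev/sdf and the VG/LV names are hypothetical, so adjust them to your system:

```shell
#!/bin/sh
# Pool five 20 GB disks and carve out 25 GB and 35 GB logical volumes.
setup_pool() {
    disks="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"
    pvcreate $disks                      # Step 1: physical volumes
    vgcreate vg_pool $disks              # Step 2: ~100 GB volume group
    lvcreate -L 25G -n lv_app  vg_pool   # Step 3: the two logical volumes
    lvcreate -L 35G -n lv_data vg_pool
    mkfs.ext4 /dev/vg_pool/lv_app        # format, then mount as needed
    mkfs.ext4 /dev/vg_pool/lv_data
}
# Run as root: setup_pool
```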

Additional notes: You can rename VGs and LVs easily, but make sure the logical volumes are unmounted before these operations.

# vgrename <old_vg_name> <new_vg_name>
# lvrename <vg_name> <old_lv_name> <new_lv_name>

We will keep posting additional docs on LVM.

Read also: How to extend or grow an LVM partition in Linux with the lvextend command

Fixing LVM I/O Errors
https://www.linuxtechi.com/fixing-lvm-io-errors/ (Tue, 11 Mar 2014 02:49:21 +0000)

Situation:

Most of us have faced the errors shown below during system admin activities; they are mainly related to removable storage media used on Unix servers.

The usual cause is removing the disk/LUN without a clean shutdown/unmount, or detaching disks that are still backing logical volumes.

/dev/sdf: read failed after 0 of 4096 at 0: Input/output error

/dev/sdf: read failed after 0 of 4096 at 3298534817792: Input/output error

/dev/sdf: read failed after 0 of 4096 at 3298534875136: Input/output error

/dev/sdf: read failed after 0 of 4096 at 4096: Input/output error

/dev/sdk: read failed after 0 of 4096 at 0: Input/output error

/dev/sdk: read failed after 0 of 4096 at 6442385408: Input/output error

/dev/sdk: read failed after 0 of 4096 at 6442442752: Input/output error

/dev/sdk: read failed after 0 of 4096 at 4096: Input/output error

Solution:

1) Check which volume group has the issue by running the "vgscan" command.

2) Find the logical volumes attached to that volume group.

3) Deactivate the logical volumes:

# lvchange -an <lv-name>

4) Deactivate the volume group:

# vgchange -an <vg-name>

5) Scan the volume groups again using "vgscan".

6) Now activate the volume group:

# vgchange -ay <volume-group-name>

7) Run "lvscan"; the errors should be gone now.

8) Now activate the logical volumes:

# lvchange -ay <lv-name>

Note: If you want to reuse a device that was removed uncleanly and it still reports errors after being reattached, detach the device manually for a while, attach it again, and then follow the steps above.
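The whole sequence can be sketched as one helper (a sketch; run it as root and substitute your own VG name; note that vgchange -an/-ay also deactivates and reactivates the LVs inside the VG, covering the lvchange steps):

```shell
#!/bin/sh
# Deactivate and reactivate a volume group to clear stale device
# references after an unclean disk removal.
lvm_reactivate() {
    vg=$1
    vgchange -an "$vg"     # steps 3-4: deactivate the LVs and the VG
    vgscan                 # step 5: rescan volume groups
    vgchange -ay "$vg"     # steps 6 and 8: reactivate the VG and its LVs
    lvscan                 # step 7: the errors should be gone
}
# Usage (as root): lvm_reactivate <vg-name>
```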
