
środa, 8 czerwca 2011

LVM stands for Logical Volume Manager; it is a solution that allows you to dynamically manage volume capacity on a Linux machine.
RAID technology is used to provide fault tolerance and to increase the speed of disk I/O operations.

When you prepare a standard RHEL5 server installation and have two disks for operating system storage, you usually use LVM + RAID1.
If you have only one disk, you usually use LVM alone.

LVM

A technical explanation of this term is here; please read that link carefully before you continue with this post, as it explains LVM briefly.
Before LVM, there was no easy way to increase or reduce the size of a partition after Linux was installed. With LVM2 you can even create read-write snapshots, but as that is not part of the current exam requirements, this post won't be addressing that feature.

For example, if you find that you have extra space on the /home directory partition and need more space on your /var directory partition for log files, LVM will let you reallocate the space. Alternatively, if you are managing a server on a growing network, new users will be common. You may reach the point at which you need more room on your /home directory partition. With LVM, you can add a new physical disk and allocate its storage capacity to an existing /home directory partition.
IMPORTANT !!!
While LVM can be an important tool to manage partitions, it does not by itself provide redundancy. Do not use it as a substitute for RAID. However, you can use LVM in concert with a properly configured RAID array.


Creating a Physical Volume

The first step in setting up LVM is to start with a physical disk. If you have a freshly installed hard disk, you can set up a physical volume (PV) on the entire disk. For example, if that hard disk is attached as the third PATA hard disk (/dev/hdc), and you haven't configured partitions on the drive, you'd run the following command:
# pvcreate /dev/hdc

Alternatively, you can set up a new PV on a properly formatted partition. For example, assume that you've added a new partition, /dev/hdc2. You could then use fdisk or parted to set it to the Linux LVM partition type. In fdisk, this corresponds to partition type 8e; in parted, it corresponds to lvm. The sequence of commands would look similar to the following:
# fdisk /dev/hdc

Command (m for help) : t
Partition number (1-4)
2
Partition ID (L to list options): 8e
Command (m for help) : w

Once your partition is ready, you can create a new PV on that partition (/dev/hdc2) with the following command:
# pvcreate /dev/hdc2
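
To confirm that the PV was created, you can review it with pvdisplay (shown here for the same partition; adjust the device name to your own setup):
# pvdisplay /dev/hdc2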


Creating a Volume Group

Once you have two or more PVs, you can create a volume group (VG). In the following command, substitute the name of your choice for volumegroup:
# vgcreate volumegroup /dev/hdc2 /dev/hdd2

You can add more room to any VG. Assume there's an existing /dev/sda1 partition, using a Linux LVM type, and the pvcreate command has been applied to that partition. You can then add that partition to an existing VG with the following command:
# vgextend volumegroup /dev/sda1
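
A quick way to review which PVs now belong to which VG is with the standard LVM2 reporting commands (a sketch; the output will vary by system):
# pvs
# vgs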


Creating a Logical Volume

However, a new VG doesn't help you unless you can mount a filesystem on it. So you need to create a logical volume (LV) for this purpose. The following command creates an LV. You can add as many chunks of disk space (a.k.a. physical extents, or PEs) as you need.
# lvcreate -l number_of_PEs volumegroup -n logvol

This creates a device named /dev/volumegroup/logvol. You can format this device as if it were a regular disk partition, and then mount the directory of your choice on your new logical volume.
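
For example, a minimal sketch of formatting and mounting the new LV (the /data mount point is just an illustration):
# mkfs.ext3 /dev/volumegroup/logvol
# mkdir -p /data
# mount /dev/volumegroup/logvol /data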

But the lvcreate -l command shown earlier isn't very useful if you don't know how much space is associated with each PE. You could use trial and error, using the df command to check the size of the volume after you've mounted a directory on it. Alternatively, you can use the -L switch to set a size in MB. For example, the following command creates an LV named flex of 200MB:
# lvcreate -L 200M volumegroup -n flex
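
If you'd rather allocate by PEs with the -l switch, you can first check how large each PE is in the VG (the name volumegroup is the one used above):
# vgdisplay volumegroup | grep "PE Size"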


Using a Logical Volume

But that's not the last step. You may not get full credit for your work on the exam unless the directory gets mounted on the logical volume when you reboot your Linux computer. Based on a standard RHEL /etc/fstab configuration file, one option is to add the following line to that file:
LABEL=/home/mj /home/mj ext3 defaults 1 2

Before this line can work, you'll need to set the label for this directory with the following command:
# e2label /dev/volumegroup/logvol /home/mj

Alternatively, you can just substitute the LVM device file, such as /dev/VolGroup00/LogVol03, for LABEL=/home/mj.
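
For instance, the equivalent /etc/fstab entry using that device path (an illustration, matching the example line above) would be:
/dev/VolGroup00/LogVol03 /home/mj ext3 defaults 1 2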

To keep logical volumes useful over time, you need to know how to add another LV. For example, if you've added more users and they need more room than you have on the /home directory, you may need to add LVs for other filesystems or resize the current /home directory LV.
IMPORTANT !!!
Linux can't read /boot files if they're installed on a Logical Volume. If you feel the need for special provisions for the /boot directory, try a RAID 1 array. However, there have been problems with that configuration as well.


Adding Another Logical Volume

Adding another LV is a straightforward process. For example, if you've just added a fourth SATA hard drive, it's known as device /dev/sdd. If you need a new LV for the /tmp directory, you'd follow these basic steps:

1. Add the new hard drive.

2. Configure the new hard drive with a tool such as fdisk or parted. Make sure new partitions correspond to the Linux LVM format. It's code 8e within fdisk, or flag lvm within parted. Alternatively, you can dedicate all space on the new hard drive as a physical volume (PV) with the pvcreate /dev/sdd command.

3. If you've created separate partitions, you can dedicate the space of a specific partition to a PV. If you don't already have an empty PV available, you may need to create more than one. For example, for the first partition /dev/sdd1, you can do this with the following command:
# pvcreate /dev/sdd1

4. Next, you'll want to create a volume group (VG) from one or more empty, properly configured partitions (or drives). One way to do this, assuming you have empty /dev/sdc3 and /dev/sdd1 partitions, is with the following command:
# vgcreate Volume01 /dev/sdc3 /dev/sdd1

5. Before proceeding, you should inspect the VG with the vgdisplay command.
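Based on the VG name used in step 4 (adjust it to your own naming), that would be:
# vgdisplay Volume01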

6. You should now be able to add another LV with the lvcreate command. For example, the following command takes 20 Physical Extents (PEs) for the new LV, LogVol01:
# lvcreate -l 20 Volume01 -n LogVol01

7. You've added a new LV. Naturally, you'll need to format and mount a directory on this LV before you can use it. For the example shown, you would use the following commands:

# mkfs.ext3 /dev/Volume01/LogVol01
# mount /dev/Volume01/LogVol01 /tmp



Removing a Logical Volume

Removing an existing LV is done with the lvremove command. If you've created an LV as described in the previous section and want to remove it, the basic steps are simple. However, this will work only from a rescue environment such as the linux rescue mode described in Chapter 16, or from a CD/DVD-based system such as Knoppix or the new Fedora Live DVD.

1. Save any data in directories that are mounted on the LV.

2. Unmount any directories associated with the LV. Based on the example in the previous section, you would use the following command:
# umount /dev/Volume01/LogVol01

3. Apply the lvremove command to the LV with a command such as:
# lvremove /dev/Volume01/LogVol01

4. You should now have the PEs from this LV free for use in other LVs.
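To confirm, you can check the Free PE count of the VG used in the earlier example (the grep just trims the output):
# vgdisplay Volume01 | grep Free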



Resizing Logical Volumes

If you have an existing LV, you can add a newly created PV to extend the space available on your system. All it takes is appropriate use of the vgextend and lvextend commands. For example, if you want to add PEs to the VG associated with the aforementioned /home directory, you could take the following basic steps:

1. Back up any data existing on the /home directory.

2. Unmount the /home directory from the current LV.

3. Extend the VG to include the new hard drive or partitions that you've created. For example, if you want to add /dev/sdd1 to the /home VG, you would run the following command:
# vgextend Volume00 /dev/sdd1

4. Make sure the new partitions are included in the VG with the following command:
# vgdisplay Volume00

5. Extend the current LV to include the space you need. For example, if you want to extend the LV to 2000MB, you'd run the following command:
# lvextend -L 2000M /dev/Volume00/LogVol00
The lvextend command can set LV sizes in KB, MB, GB, or even TB. For example, the following command has nearly the same effect (2G is 2048MB):

# lvextend -L 2G /dev/Volume00/LogVol00
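
Before reformatting, you can verify the new size of the LV (same device path as above):
# lvdisplay /dev/Volume00/LogVol00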

6. Reformat and remount the LV, using commands described earlier, so your filesystem can take full advantage of the new space:

# mkfs.ext3 /dev/Volume00/LogVol00
# mount /dev/Volume00/LogVol00 /home


7. Once remounted, you can restore the information you backed up from the /home directory.


Converting an LVM1 Volume Group to LVM2

The conversion process for LVM1 metadata is straightforward; the vgconvert command is designed to help. After making sure the associated LVs are backed up and unmounted, you can convert an LVM1 volume group, which might be named VolGroup00, with the following command:

# vgconvert -M2 VolGroup00

But this is a one-way process; despite what the man page might suggest, it's not possible in most cases with current tools to convert back to LVM1.
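
If you want to confirm the metadata format afterward, the pvs command prints a Fmt column (lvm1 or lvm2) for each physical volume:
# pvs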





RAID

In order to fully understand what RAID is, you should be familiar with this.
I'm not going to explain how the particular RAID levels work, because you can read that in the link above. Instead, I will explain how to configure RAID on RHEL5 and give you TOP TIPS on when each RAID level should be used.

You have to be aware that, in addition to RAID levels, there are also RAID types:
- software RAID, performed by the Linux kernel
- hardware RAID, which requires an additional PCI card (e.g. from a vendor such as Adaptec)
The rules are the same for software and hardware RAID, but the implementations differ.

A Redundant Array of Independent Disks (RAID) is a series of disks that can save your data even if a catastrophic failure occurs on one of the disks. While some versions of RAID make complete copies of your data, others use the so-called parity bit to allow your computer to rebuild the data on lost disks.

Linux RAID has come a long way. A substantial number of hardware RAID products support Linux, especially those from name-brand PC manufacturers. Dedicated RAID hardware can ensure the integrity of your data even if there is a catastrophic physical failure on one of the disks. Alternatively, you can configure software-based RAID on multiple partitions on the same physical disk. While this can protect you from a failure on a specific hard drive sector, it does not protect your data if the entire physical hard drive fails.

Depending on definitions, RAID has nine or ten different levels, which can accommodate different levels of data redundancy. Combinations of these levels are possible. Several levels of software RAID are supported directly by RHEL: levels 0, 1, 5, and 6. Hardware RAID uses a RAID controller connected to an array of several hard disks. A driver must be installed to be able to use the controller. Most RAID is hardware based; when properly configured, the failure of one drive for almost all RAID levels (except RAID 0) does not destroy the data in the array.

Linux, meanwhile, offers a software solution to RAID. Once RAID is configured on a sufficient number of partitions, Linux can use those partitions just as it would any other block device. However, to ensure redundancy, it's up to you in real life to make sure that each partition in a Linux software RAID array is configured on a different physical hard disk.
On the Job

The RAID md device is a meta device. In other words, it is a composite of two or more other devices such as /dev/hda1 and /dev/hdb1 that might be components of a RAID array.

The following are the basic RAID levels supported on RHEL:

RAID 0

This level of RAID makes it faster to read and write to the hard drives. However, RAID 0 provides no data redundancy. It requires at least two hard disks.

Reads and writes to the hard disks are done in parallel; in other words, to two or more hard disks simultaneously. All hard drives in a RAID 0 array are filled equally. But since RAID 0 does not provide data redundancy, a failure of any one of the drives will result in total data loss. RAID 0 is also known as striping without parity.

RAID 1

This level of RAID mirrors information between two disks (or two sets of disks; see RAID 10). In other words, the same set of information is written to each disk. If one disk is damaged or removed, all of the data is still available on the other hard disk. The disadvantage of RAID 1 is that data has to be written twice, which can reduce performance. You can come close to maintaining the same level of performance if you also use separate hard disk controllers, which prevents the hard disk controller from becoming a bottleneck. RAID 1 is relatively expensive: to support it, you need an additional hard disk for every hard disk's worth of data. RAID 1 is also known as disk mirroring.

RAID 4

While this level of RAID is not directly supported by the current Linux distributions associated with Red Hat, it is still supported by the current Linux kernel. RAID 4 requires three or more disks. As with RAID 0, data reads and writes are done in parallel to all disks. One of the disks maintains the parity information, which can be used to reconstruct the data. Reliability is improved, but since parity information is updated with every write operation, the parity disk can be a bottleneck on the system. RAID 4 is known as disk striping with parity.

RAID 5

Like RAID 4, RAID 5 requires three or more disks. Unlike RAID 4, RAID 5 distributes, or stripes, parity information evenly across all the disks. If one disk fails, the data can be reconstructed from the parity data on the remaining disks; the array does not stop, and all data is still available even after a single disk failure. RAID 5 is the preferred choice in most cases: the performance is good, data integrity is ensured, and only one disk's worth of space is lost to parity data. RAID 5 is also known as disk striping with parity.

RAID 6

RAID 6 goes one better than RAID 5: it requires four or more disks, maintains two sets of parity information, and can survive the failure of two member disks in the array.
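
As a sketch, creating a RAID 6 array with mdadm looks much like the RAID 1 example later in this post, just with a higher level and more members (the partition names below are hypothetical):
# mdadm --create --verbose /dev/md2 --level=6 \
--raid-devices=4 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4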

RAID 10

I include RAID 10 solely to illustrate one way you can combine RAID levels. RAID 10 is a combination of RAID 1 and RAID 0 and requires a minimum of four disks. First, pairs of disks are organized into RAID 1 mirrors, each with its own device file, such as /dev/md0 and /dev/md1. These mirrored devices are then striped together. This combines the speed advantages of RAID 0 with the data redundancy associated with mirroring. There are variations: for example, RAID 01 mirrors two RAID 0 stripes, and RAID 50 provides a similar combination of RAID 5 and RAID 0.


Reviewing an Existing RAID Array

If you created a RAID array during the installation process, you'll see it in the /proc/mdstat file; run cat /proc/mdstat to print it. On my system it shows a RAID 6 array associated with device file md0, /dev/md0. Yes, I know, building the array from RAID partitions on the same hard drive violates good practice, but my personal resources (and, I suspect, those of many exam sites, despite the price) have limits.
A RAID 6 array requires at least four partitions and can handle the failure of two of them. You can find out more about the array with the mdadm --detail /dev/md0 command; in that output, if there were a spare device, the number of Total Devices would exceed the number of Raid Devices.
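
Purely as an illustration (the device names and block count below are invented), a /proc/mdstat entry for such an array looks roughly like this:

Personalities : [raid6]
md0 : active raid6 sda13[3] sda12[2] sda11[1] sda10[0]
      195072 blocks level 6, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>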


Modifying an Existing RAID Array

Modifying an existing RAID array is a straightforward process. You can simulate a failure with the following command. (I suggest that you add --verbose to help you get as much information as possible.)

# mdadm --verbose /dev/md0 -f /dev/sda13 -r /dev/sda13
mdadm: set /dev/sda13 faulty in /dev/md0
mdadm: hot removed /dev/sda13

You can reverse the process; the same command can be used to add the partition of your choice to the array:

# mdadm --verbose /dev/md0 -a /dev/sda13
mdadm: re-added /dev/sda13

It makes sense to review the results after each command with cat /proc/mdstat or mdadm --detail /dev/md0.


Creating a New RAID Array

Creating a new RAID array is a straightforward process. The first step is to create RAID partitions (type fd, Linux raid autodetect), which you can do with either fdisk or parted, much as shown for LVM partitions earlier. In this section, I'll show you how to create a simple RAID 1 array of two partitions. I assume that there are two partitions already available: /dev/sdb1 and /dev/sdb2. Now create a simple array:

# mdadm --create --verbose /dev/md1 --level=1 \
--raid-devices=2 /dev/sdb1 /dev/sdb2
mdadm: size set to 97536k
mdadm: array /dev/md1 started.

Now it's time to format the new device, presumably to the default ext3 filesystem:

# mkfs.ext3 /dev/md1

You can now mount the filesystem of your choice on this array. Just remember that if you want to make this permanent, you'll have to add it to your /etc/fstab. For example, to make it work with /tmp, add the following directive to that file:
/dev/md1 /tmp ext3 defaults 0 0


Exercise : Mirroring the /home Partition with Software RAID

If you're making changes on a production computer, back up the data from the /home directory first. Otherwise, the user data in /home may be lost.

1. Mark the two partition IDs as type fd using the Linux fdisk utility. There are equivalent steps available in parted.

# fdisk /dev/hda
Command (m for help) : t
Partition number (1-5)
5

Partition ID (L to list options): fd
Command (m for help) : w
# fdisk /dev/hdb
Command (m for help) : t
Partition number (1-5)
5
Partition ID (L to list options): fd
Command (m for help) : w

2. Make sure to write the changes. The parted utility does this automatically; if you use fdisk, run the partprobe command or reboot.
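
For the two drives used in this exercise, that would be (a sketch):
# partprobe /dev/hda
# partprobe /dev/hdb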

3. Create a RAID array with the appropriate mdadm command. For /dev/hda5 and /dev/hdb5, you can create it with the following:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
/dev/hda5 /dev/hdb5

4. Confirm the changes; run the following commands:

# cat /proc/mdstat
# mdadm --detail /dev/md0

5. Now format the newly created RAID device:

# mkfs.ext3 /dev/md0

6. Now mount it on a test directory; I often create a test/ subdirectory in my home directory for this purpose:

# mount /dev/md0 /root/test

7. Next, copy all files from the current /home directory. Here's a simple method that copies all files and subdirectories of /home:

# cp -ar /home/. /root/test/

8. Unmount the test subdirectory:

# umount /dev/md0

9. Now you should be able to implement this change in /etc/fstab. Remember that during the exam, you may not get full credit for your work unless your Linux system mounts the directory on the RAID device. Based on the parameters described in this exercise, the directive would be:
/dev/md0 /home ext3 defaults 0 0

10. Now reboot and see what happens. If the /home directory partition contains the files of your users, you've succeeded. Otherwise, remove the directive added in step 9 from /etc/fstab and reboot again.
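
After the reboot, a quick sanity check (based on the device and mount point used in this exercise):
# df -h /home
# cat /proc/mdstat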
