6.1.3.5. ...using a raw, unpartitioned disk as a PV?
Although you can use a raw disk as a PV, it's not recommended. The graphical administration tools don't support it, and the amount of space lost to a partition table is minimal (about 1 KB).
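If you want to dedicate an entire disk to LVM, the usual approach is to create a single partition spanning the drive and use that partition as the PV. As a minimal sketch, assuming the new disk appears as /dev/sdb (a hypothetical device name), the steps might look like this:
# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 1MiB 100%
# parted -s /dev/sdb set 1 lvm on
# pvcreate /dev/sdb1
Setting the lvm flag simply marks the partition type so that other tools recognize the partition as an LVM PV.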
6.1.3.6. ...a failing disk drive?
If you suspect that a disk drive is failing, and you want to save the data that is on that drive, you can add a replacement PV to your volume group, migrate the data off the failing (or slow or undersized) disk onto the new PV, and then remove the original disk from the volume group.
To migrate data off a specific PV, use the pvmove command:
# pvmove /dev/hda3
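For example, assuming that the volume group is named main and that the replacement drive has been partitioned as /dev/sdb1 (both hypothetical names), the whole sequence might look like this:
# pvcreate /dev/sdb1
# vgextend main /dev/sdb1
# pvmove /dev/hda3
# vgreduce main /dev/hda3
Once vgreduce has dropped /dev/hda3 from the volume group, the failing drive can be removed from the system.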
6.1.3.7. ...creating a flexible disk layout?
LVM is all about flexibility, but for absolute maximum flexibility, divide your disk into multiple partitions and then add each partition to your volume group as a separate PV.
For example, if you have a 100 GB disk drive, you can divide the disk into five 20 GB partitions and use those as physical volumes in one volume group.
The advantage of this approach is that you can free up one or two of those PVs for use with another operating system at a later date. You can also easily switch to a RAID array by adding one (or more) disks, as long as 20 percent of your VG is free, with the following steps (a command-level sketch follows this list):
1. Migrate data off one of the PVs.
2. Remove that PV from the VG.
3. Remake that PV as a RAID device.
4. Add the new RAID PV back into the VG.
5. Repeat the process for the remaining PVs.
You can use this same process to change RAID levels; for example, you can switch from RAID 1 (mirroring) to RAID 5 (distributed parity) when going from two disks to three or more.
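As a rough sketch of steps 1 through 4, assume a volume group named main, a freed-up PV on /dev/sda5, and a matching partition /dev/sdb5 on the newly added disk (all hypothetical names); the mdadm command used here is covered in the RAID section later in this chapter:
# pvmove /dev/sda5
# vgreduce main /dev/sda5
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
# pvcreate /dev/md0
# vgextend main /dev/md0
Repeating this for each remaining PV (using /dev/md1, /dev/md2, and so on) converts the whole volume group to RAID storage without reinstalling.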
6.1.4. Where Can I Learn More?
The manpages for lvm, vgcreate, vgremove, vgextend, vgreduce, vgdisplay, vgs, vgscan, vgchange, pvcreate, pvremove, pvmove, pvdisplay, pvs, lvcreate, lvremove, lvextend, lvreduce, lvresize, lvdisplay, and lvs
The LVM2 Resource page: http://sourceware.org/lvm2/
A Red Hat article on LVM: http://www.redhat.com/magazine/009jul05/departments/red_hat_speaks/
Redundant Arrays of Inexpensive Disks (RAID) is a technology for boosting storage performance and reducing the risk of data loss due to disk error. It works by storing data on multiple disk drives and is well supported by Fedora. It's a good idea to configure RAID on any system used for serious work.
RAID can be managed by the kernel, by the kernel working with the motherboard BIOS, or by a separate computer on an add-in card. RAID managed by the BIOS is called dmraid; while supported by Fedora Core, it does not provide any significant benefits over RAID managed solely by the kernel on most systems, since all the work is still performed by the main CPU.
Using dmraid can thwart data-recovery efforts if the motherboard fails and another motherboard of the same model (or a model with a compatible BIOS dmraid implementation) is not available.
Add-in cards that contain their own CPU and battery-backed RAM can reduce the load of RAID processing on the main CPU. However, on a modern system, RAID processing takes at most 3 percent of the CPU time, so the expense of a separate, dedicated RAID processor is wasted on all but the highest-end servers. So-called RAID cards without a CPU simply provide additional disk controllers, which are useful because each disk in a RAID array should ideally have its own disk-controller channel.
There are six "levels" of RAID that are supported by the kernel in Fedora Core, as outlined in Table 6-3.
Table 6-3. RAID levels supported by Fedora Core
RAID level | Description | Protection against drive failure | Write performance | Read performance | Number of drives | Capacity
Linear | Linear/append. Devices are concatenated together to make one large storage area (deprecated; use LVM instead). | No | Normal | Normal | 2 | Sum of all drives
0 | Striped. The first block of data is written to the first block on the first drive, the second block of data is written to the first block on the second drive, and so forth. | No | Normal to normal multiplied by the number of drives, depending on application | Multiplied by the number of drives | 2 or more | Sum of all drives
1 | Mirroring. All data is written to two (or more) drives. | Yes. As long as one drive is working, your data is safe. | Normal | Multiplied by the number of drives | 2 or more | Equal to one drive
4 | Dedicated parity. Data is striped across all drives, except that the last drive gets parity data for each block in that "stripe." | Yes. One drive can fail (any more than that will cause data loss). | Reduced: two reads and one write for each write operation. The parity drive is a bottleneck. | Multiplied by the number of drives minus one | 3 or more | Sum of all drives except one
5 | Distributed parity. Like level 4, except that the drive used for parity is rotated from stripe to stripe, eliminating the bottleneck on the parity drive. | Yes. One drive can fail. | Like level 4, but with no parity bottleneck | Multiplied by the number of drives minus one | 3 or more | Sum of all drives except one
6 | Distributed error-correcting code. Like level 5, but with redundant information on two drives. | Yes. Two drives can fail. | Same as level 5 | Multiplied by the number of drives minus two | 4 or more | Sum of all drives except two
For many desktop configurations, RAID level 1 (RAID 1) is appropriate because it can be set up with only two drives. For servers, RAID 5 or 6 is commonly used.
Although Table 6-3 specifies the number of drives required by each RAID level, the Linux RAID system is usually used with disk partitions, so a partition from each of several disks can form one RAID array, and another set of partitions from those same drives can form another RAID array.
RAID arrays should ideally be set up during installation, but it is possible to create them after the fact. The mdadm command is used for all RAID administration operations; no graphical RAID administration tools are included in Fedora.
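For example, a two-partition RAID 1 array could be created with a single mdadm command; the device names here are hypothetical:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
The new array appears as /dev/md0 and can then be formatted, or used as an LVM PV, like any other block device; its status shows up in /proc/mdstat, as described next.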
6.2.1.1. Displaying Information About the Current RAID Configuration
The fastest way to see the current RAID configuration and status is to display the contents of /proc/mdstat:
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdc1[1] hda1[0]
102144 blocks [2/2] [UU]
md1 : active raid1 hdc2[1] hda3[0]
1048576 blocks [2/2] [UU]
md2 : active raid1 hdc3[1]
77023232 blocks [2/1] [_U]
This display indicates that only the raid1 (mirroring) personality is active, managing three device nodes:
md0