LVMify the disk
LVM? What's this all about, anyway?
LVM is Linux's Logical Volume Manager. Wikipedia describes the Linux LVM here. Basically, it allows you to take a pool of one or more hard drives and create one or more virtual hard drives out of pieces of the real drives. These can then be resized at run-time, allowing the admin to grow (or sometimes shrink) filesystems while the machine is up and running. This page covers moving a hard-partitioned Seagate DockStar running Debian to an LVM-based setup.
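For orientation, here is the shape of the whole exercise in three commands (just a sketch of the idea; the real invocations, with all the options actually used, appear in the sections below):

pvcreate /dev/sda3                        # teach LVM about a chunk of real disk (a physical volume)
vgcreate vg00 /dev/sda3                   # pool physical volumes into a volume group
lvcreate --size 512M --name swap vg00     # carve a resizable logical volume out of the pool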
Before LVM-ification
Got a backup? You could bork your machine pretty hard if something bad happens. You've been warned.
Here's how the DockStar's USB flash drive looks now with regard to filesystem free space and partitioning:
root@canopus:~# fdisk -l -u /dev/sda

Disk /dev/sda: 4000 MB, 4000317440 bytes
64 heads, 32 sectors/track, 3815 cylinders, total 7813120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9a198ead

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1050623      524288   83  Linux
/dev/sda2         1050624     2099199      524288   83  Linux
/dev/sda3         2099200     7813119     2856960   8e  Linux LVM
root@canopus:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
rootfs                  516040    311876    177952  64% /
none                     60492        36     60456   1% /dev
/dev/sda1               516040    311876    177952  64% /
tmpfs                    62864         0     62864   0% /lib/init/rw
tmpfs                    62864         0     62864   0% /dev/shm
tmpfs                    62864         0     62864   0% /tmp
root@canopus:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>                  <dump> <pass>
/dev/root       /               ext2    noatime,errors=remount-ro  0      1
/dev/sda2       none            swap    sw                         0      0
tmpfs           /tmp            tmpfs   defaults                   0      0
root@canopus:~#
The current situation
As you can probably tell, the only filesystem is on /dev/sda1 and /dev/sda2 is used for swap space. /dev/sda3 is not used for anything at this point.
The desired end state
I'd like to end up with /dev/sda1 used purely for system configuration and needed-for-boot files. Swap space, /var, /home, /tmp, and /usr will all live in LVM managed block devices. /dev/sda2 will be converted from swap space to a working backup copy of the current root filesystem and used for repairing the real system should the sysadmin suffer a severe bout of brain-damage and break the thing really bad.
Based on lots of use of du, I believe 512Mbytes for swap, 64Mbytes for /home, 128Mbytes for /tmp, 192Mbytes for /usr, and 192Mbytes for /var will be a good target configuration.
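If you want to reproduce that sizing exercise, something along these lines gives per-directory totals without crossing into other filesystems (a sketch; adjust the directory list to taste):

# summarize how much space each candidate directory uses today
du -sh --one-file-system /home /tmp /usr /var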
Creating an LVM physical volume
LVM needs to place some data structures on any physical device it uses. This is done with the pvcreate program. /dev/sda3 is where we'll put the LVM managed logical volumes, so pvcreate is invoked like so (in test mode first, to be sure our command line is sane):
root@canopus:~# pvcreate --test --verbose --metadatatype 2 --pvmetadatacopies 2 --zero y /dev/sda3
  Test mode: Metadata will NOT be updated.
    Set up physical volume for "/dev/sda3" with 5713920 available sectors
    Zeroing start of device /dev/sda3
  Physical volume "/dev/sda3" successfully created
  Test mode: Wiping internal cache
  Wiping internal VG cache
root@canopus:~# pvcreate --verbose --metadatatype 2 --pvmetadatacopies 2 --zero y /dev/sda3
    Set up physical volume for "/dev/sda3" with 5713920 available sectors
    Zeroing start of device /dev/sda3
  Physical volume "/dev/sda3" successfully created
root@canopus:~#
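If you want to double-check the result before moving on, the LVM reporting tools will show the new physical volume (commands only here; no output was captured for this step):

pvdisplay /dev/sda3   # detailed view of the new physical volume
pvs                   # one-line-per-PV summary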
Creating a volume group
Now that there is a physical volume, we can create a volume group. We're specifying that this volume group will not be clustered, that it uses the LVM2 metadata format (the tools support the older LVM1 format, too), a physical extent size of 64 mebibytes (this is the size of the pieces that go into any given logical volume; since the backing store for this volume group is about 3 gibibytes, a 64 MiB extent size gives us 43 pieces to play with), a volume group name of vg00, and that /dev/sda3 is our physical backing device. Again, test mode is used to check the sanity of the command line before really creating the volume group.
root@canopus:~# vgcreate --test --verbose --clustered n --metadatatype 2 --physicalextentsize 64M vg00 /dev/sda3
  Test mode: Metadata will NOT be updated.
    Wiping cache of LVM-capable devices
    Wiping cache of LVM-capable devices
    Adding physical volume '/dev/sda3' to volume group 'vg00'
  Test mode: Skipping archiving of volume group.
  Test mode: Skipping volume group backup.
  Volume group "vg00" successfully created
  Test mode: Wiping internal cache
  Wiping internal VG cache
root@canopus:~# vgcreate --verbose --clustered n --metadatatype 2 --physicalextentsize 64M vg00 /dev/sda3
    Wiping cache of LVM-capable devices
    Wiping cache of LVM-capable devices
    Adding physical volume '/dev/sda3' to volume group 'vg00'
    Creating directory "/etc/lvm/archive"
    Archiving volume group "vg00" metadata (seqno 0).
    Creating directory "/etc/lvm/backup"
    Creating volume group backup "/etc/lvm/backup/vg00" (seqno 1).
  Volume group "vg00" successfully created
root@canopus:~#
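Similarly, vgdisplay (or the terser vgs) will confirm the extent size and show how many of those 64 MiB extents are free to hand out:

vgdisplay vg00   # look for the "PE Size" and "Free  PE / Size" fields
vgs vg00         # compact one-line summary of the volume group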
Making /dev/sda2 available
/dev/sda2 is currently used as swap space. It's the same size as /dev/sda1. I want to deactivate the swap space here and make a copy of the root filesystem. First, though, a different swap space should be made available.
Making our first logical volume: /dev/vg00/swap
Creating a logical volume is done with the lvcreate command. You want to decide a few things before creating one: how big should it be? What name should it have? Which volume group does it belong to?
Here's what I did to make /dev/vg00/swap:
root@canopus:~# lvcreate --test --verbose --size 512M --name swap --zero y vg00
  Test mode: Metadata will NOT be updated.
    Setting logging type to disk
    Finding volume group "vg00"
  Test mode: Skipping archiving of volume group.
    Creating logical volume swap
  Test mode: Skipping volume group backup.
    Found volume group "vg00"
  Aborting. Failed to activate new LV to wipe the start of it.
    Found volume group "vg00"
  Unable to deactivate failed new LV. Manual intervention required.
  Test mode: Wiping internal cache
  Wiping internal VG cache
root@canopus:~# lvcreate --verbose --size 512M --name swap --zero y vg00
    Setting logging type to disk
    Finding volume group "vg00"
    Archiving volume group "vg00" metadata (seqno 5).
    Creating logical volume swap
    Creating volume group backup "/etc/lvm/backup/vg00" (seqno 6).
    Found volume group "vg00"
    Creating vg00-swap
    Loading vg00-swap table (254:0)
    Resuming vg00-swap (254:0)
    Clearing start of logical volume "swap"
    Creating volume group backup "/etc/lvm/backup/vg00" (seqno 6).
  Logical volume "swap" created
root@canopus:~#
Reconfiguring swap location
After this logical volume has been created, we'll put the swap space signature on it using mkswap, activate it with swapon, deactivate the old one (swapoff), and update /etc/fstab to save the changes across a reboot:
root@canopus:~# mkswap /dev/vg00/swap
mkswap: /dev/vg00/swap: warning: don't erase bootbits sectors on whole disk. Use -f to force.
Setting up swapspace version 1, size = 524284 KiB
no label, UUID=76b70850-c3ab-4512-b55b-0545f2690130
root@canopus:~# swapon /dev/vg00/swap
root@canopus:~# swapoff /dev/sda2
root@canopus:~# sed -e'/^\/dev\/sda2/s/^/# /' /etc/fstab > /tmp/fstab
root@canopus:~# echo '/dev/vg00/swap none swap sw 0 0' >> /tmp/fstab
root@canopus:~# install -o root -g root -m 644 /tmp/fstab /etc/fstab
root@canopus:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>                  <dump> <pass>
/dev/root       /               ext2    noatime,errors=remount-ro  0      1
# /dev/sda2       none            swap    sw                         0      0
tmpfs           /tmp            tmpfs   defaults                   0      0
/dev/vg00/swap none swap sw 0 0
root@canopus:~#
Just to be sure /dev/sda2 is not being used any more as swap space, have a look at /proc/swaps:
root@canopus:~# cat /proc/swaps
Filename                                Type            Size    Used    Priority
/dev/dm-0                               partition       524280  0       -1
root@canopus:~#
/dev/sda2 is no longer being used for swap space and we can now put a backup root filesystem there.
Make a backup root filesystem
Now that we've stopped using /dev/sda2 for swap, let's put our backup root FS there. We'll format it with mkfs.ext2, mount it someplace convenient, and copy our files over.
create an ext2 filesystem on /dev/sda2
root@canopus:~# mkfs.ext2 -T ext2 /dev/sda2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768 inodes, 131072 blocks
6553 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=134217728
4 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
root@canopus:~#
populate the backup
I like to use /altroot for this purpose. My normal idiom for creating the directory underlying a mount point is to give it 000 (or d---------) permissions. That makes it obvious to me that something should be mounted there:
root@canopus:~# mkdir -m 0 /altroot
root@canopus:~# mount /dev/sda2 /altroot
root@canopus:~# cd /
root@canopus:/# find . -xdev -depth -print0 | cpio -pdmv0a /altroot
[...]
root@canopus:~# umount /altroot
With this done, you can use /dev/sda2 as your root filesystem by changing the "root=/dev/sda1" kernel command line parameter (set in the bootloader) to "root=/dev/sda2".
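How you change that depends on the bootloader setup; on the DockStar the kernel command line normally lives in the U-Boot environment. A hedged sketch, assuming the u-boot-tools package is installed and /etc/fw_env.config is already set up for this device (the variable name carrying root= varies between installs, which is why only the lookup is shown):

# locate the U-Boot variable that carries the kernel command line's root= setting
fw_printenv | grep 'root='
# then rewrite that same variable with fw_setenv, substituting /dev/sda2 for /dev/sda1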
Move stuff to LVM
Now for the good stuff. Our swap space is on an LVM logical volume. Next we'll move /home, /tmp, /usr, and /var to LVM.
create more logical volumes
We need a place to put our filesystems. We've already decided on sizes, so let's use lvcreate to make them:
root@canopus:/# lvcreate --test --verbose --size 64M --name home --zero y vg00
  Test mode: Metadata will NOT be updated.
    Setting logging type to disk
    Finding volume group "vg00"
  Test mode: Skipping archiving of volume group.
    Creating logical volume home
  Test mode: Skipping volume group backup.
    Found volume group "vg00"
  Aborting. Failed to activate new LV to wipe the start of it.
    Found volume group "vg00"
  Unable to deactivate failed new LV. Manual intervention required.
  Test mode: Wiping internal cache
  Wiping internal VG cache
root@canopus:/# lvcreate --verbose --size 64M --name home --zero y vg00
    Setting logging type to disk
    Finding volume group "vg00"
    Archiving volume group "vg00" metadata (seqno 8).
    Creating logical volume home
    Creating volume group backup "/etc/lvm/backup/vg00" (seqno 9).
    Found volume group "vg00"
    Creating vg00-home
    Loading vg00-home table (254:1)
    Resuming vg00-home (254:1)
    Clearing start of logical volume "home"
    Creating volume group backup "/etc/lvm/backup/vg00" (seqno 9).
  Logical volume "home" created
root@canopus:/#
Repeat this for /tmp (128Mbytes), /usr (192Mbytes), and /var (192Mbytes).
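Rather than typing the other three by hand, the remaining volumes can be created in one short loop (a sketch following the same options used above; run it with --test first if you want the same sanity check):

# create the tmp, usr, and var logical volumes with the sizes decided earlier
for spec in tmp:128M usr:192M var:192M; do
    lvcreate --verbose --size ${spec#*:} --name ${spec%%:*} --zero y vg00
done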
create filesystems to hold our files
We've got logical volumes now. Let's put filesystems on them so we can keep our files there.
root@canopus:/# for lvol in /dev/vg00/{home,tmp,usr,var}; do mkfs.ext2 -t ext3 -T default ${lvol}; done
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
4096 inodes, 16384 blocks
819 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=16777216
1 block group
32768 blocks per group, 32768 fragments per group
4096 inodes per group

Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
8192 inodes, 32768 blocks
1638 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=33554432
1 block group
32768 blocks per group, 32768 fragments per group
8192 inodes per group

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
12288 inodes, 49152 blocks
2457 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=50331648
2 block groups
32768 blocks per group, 32768 fragments per group
6144 inodes per group
Superblock backups stored on blocks:
        32768

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
12288 inodes, 49152 blocks
2457 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=50331648
2 block groups
32768 blocks per group, 32768 fragments per group
6144 inodes per group
Superblock backups stored on blocks:
        32768

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
root@canopus:/#
Copy existing files to new filesystems
We've created filesystems to hold our stuff. Now we need to put our stuff in there. Here's a little shell procedure to do just that. Its output is quite long, so I'll just skip showing it.
root@canopus:/# for fs in home tmp usr var; do mount -v /dev/vg00/${fs} /mnt; cd /${fs}; find . -xdev -depth -print0 | cpio -pdmv0a /mnt; cd /; umount -v /mnt; done
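An equivalent alternative, if rsync is installed and you're more comfortable with it (a sketch; -x keeps rsync from crossing filesystem boundaries, much like -xdev above, and -aHAX preserves permissions, hard links, ACLs, and extended attributes):

for fs in home tmp usr var; do
    mount -v /dev/vg00/${fs} /mnt     # expose the new, empty filesystem
    rsync -aHAXx /${fs}/ /mnt/        # copy the current contents into it
    umount -v /mnt
done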
update /etc/fstab
Add the following lines to /etc/fstab. This ensures our new filesystems will be mounted and used the next time this machine reboots.
/dev/vg00/home /home ext3 defaults 0 0
/dev/vg00/tmp  /tmp  ext3 defaults 0 0
/dev/vg00/usr  /usr  ext3 defaults 0 0
/dev/vg00/var  /var  ext3 defaults 0 0
Also, comment out the line that lists /tmp on a tmpfs (RAM-backed) filesystem.
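If you prefer to script that edit the same way the swap change was handled earlier, a sketch along those lines (check the result with cat before rebooting):

# comment out the tmpfs /tmp line and append the four new LVM filesystems
sed -e'/^tmpfs[[:space:]]*\/tmp/s/^/# /' /etc/fstab > /tmp/fstab
for fs in home tmp usr var; do
    echo "/dev/vg00/${fs} /${fs} ext3 defaults 0 0" >> /tmp/fstab
done
install -o root -g root -m 644 /tmp/fstab /etc/fstab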
see if it worked
Now it's time to reboot, log in again, and see if it's all working as expected. After rebooting, /home, /tmp, /usr, and /var should be mounted from LVM logical volumes. You'll see them listed as /dev/mapper/vg00-{home,tmp,usr,var} in the output of mount or df. If something isn't working correctly, removing the LVM entries from /etc/fstab and rebooting again will leave you with just the root filesystem, still intact, on /dev/sda1.
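A quick way to check after logging back in (these only read state, they change nothing):

df -h /home /tmp /usr /var    # each should show a /dev/mapper/vg00-* device
swapon -s                     # swap should be on the LVM volume (shown as /dev/dm-0)
grep vg00 /proc/mounts        # raw view of what the kernel actually mounted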
clean up /
Linux has the wonderful capability of mounting a filesystem in more than one place. Now that /home, /tmp, /usr, and /var are on LVM storage, the old copies of the files under those directories are still sitting on /dev/sda1. They aren't visible, but they are taking up space. Let's free up that space now.
making another copy of / visible
To clean up, we need to get at the directories that have been used as mount points. Let's just mount /dev/sda1 again in another place and see what happens:
root@canopus:~# mount /dev/sda1 /mnt
root@canopus:~# df /usr /mnt/usr
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-usr    193504    172504     11172  94% /usr
/dev/sda1               516040    312288    177540  64% /mnt
root@canopus:~#
From this, it's clear that /mnt/usr is on /dev/sda1, while /usr is on the LVM storage. Let's proceed to clean up.
a tool that needs no introduction, rm
/dev/sda1 is mounted in two places: / and /mnt. Since /usr (the LVM filesystem) is a completely separate filesystem from /mnt/usr (the old directory on /dev/sda1), we can safely remove everything under /mnt/usr. The df runs before and after show where the space is being freed.
root@canopus:~# df / /mnt /usr
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               516040    312288    177540  64% /
/dev/sda1               516040    312288    177540  64% /mnt
/dev/mapper/vg00-usr    193504    172504     11172  94% /usr
root@canopus:~# cd /mnt/usr
root@canopus:/mnt/usr# rm -rf *
root@canopus:/mnt/usr# df / /mnt /usr
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               516040    156260    333568  32% /
/dev/sda1               516040    156260    333568  32% /mnt
/dev/mapper/vg00-usr    193504    172504     11172  94% /usr
root@canopus:/mnt/usr#
Repeat for /mnt/home, /mnt/tmp, and /mnt/var and we'll wind up like so:
root@canopus:/mnt/usr# rm -rf /mnt/home/* /mnt/tmp/* /mnt/var/*
root@canopus:/mnt/usr# df
Filesystem            1K-blocks      Used Available Use% Mounted on
rootfs                   516040     88168    401660  18% /
none                      59980        56     59924   1% /dev
/dev/sda1                516040     88168    401660  18% /
tmpfs                     62864         0     62864   0% /lib/init/rw
tmpfs                     62864         0     62864   0% /dev/shm
/dev/mapper/vg00-home     64496      4136     57084   7% /home
/dev/mapper/vg00-tmp     129008     16460    105996  14% /tmp
/dev/mapper/vg00-usr     193504    172504     11172  94% /usr
/dev/mapper/vg00-var     193504    106760     76916  59% /var
/dev/sda1                516040     88168    401660  18% /mnt
root@canopus:/mnt/usr#
Lots of room left on / now, but /usr is getting pretty cramped.
a trivial cleanup
As mentioned earlier, I like leaving the directories that serve as mount points with 000 permissions. We can do that to the /home, /tmp, /usr, and /var (and perhaps a couple of others) directories on /dev/sda1 while it is mounted a second time on /mnt:
root@canopus:/mnt# cd /mnt
root@canopus:/mnt# ls -l
total 96
d---------  2 root root  4096 Jan 3 21:39 altroot
drwxr-xr-x  2 root root  4096 Jan 3 17:11 bin
drwxr-xr-x  2 root root  4096 Jan 3 17:13 boot
drwxr-xr-x  5 root root  4096 Dec 31 17:57 dev
drwxr-xr-x 52 root root  4096 Jan 4 17:22 etc
drwxr-xr-x  2 root root  4096 Nov 13 12:51 home
lrwxrwxrwx  1 root root    33 Dec 31 17:57 initrd.img -> boot/initrd.img-2.6.32-5-kirkwood
drwxr-xr-x  9 root root  8192 Jan 3 17:12 lib
drwx------  2 root root 16384 Dec 30 23:28 lost+found
drwxr-xr-x  2 root root  4096 Dec 30 23:51 media
drwxr-xr-x  2 root root  4096 Nov 13 12:51 mnt
drwxr-xr-x  2 root root  4096 Dec 30 23:51 opt
drwxr-xr-x  2 root root  4096 Nov 13 12:51 proc
drwx------  3 root root  4096 Jan 3 17:02 root
drwxr-xr-x  2 root root  4096 Jan 3 17:12 sbin
drwxr-xr-x  2 root root  4096 Jul 21 08:36 selinux
drwxr-xr-x  2 root root  4096 Dec 30 23:51 srv
drwxr-xr-x  2 root root  4096 Nov 14 22:42 sys
drwxrwxrwt  2 root root  4096 Dec 31 17:58 tmp
drwxr-xr-x  2 root root  4096 Jan 4 17:39 usr
drwxr-xr-x  2 root root  4096 Jan 4 17:42 var
lrwxrwxrwx  1 root root    30 Dec 31 17:57 vmlinuz -> boot/vmlinuz-2.6.32-5-kirkwood
root@canopus:/mnt# chmod 0 /mnt/home /mnt/proc /mnt/selinux /mnt/sys /mnt/tmp /mnt/usr /mnt/var
root@canopus:/mnt# ls -l
total 96
d---------  2 root root  4096 Jan 3 21:39 altroot
drwxr-xr-x  2 root root  4096 Jan 3 17:11 bin
drwxr-xr-x  2 root root  4096 Jan 3 17:13 boot
drwxr-xr-x  5 root root  4096 Dec 31 17:57 dev
drwxr-xr-x 52 root root  4096 Jan 4 17:22 etc
d---------  2 root root  4096 Nov 13 12:51 home
lrwxrwxrwx  1 root root    33 Dec 31 17:57 initrd.img -> boot/initrd.img-2.6.32-5-kirkwood
drwxr-xr-x  9 root root  8192 Jan 3 17:12 lib
drwx------  2 root root 16384 Dec 30 23:28 lost+found
drwxr-xr-x  2 root root  4096 Dec 30 23:51 media
drwxr-xr-x  2 root root  4096 Nov 13 12:51 mnt
drwxr-xr-x  2 root root  4096 Dec 30 23:51 opt
d---------  2 root root  4096 Nov 13 12:51 proc
drwx------  3 root root  4096 Jan 3 17:02 root
drwxr-xr-x  2 root root  4096 Jan 3 17:12 sbin
d---------  2 root root  4096 Jul 21 08:36 selinux
drwxr-xr-x  2 root root  4096 Dec 30 23:51 srv
d---------  2 root root  4096 Nov 14 22:42 sys
d---------  2 root root  4096 Dec 31 17:58 tmp
d---------  2 root root  4096 Jan 4 17:39 usr
d---------  2 root root  4096 Jan 4 17:42 var
lrwxrwxrwx  1 root root    30 Dec 31 17:57 vmlinuz -> boot/vmlinuz-2.6.32-5-kirkwood
root@canopus:/mnt#
Those directories in / still look fine:
root@canopus:/mnt# ls -l /
total 84
d---------  2 root root  4096 Jan 3 21:39 altroot
drwxr-xr-x  2 root root  4096 Jan 3 17:11 bin
drwxr-xr-x  2 root root  4096 Jan 3 17:13 boot
drwxr-xr-x 15 root root  2700 Jan 1 1970 dev
drwxr-xr-x 52 root root  4096 Jan 4 17:22 etc
drwxr-xr-x  3 root root  4096 Jan 3 22:09 home
lrwxrwxrwx  1 root root    33 Dec 31 17:57 initrd.img -> boot/initrd.img-2.6.32-5-kirkwood
drwxr-xr-x  9 root root  8192 Jan 3 17:12 lib
drwx------  2 root root 16384 Dec 30 23:28 lost+found
drwxr-xr-x  2 root root  4096 Dec 30 23:51 media
drwxr-xr-x 22 root root  4096 Jan 3 21:39 mnt
drwxr-xr-x  2 root root  4096 Dec 30 23:51 opt
dr-xr-xr-x 61 root root     0 Jan 1 1970 proc
drwx------  3 root root  4096 Jan 3 17:02 root
drwxr-xr-x  2 root root  4096 Jan 3 17:12 sbin
d---------  2 root root  4096 Jul 21 08:36 selinux
drwxr-xr-x  2 root root  4096 Dec 30 23:51 srv
drwxr-xr-x 12 root root     0 Jan 1 1970 sys
drwxr-xr-x  3 root root  4096 Jan 3 22:17 tmp
drwxr-xr-x 11 root root  4096 Jan 3 22:18 usr
drwxr-xr-x 14 root root  4096 Jan 3 22:19 var
lrwxrwxrwx  1 root root    30 Dec 31 17:57 vmlinuz -> boot/vmlinuz-2.6.32-5-kirkwood
root@canopus:/mnt#
Now, we're done with that. Let's show one of LVM's best features: adding space to an almost-full filesystem.
Growing /usr
/usr was 94% full the last time we ran df on it. Let's make some more room. This is a two-step process: first grow the block device (an LVM logical volume) that contains the filesystem, then grow the filesystem to use all the space now available on it.
Expanding a logical volume
/usr is held in /dev/vg00/usr (also visible as /dev/mapper/vg00-usr). We can add one logical extent's worth of space to it (64Mbytes in this volume group; the extent size of a volume group is shown in the output of vgdisplay) like so:
root@canopus:~# df /usr
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-usr    193504    172504     11172  94% /usr
root@canopus:~# lvdisplay /dev/vg00/usr
  --- Logical volume ---
  LV Name                /dev/vg00/usr
  VG Name                vg00
  LV UUID                Zw41zR-edTF-Il0F-1re1-wqzr-M8YW-VGOzgf
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                192.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3

root@canopus:~# lvextend --verbose --extents +1 /dev/vg00/usr
    Finding volume group vg00
    Archiving volume group "vg00" metadata (seqno 12).
  Extending logical volume usr to 256.00 MiB
    Found volume group "vg00"
    Found volume group "vg00"
    Loading vg00-usr table (254:3)
    Suspending vg00-usr (254:3) with device flush
    Found volume group "vg00"
    Resuming vg00-usr (254:3)
    Creating volume group backup "/etc/lvm/backup/vg00" (seqno 13).
  Logical volume usr successfully resized
root@canopus:~# lvdisplay /dev/vg00/usr
  --- Logical volume ---
  LV Name                /dev/vg00/usr
  VG Name                vg00
  LV UUID                Zw41zR-edTF-Il0F-1re1-wqzr-M8YW-VGOzgf
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                256.00 MiB
  Current LE             4
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3

root@canopus:~#
Only the logical volume is bigger now. The filesystem does not know it has more free space.
Growing an ext[234] filesystem
Now that the block device is bigger, we need to tell the filesystem about it. For the ext[234] family of filesystems, this is done with the resize2fs tool. ext2, ext3, and ext4 filesystems can be grown while they are mounted; shrinking them requires that they be unmounted. Since growing is needed more often than shrinking, this is not too annoying. Here's the process for growing /usr to fill our newly expanded /dev/vg00/usr block device (and some df love to show what it has done):
root@canopus:~# df /usr
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-usr    193504    172504     11172  94% /usr
root@canopus:~# /sbin/resize2fs /dev/vg00/usr
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg00/usr is mounted on /usr; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vg00/usr to 65536 (4k) blocks.
The filesystem on /dev/vg00/usr is now 65536 blocks long.

root@canopus:~# df /usr
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg00-usr    259040    172504     76708  70% /usr
root@canopus:~#
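As an aside, newer LVM releases can do both steps at once: lvextend has a --resizefs option that runs the filesystem resize for you after growing the volume. A hedged sketch (I did it in two steps above; check whether your LVM version supports this before relying on it):

lvextend --extents +1 --resizefs /dev/vg00/usr   # grow the LV and the ext3 filesystem in one go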